May 10 00:46:55.976606 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 00:46:55.976637 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:55.976671 kernel: BIOS-provided physical RAM map: May 10 00:46:55.983695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 10 00:46:55.983724 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 10 00:46:55.983736 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 10 00:46:55.983752 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 10 00:46:55.983765 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 10 00:46:55.983783 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 10 00:46:55.983795 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 10 00:46:55.983807 kernel: NX (Execute Disable) protection: active May 10 00:46:55.983818 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable May 10 00:46:55.983830 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable May 10 00:46:55.983840 kernel: extended physical RAM map: May 10 00:46:55.983856 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 10 00:46:55.983867 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable May 10 00:46:55.983878 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable May 10 00:46:55.983889 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable May 10 00:46:55.983900 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 10 00:46:55.983911 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 10 00:46:55.983924 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 10 00:46:55.983937 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 10 00:46:55.983950 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 10 00:46:55.983962 kernel: efi: EFI v2.70 by EDK II May 10 00:46:55.983977 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 May 10 00:46:55.983989 kernel: SMBIOS 2.7 present. 
May 10 00:46:55.984000 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 10 00:46:55.984012 kernel: Hypervisor detected: KVM May 10 00:46:55.984023 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 00:46:55.984034 kernel: kvm-clock: cpu 0, msr 3d196001, primary cpu clock May 10 00:46:55.984046 kernel: kvm-clock: using sched offset of 4406483244 cycles May 10 00:46:55.984058 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 00:46:55.984070 kernel: tsc: Detected 2499.996 MHz processor May 10 00:46:55.984082 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 00:46:55.984094 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 00:46:55.984108 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 10 00:46:55.984120 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 00:46:55.984132 kernel: Using GB pages for direct mapping May 10 00:46:55.984144 kernel: Secure boot disabled May 10 00:46:55.984157 kernel: ACPI: Early table checksum verification disabled May 10 00:46:55.984175 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 10 00:46:55.984188 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 10 00:46:55.984203 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 10 00:46:55.984216 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 10 00:46:55.984229 kernel: ACPI: FACS 0x00000000789D0000 000040 May 10 00:46:55.984253 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 10 00:46:55.984266 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 10 00:46:55.984279 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 10 00:46:55.984292 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 10 00:46:55.984307 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 10 00:46:55.984320 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 10 00:46:55.984333 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 10 00:46:55.984346 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 10 00:46:55.984359 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 10 00:46:55.984372 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 10 00:46:55.984385 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 10 00:46:55.984399 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 10 00:46:55.984412 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 10 00:46:55.984429 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 10 00:46:55.984442 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 10 00:46:55.984455 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 10 00:46:55.984468 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 10 00:46:55.984482 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 10 00:46:55.984495 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 10 
00:46:55.984509 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 10 00:46:55.984522 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 10 00:46:55.984536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 10 00:46:55.984552 kernel: NUMA: Initialized distance table, cnt=1 May 10 00:46:55.984566 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 10 00:46:55.984580 kernel: Zone ranges: May 10 00:46:55.984593 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 00:46:55.984607 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 10 00:46:55.984620 kernel: Normal empty May 10 00:46:55.984634 kernel: Movable zone start for each node May 10 00:46:55.984647 kernel: Early memory node ranges May 10 00:46:55.984672 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 10 00:46:55.984690 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 10 00:46:55.984703 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 10 00:46:55.984718 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 10 00:46:55.984732 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 00:46:55.984745 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 10 00:46:55.984760 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 10 00:46:55.984774 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 10 00:46:55.984788 kernel: ACPI: PM-Timer IO Port: 0xb008 May 10 00:46:55.984801 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 00:46:55.984818 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 10 00:46:55.984832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 00:46:55.984846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 00:46:55.984860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 00:46:55.984874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 00:46:55.984888 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 00:46:55.984902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 10 00:46:55.984916 kernel: TSC deadline timer available May 10 00:46:55.984930 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 10 00:46:55.984944 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 10 00:46:55.984961 kernel: Booting paravirtualized kernel on KVM May 10 00:46:55.984976 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 00:46:55.984990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 10 00:46:55.985004 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 10 00:46:55.985017 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 10 00:46:55.985032 kernel: pcpu-alloc: [0] 0 1 May 10 00:46:55.985045 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 May 10 00:46:55.985059 kernel: kvm-guest: PV spinlocks enabled May 10 00:46:55.985073 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 00:46:55.985090 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 May 10 00:46:55.985105 kernel: Policy zone: DMA32 May 10 00:46:55.985122 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:55.985137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:46:55.985152 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:46:55.985166 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 10 00:46:55.985179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:46:55.985197 kernel: Memory: 1876640K/2037804K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 160904K reserved, 0K cma-reserved) May 10 00:46:55.985211 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 10 00:46:55.985225 kernel: Kernel/User page tables isolation: enabled May 10 00:46:55.985239 kernel: ftrace: allocating 34584 entries in 136 pages May 10 00:46:55.985254 kernel: ftrace: allocated 136 pages with 2 groups May 10 00:46:55.985268 kernel: rcu: Hierarchical RCU implementation. May 10 00:46:55.985284 kernel: rcu: RCU event tracing is enabled. May 10 00:46:55.985311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 10 00:46:55.985327 kernel: Rude variant of Tasks RCU enabled. May 10 00:46:55.985342 kernel: Tracing variant of Tasks RCU enabled. May 10 00:46:55.985358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 10 00:46:55.985373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 10 00:46:55.985392 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 10 00:46:55.985407 kernel: random: crng init done May 10 00:46:55.985420 kernel: Console: colour dummy device 80x25 May 10 00:46:55.985434 kernel: printk: console [tty0] enabled May 10 00:46:55.985448 kernel: printk: console [ttyS0] enabled May 10 00:46:55.985462 kernel: ACPI: Core revision 20210730 May 10 00:46:55.985477 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 10 00:46:55.985494 kernel: APIC: Switch to symmetric I/O mode setup May 10 00:46:55.985508 kernel: x2apic enabled May 10 00:46:55.985522 kernel: Switched APIC routing to physical x2apic. May 10 00:46:55.985537 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 10 00:46:55.985551 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499996) May 10 00:46:55.985565 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 10 00:46:55.985580 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 10 00:46:55.985597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 00:46:55.985611 kernel: Spectre V2 : Mitigation: Retpolines May 10 00:46:55.985625 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 00:46:55.985639 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 10 00:46:55.985653 kernel: RETBleed: Vulnerable May 10 00:46:55.985682 kernel: Speculative Store Bypass: Vulnerable May 10 00:46:55.985696 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 10 00:46:55.985710 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 10 00:46:55.985725 kernel: GDS: Unknown: Dependent on hypervisor status May 10 00:46:55.985739 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 00:46:55.985753 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 00:46:55.985770 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 00:46:55.985784 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 10 00:46:55.985798 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 10 00:46:55.985813 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 10 00:46:55.985827 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 10 00:46:55.985841 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 10 00:46:55.985855 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 10 00:46:55.985870 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 00:46:55.985883 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 10 00:46:55.985897 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 10 00:46:55.985912 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 10 00:46:55.985929 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 10 00:46:55.985943 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 10 00:46:55.985958 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 10 00:46:55.985972 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. May 10 00:46:55.985986 kernel: Freeing SMP alternatives memory: 32K May 10 00:46:55.986001 kernel: pid_max: default: 32768 minimum: 301 May 10 00:46:55.986015 kernel: LSM: Security Framework initializing May 10 00:46:55.986030 kernel: SELinux: Initializing. May 10 00:46:55.986044 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 00:46:55.986058 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 00:46:55.986073 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 10 00:46:55.986091 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 10 00:46:55.986105 kernel: signal: max sigframe size: 3632 May 10 00:46:55.986120 kernel: rcu: Hierarchical SRCU implementation. May 10 00:46:55.986136 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 10 00:46:55.986150 kernel: smp: Bringing up secondary CPUs ... 
May 10 00:46:55.986164 kernel: x86: Booting SMP configuration: May 10 00:46:55.986177 kernel: .... node #0, CPUs: #1 May 10 00:46:55.986191 kernel: kvm-clock: cpu 1, msr 3d196041, secondary cpu clock May 10 00:46:55.986205 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 May 10 00:46:55.986224 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 10 00:46:55.986239 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 10 00:46:55.986252 kernel: smp: Brought up 1 node, 2 CPUs May 10 00:46:55.986267 kernel: smpboot: Max logical packages: 1 May 10 00:46:55.986282 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) May 10 00:46:55.986297 kernel: devtmpfs: initialized May 10 00:46:55.986310 kernel: x86/mm: Memory block size: 128MB May 10 00:46:55.986325 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 10 00:46:55.986339 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:46:55.986357 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 10 00:46:55.986371 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:46:55.986385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:46:55.986399 kernel: audit: initializing netlink subsys (disabled) May 10 00:46:55.986413 kernel: audit: type=2000 audit(1746838015.747:1): state=initialized audit_enabled=0 res=1 May 10 00:46:55.986427 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:46:55.986440 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 00:46:55.986453 kernel: cpuidle: using governor menu May 10 00:46:55.986467 kernel: ACPI: bus type PCI registered May 10 00:46:55.986485 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 00:46:55.986499 kernel: dca service started, version 1.12.1 May 10 00:46:55.986514 kernel: PCI: Using configuration type 1 for base access May 10 00:46:55.986528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 10 00:46:55.986543 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:46:55.986558 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:46:55.986573 kernel: ACPI: Added _OSI(Module Device) May 10 00:46:55.986586 kernel: ACPI: Added _OSI(Processor Device) May 10 00:46:55.986601 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:46:55.986618 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:46:55.986633 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 00:46:55.986647 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 00:46:55.986679 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 00:46:55.986695 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 10 00:46:55.986708 kernel: ACPI: Interpreter enabled May 10 00:46:55.986722 kernel: ACPI: PM: (supports S0 S5) May 10 00:46:55.986738 kernel: ACPI: Using IOAPIC for interrupt routing May 10 00:46:55.986753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 00:46:55.986771 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 10 00:46:55.986784 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:46:55.986994 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 10 00:46:55.987134 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. May 10 00:46:55.987153 kernel: acpiphp: Slot [3] registered May 10 00:46:55.987168 kernel: acpiphp: Slot [4] registered May 10 00:46:55.987180 kernel: acpiphp: Slot [5] registered May 10 00:46:55.987197 kernel: acpiphp: Slot [6] registered May 10 00:46:55.987211 kernel: acpiphp: Slot [7] registered May 10 00:46:55.987223 kernel: acpiphp: Slot [8] registered May 10 00:46:55.987235 kernel: acpiphp: Slot [9] registered May 10 00:46:55.987249 kernel: acpiphp: Slot [10] registered May 10 00:46:55.987263 kernel: acpiphp: Slot [11] registered May 10 00:46:55.987278 kernel: acpiphp: Slot [12] registered May 10 00:46:55.987293 kernel: acpiphp: Slot [13] registered May 10 00:46:55.987308 kernel: acpiphp: Slot [14] registered May 10 00:46:55.987322 kernel: acpiphp: Slot [15] registered May 10 00:46:55.987339 kernel: acpiphp: Slot [16] registered May 10 00:46:55.987353 kernel: acpiphp: Slot [17] registered May 10 00:46:55.987368 kernel: acpiphp: Slot [18] registered May 10 00:46:55.987383 kernel: acpiphp: Slot [19] registered May 10 00:46:55.987398 kernel: acpiphp: Slot [20] registered May 10 00:46:55.987411 kernel: acpiphp: Slot [21] registered May 10 00:46:55.987424 kernel: acpiphp: Slot [22] registered May 10 00:46:55.987437 kernel: acpiphp: Slot [23] registered May 10 00:46:55.987452 kernel: acpiphp: Slot [24] registered May 10 00:46:55.987470 kernel: acpiphp: Slot [25] registered May 10 00:46:55.987485 kernel: acpiphp: Slot [26] registered May 10 00:46:55.987499 kernel: acpiphp: Slot [27] registered May 10 00:46:55.987513 kernel: acpiphp: Slot [28] registered May 10 00:46:55.987525 kernel: acpiphp: Slot [29] registered May 10 00:46:55.987537 kernel: acpiphp: Slot [30] registered May 10 00:46:55.987549 kernel: acpiphp: Slot [31] registered May 10 00:46:55.987561 kernel: PCI host bridge to bus 0000:00 May 10 00:46:55.987717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 00:46:55.987840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 00:46:55.987955 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] May 10 00:46:55.988075 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 10 00:46:55.988195 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 10 00:46:55.989781 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:46:55.989951 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 10 00:46:55.990097 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 10 00:46:55.990243 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 10 00:46:55.990374 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 10 00:46:55.990504 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 10 00:46:55.990633 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 10 00:46:55.990786 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 10 00:46:55.990917 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 10 00:46:55.991052 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 10 00:46:55.991180 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 10 00:46:55.991316 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 10 00:46:55.991446 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 10 00:46:55.991576 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 10 00:46:55.991718 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 10 00:46:55.991850 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 00:46:55.991998 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 10 00:46:55.992130 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 10 00:46:55.992277 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 10 00:46:55.992408 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 10 00:46:55.992426 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 00:46:55.992441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 00:46:55.992455 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 00:46:55.992473 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 00:46:55.992488 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 10 00:46:55.992503 kernel: iommu: Default domain type: Translated May 10 00:46:55.992517 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 00:46:55.992643 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 10 00:46:56.002962 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 00:46:56.003200 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 10 00:46:56.003223 kernel: vgaarb: loaded May 10 00:46:56.003239 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 00:46:56.003265 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 00:46:56.003280 kernel: PTP clock support registered May 10 00:46:56.003295 kernel: Registered efivars operations May 10 00:46:56.003310 kernel: PCI: Using ACPI for IRQ routing May 10 00:46:56.003325 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 00:46:56.003341 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] May 10 00:46:56.003356 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 10 00:46:56.003371 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 10 00:46:56.003385 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 10 00:46:56.003403 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 10 00:46:56.003418 kernel: clocksource: Switched to clocksource kvm-clock May 10 00:46:56.003433 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:46:56.003449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:46:56.003464 kernel: pnp: PnP ACPI init May 10 00:46:56.003480 kernel: pnp: PnP ACPI: found 5 devices May 10 00:46:56.003495 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 00:46:56.003511 kernel: NET: Registered PF_INET protocol family May 10 00:46:56.003527 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:46:56.003545 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 10 00:46:56.003561 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:46:56.003577 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 10 00:46:56.003592 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 10 00:46:56.003607 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 10 00:46:56.003623 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 00:46:56.003638 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 00:46:56.003653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:46:56.003688 kernel: NET: Registered PF_XDP protocol family May 10 00:46:56.003825 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 00:46:56.003938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 00:46:56.004049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 00:46:56.004155 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 10 00:46:56.004268 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 10 00:46:56.004391 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 10 00:46:56.004508 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 10 00:46:56.004530 kernel: PCI: CLS 0 bytes, default 64 May 10 00:46:56.004544 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 00:46:56.004558 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 10 00:46:56.004570 kernel: clocksource: Switched to clocksource tsc May 10 00:46:56.004583 kernel: Initialise system trusted keyrings May 10 00:46:56.004595 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 10 00:46:56.004608 kernel: Key type asymmetric registered May 10 00:46:56.004621 kernel: Asymmetric key parser 'x509' registered May 10 00:46:56.004634 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 249) May 10 00:46:56.004650 kernel: io scheduler mq-deadline registered May 10 00:46:56.004674 kernel: io scheduler kyber registered May 10 00:46:56.004687 kernel: io scheduler bfq registered May 10 00:46:56.004700 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:46:56.004712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:46:56.004724 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:46:56.004738 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:46:56.004750 kernel: i8042: Warning: Keylock active May 10 00:46:56.004763 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:46:56.004778 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:46:56.004906 kernel: rtc_cmos 00:00: RTC can wake from S4 May 10 00:46:56.005013 kernel: rtc_cmos 00:00: registered as rtc0 May 10 00:46:56.005121 kernel: rtc_cmos 00:00: setting system clock to 2025-05-10T00:46:55 UTC (1746838015) May 10 00:46:56.005226 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 10 00:46:56.005242 kernel: intel_pstate: CPU model not supported May 10 00:46:56.005256 kernel: efifb: probing for efifb May 10 00:46:56.005270 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 10 00:46:56.005287 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 10 00:46:56.005301 kernel: efifb: scrolling: redraw May 10 00:46:56.005349 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 10 00:46:56.005362 kernel: Console: switching to colour frame buffer device 100x37 May 10 00:46:56.005375 kernel: fb0: EFI VGA frame buffer device May 10 00:46:56.005388 kernel: pstore: Registered efi as persistent store backend May 10 00:46:56.005450 kernel: NET: Registered PF_INET6 protocol family May 10 00:46:56.005467 kernel: Segment Routing with IPv6 May 10 00:46:56.005505 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:46:56.005522 kernel: NET: Registered PF_PACKET protocol family May 10 00:46:56.005537 kernel: Key type dns_resolver registered May 10 00:46:56.005550 kernel: IPI shorthand broadcast: enabled May 10 00:46:56.005586 kernel: sched_clock: Marking stable (345003070, 130255053)->(560468091, -85209968) May 10 00:46:56.005600 kernel: registered taskstats version 1 May 10 00:46:56.005616 kernel: Loading compiled-in X.509 certificates May 10 00:46:56.009743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:46:56.009788 kernel: Key type .fscrypt registered May 10 00:46:56.009804 kernel: Key type fscrypt-provisioning registered May 10 00:46:56.009828 kernel: pstore: Using crash dump compression: deflate May 10 00:46:56.009844 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 10 00:46:56.009860 kernel: ima: Allocated hash algorithm: sha1 May 10 00:46:56.009876 kernel: ima: No architecture policies found May 10 00:46:56.009892 kernel: clk: Disabling unused clocks May 10 00:46:56.009907 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 00:46:56.009923 kernel: Write protecting the kernel read-only data: 28672k May 10 00:46:56.009939 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:46:56.009955 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:46:56.009973 kernel: Run /init as init process May 10 00:46:56.009988 kernel: with arguments: May 10 00:46:56.010004 kernel: /init May 10 00:46:56.010019 kernel: with environment: May 10 00:46:56.010035 kernel: HOME=/ May 10 00:46:56.010050 kernel: TERM=linux May 10 00:46:56.010066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:46:56.010087 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:46:56.010108 systemd[1]: Detected virtualization amazon. May 10 00:46:56.010125 systemd[1]: Detected architecture x86-64. May 10 00:46:56.010140 systemd[1]: Running in initrd. May 10 00:46:56.010156 systemd[1]: No hostname configured, using default hostname. May 10 00:46:56.010172 systemd[1]: Hostname set to . May 10 00:46:56.010188 systemd[1]: Initializing machine ID from VM UUID. May 10 00:46:56.010203 systemd[1]: Queued start job for default target initrd.target. May 10 00:46:56.010223 systemd[1]: Started systemd-ask-password-console.path. May 10 00:46:56.010241 systemd[1]: Reached target cryptsetup.target. May 10 00:46:56.010257 systemd[1]: Reached target paths.target. May 10 00:46:56.010272 systemd[1]: Reached target slices.target. May 10 00:46:56.010288 systemd[1]: Reached target swap.target. May 10 00:46:56.010303 systemd[1]: Reached target timers.target. May 10 00:46:56.010323 systemd[1]: Listening on iscsid.socket. May 10 00:46:56.010339 systemd[1]: Listening on iscsiuio.socket. May 10 00:46:56.010355 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:46:56.010371 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:46:56.010387 systemd[1]: Listening on systemd-journald.socket. May 10 00:46:56.010403 systemd[1]: Listening on systemd-networkd.socket. May 10 00:46:56.010418 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:46:56.010434 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:46:56.010453 systemd[1]: Reached target sockets.target. May 10 00:46:56.010469 systemd[1]: Starting kmod-static-nodes.service... May 10 00:46:56.010485 systemd[1]: Finished network-cleanup.service. May 10 00:46:56.010501 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:46:56.010517 systemd[1]: Starting systemd-journald.service... May 10 00:46:56.010532 systemd[1]: Starting systemd-modules-load.service... May 10 00:46:56.010548 systemd[1]: Starting systemd-resolved.service... May 10 00:46:56.010564 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:46:56.010580 systemd[1]: Finished kmod-static-nodes.service. 
May 10 00:46:56.010599 kernel: audit: type=1130 audit(1746838015.994:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.010615 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:46:56.010631 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:46:56.010664 systemd-journald[185]: Journal started May 10 00:46:56.010754 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2e23b9fb308774bc744cb0bc23a8f5) is 4.8M, max 38.3M, 33.5M free. May 10 00:46:55.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.011516 systemd-modules-load[186]: Inserted module 'overlay' May 10 00:46:56.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.028679 kernel: audit: type=1130 audit(1746838016.020:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.028724 systemd[1]: Started systemd-journald.service. May 10 00:46:56.037696 systemd-resolved[187]: Positive Trust Anchors: May 10 00:46:56.039410 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:46:56.042244 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:46:56.072039 kernel: audit: type=1130 audit(1746838016.042:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.072088 kernel: audit: type=1130 audit(1746838016.062:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.044559 systemd[1]: Finished systemd-vconsole-setup.service. May 10 00:46:56.060101 systemd-resolved[187]: Defaulting to hostname 'linux'. May 10 00:46:56.093414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 10 00:46:56.093450 kernel: audit: type=1130 audit(1746838016.078:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.093471 kernel: Bridge firewalling registered May 10 00:46:56.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.064248 systemd[1]: Started systemd-resolved.service. May 10 00:46:56.080523 systemd[1]: Reached target nss-lookup.target. May 10 00:46:56.088563 systemd-modules-load[186]: Inserted module 'br_netfilter' May 10 00:46:56.091891 systemd[1]: Starting dracut-cmdline-ask.service... May 10 00:46:56.099763 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:46:56.116802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:46:56.127378 kernel: audit: type=1130 audit(1746838016.116:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.139602 kernel: SCSI subsystem initialized May 10 00:46:56.139682 kernel: audit: type=1130 audit(1746838016.131:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.131743 systemd[1]: Finished dracut-cmdline-ask.service. May 10 00:46:56.140991 systemd[1]: Starting dracut-cmdline.service... May 10 00:46:56.156953 dracut-cmdline[202]: dracut-dracut-053 May 10 00:46:56.166042 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:46:56.166077 kernel: device-mapper: uevent: version 1.0.3 May 10 00:46:56.166097 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 00:46:56.166115 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:56.185234 kernel: audit: type=1130 audit(1746838016.180:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:56.179434 systemd-modules-load[186]: Inserted module 'dm_multipath' May 10 00:46:56.180353 systemd[1]: Finished systemd-modules-load.service. May 10 00:46:56.182996 systemd[1]: Starting systemd-sysctl.service... May 10 00:46:56.205958 systemd[1]: Finished systemd-sysctl.service. May 10 00:46:56.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.217721 kernel: audit: type=1130 audit(1746838016.208:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.244684 kernel: Loading iSCSI transport class v2.0-870. May 10 00:46:56.263682 kernel: iscsi: registered transport (tcp) May 10 00:46:56.289687 kernel: iscsi: registered transport (qla4xxx) May 10 00:46:56.289759 kernel: QLogic iSCSI HBA Driver May 10 00:46:56.321723 systemd[1]: Finished dracut-cmdline.service. May 10 00:46:56.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.323770 systemd[1]: Starting dracut-pre-udev.service... May 10 00:46:56.375711 kernel: raid6: avx512x4 gen() 15477 MB/s May 10 00:46:56.393701 kernel: raid6: avx512x4 xor() 8268 MB/s May 10 00:46:56.411690 kernel: raid6: avx512x2 gen() 15360 MB/s May 10 00:46:56.429685 kernel: raid6: avx512x2 xor() 24411 MB/s May 10 00:46:56.447688 kernel: raid6: avx512x1 gen() 15336 MB/s May 10 00:46:56.465704 kernel: raid6: avx512x1 xor() 21986 MB/s May 10 00:46:56.483690 kernel: raid6: avx2x4 gen() 15243 MB/s May 10 00:46:56.501686 kernel: raid6: avx2x4 xor() 7319 MB/s May 10 00:46:56.519702 kernel: raid6: avx2x2 gen() 15194 MB/s May 10 00:46:56.537697 kernel: raid6: avx2x2 xor() 18232 MB/s May 10 00:46:56.555684 kernel: raid6: avx2x1 gen() 11550 MB/s May 10 00:46:56.573681 kernel: raid6: avx2x1 xor() 15810 MB/s May 10 00:46:56.591681 kernel: raid6: sse2x4 gen() 9533 MB/s May 10 00:46:56.609681 kernel: raid6: sse2x4 xor() 6151 MB/s May 10 00:46:56.627680 kernel: raid6: sse2x2 gen() 10541 MB/s May 10 00:46:56.645681 kernel: raid6: sse2x2 xor() 6252 MB/s May 10 00:46:56.663682 kernel: raid6: sse2x1 gen() 9479 MB/s May 10 00:46:56.682098 kernel: raid6: sse2x1 xor() 4872 MB/s May 10 00:46:56.682150 kernel: raid6: using algorithm avx512x4 gen() 15477 MB/s May 10 00:46:56.682169 kernel: raid6: .... xor() 8268 MB/s, rmw enabled May 10 00:46:56.683303 kernel: raid6: using avx512x2 recovery algorithm May 10 00:46:56.697687 kernel: xor: automatically using best checksumming function avx May 10 00:46:56.798688 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 00:46:56.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.806000 audit: BPF prog-id=7 op=LOAD May 10 00:46:56.807000 audit: BPF prog-id=8 op=LOAD May 10 00:46:56.807786 systemd[1]: Finished dracut-pre-udev.service. May 10 00:46:56.809173 systemd[1]: Starting systemd-udevd.service... May 10 00:46:56.822775 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 10 00:46:56.828169 systemd[1]: Started systemd-udevd.service. 
May 10 00:46:56.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.830025 systemd[1]: Starting dracut-pre-trigger.service... May 10 00:46:56.849860 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation May 10 00:46:56.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.881498 systemd[1]: Finished dracut-pre-trigger.service. May 10 00:46:56.882740 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:46:56.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:56.926292 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:46:56.987678 kernel: cryptd: max_cpu_qlen set to 1000 May 10 00:46:57.019559 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 10 00:46:57.043951 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 10 00:46:57.044076 kernel: AVX2 version of gcm_enc/dec engaged. May 10 00:46:57.044094 kernel: AES CTR mode by8 optimization enabled May 10 00:46:57.044106 kernel: nvme nvme0: pci function 0000:00:04.0 May 10 00:46:57.044214 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 10 00:46:57.044244 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 10 00:46:57.044340 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:1f:85:59:2e:8f May 10 00:46:57.049717 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 10 00:46:57.061612 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:46:57.061690 kernel: GPT:9289727 != 16777215 May 10 00:46:57.061703 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:46:57.061714 kernel: GPT:9289727 != 16777215 May 10 00:46:57.061733 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:46:57.062965 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 10 00:46:57.075014 (udev-worker)[437]: Network interface NamePolicy= disabled on kernel command line. May 10 00:46:57.134685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (436) May 10 00:46:57.164619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 00:46:57.172771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:46:57.195610 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 00:46:57.197799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 00:46:57.203383 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 00:46:57.205288 systemd[1]: Starting disk-uuid.service... May 10 00:46:57.212500 disk-uuid[592]: Primary Header is updated. May 10 00:46:57.212500 disk-uuid[592]: Secondary Entries is updated. May 10 00:46:57.212500 disk-uuid[592]: Secondary Header is updated. 
May 10 00:46:57.221701 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 10 00:46:57.226684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 10 00:46:57.232686 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 10 00:46:58.233754 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 10 00:46:58.233813 disk-uuid[593]: The operation has completed successfully. May 10 00:46:58.337383 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:46:58.337491 systemd[1]: Finished disk-uuid.service. May 10 00:46:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.339017 systemd[1]: Starting verity-setup.service... May 10 00:46:58.357006 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 10 00:46:58.442393 systemd[1]: Found device dev-mapper-usr.device. May 10 00:46:58.445195 systemd[1]: Mounting sysusr-usr.mount... May 10 00:46:58.450153 systemd[1]: Finished verity-setup.service. May 10 00:46:58.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.541675 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:46:58.542421 systemd[1]: Mounted sysusr-usr.mount. May 10 00:46:58.543284 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 00:46:58.544005 systemd[1]: Starting ignition-setup.service... May 10 00:46:58.548061 systemd[1]: Starting parse-ip-for-networkd.service... May 10 00:46:58.568532 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:58.568589 kernel: BTRFS info (device nvme0n1p6): using free space tree May 10 00:46:58.568601 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 10 00:46:58.592677 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 10 00:46:58.603071 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:46:58.615116 systemd[1]: Finished ignition-setup.service. May 10 00:46:58.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.616606 systemd[1]: Starting ignition-fetch-offline.service... May 10 00:46:58.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.633000 audit: BPF prog-id=9 op=LOAD May 10 00:46:58.633737 systemd[1]: Finished parse-ip-for-networkd.service. May 10 00:46:58.638025 systemd[1]: Starting systemd-networkd.service... May 10 00:46:58.659773 systemd-networkd[1107]: lo: Link UP May 10 00:46:58.659784 systemd-networkd[1107]: lo: Gained carrier May 10 00:46:58.660492 systemd-networkd[1107]: Enumeration completed May 10 00:46:58.660603 systemd[1]: Started systemd-networkd.service. 
May 10 00:46:58.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.660895 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:46:58.664118 systemd[1]: Reached target network.target. May 10 00:46:58.664305 systemd-networkd[1107]: eth0: Link UP May 10 00:46:58.664310 systemd-networkd[1107]: eth0: Gained carrier May 10 00:46:58.666215 systemd[1]: Starting iscsiuio.service... May 10 00:46:58.674832 systemd[1]: Started iscsiuio.service. May 10 00:46:58.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.678505 systemd[1]: Starting iscsid.service... May 10 00:46:58.682803 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.20.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 10 00:46:58.685101 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 00:46:58.685101 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 10 00:46:58.685101 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 10 00:46:58.685101 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 00:46:58.685101 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 00:46:58.685101 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 00:46:58.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.686504 systemd[1]: Started iscsid.service. May 10 00:46:58.689702 systemd[1]: Starting dracut-initqueue.service... May 10 00:46:58.704394 systemd[1]: Finished dracut-initqueue.service. May 10 00:46:58.705626 systemd[1]: Reached target remote-fs-pre.target. May 10 00:46:58.708058 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:46:58.709599 systemd[1]: Reached target remote-fs.target. May 10 00:46:58.712103 systemd[1]: Starting dracut-pre-mount.service... May 10 00:46:58.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.723691 systemd[1]: Finished dracut-pre-mount.service. 
May 10 00:46:59.179731 ignition[1092]: Ignition 2.14.0 May 10 00:46:59.179747 ignition[1092]: Stage: fetch-offline May 10 00:46:59.179862 ignition[1092]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.179892 ignition[1092]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.196809 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.197155 ignition[1092]: Ignition finished successfully May 10 00:46:59.199207 systemd[1]: Finished ignition-fetch-offline.service. May 10 00:46:59.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.200840 systemd[1]: Starting ignition-fetch.service... May 10 00:46:59.208999 ignition[1131]: Ignition 2.14.0 May 10 00:46:59.209012 ignition[1131]: Stage: fetch May 10 00:46:59.209221 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.209255 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.217380 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.218758 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.225176 ignition[1131]: INFO : PUT result: OK May 10 00:46:59.226632 ignition[1131]: DEBUG : parsed url from cmdline: "" May 10 00:46:59.227339 ignition[1131]: INFO : no config URL provided May 10 00:46:59.227339 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign" May 10 00:46:59.227339 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign" May 10 00:46:59.227339 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.227339 ignition[1131]: INFO : PUT result: OK May 10 00:46:59.227339 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 10 00:46:59.231335 ignition[1131]: INFO : GET result: OK May 10 00:46:59.231335 ignition[1131]: DEBUG : parsing config with SHA512: d27cf995126ff4429a71989f854122a2a7b228a8f47aca18a226563c4cf47620fc827ea6c186e00fe9ab04e361ab512fdde17c4bef48fde8f3945513683df515 May 10 00:46:59.232454 unknown[1131]: fetched base config from "system" May 10 00:46:59.232936 ignition[1131]: fetch: fetch complete May 10 00:46:59.232461 unknown[1131]: fetched base config from "system" May 10 00:46:59.232941 ignition[1131]: fetch: fetch passed May 10 00:46:59.232467 unknown[1131]: fetched user config from "aws" May 10 00:46:59.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.232988 ignition[1131]: Ignition finished successfully May 10 00:46:59.234915 systemd[1]: Finished ignition-fetch.service. May 10 00:46:59.237072 systemd[1]: Starting ignition-kargs.service... 
May 10 00:46:59.246916 ignition[1137]: Ignition 2.14.0 May 10 00:46:59.246926 ignition[1137]: Stage: kargs May 10 00:46:59.247070 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.247092 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.253431 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.254127 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.254760 ignition[1137]: INFO : PUT result: OK May 10 00:46:59.256496 ignition[1137]: kargs: kargs passed May 10 00:46:59.256548 ignition[1137]: Ignition finished successfully May 10 00:46:59.257979 systemd[1]: Finished ignition-kargs.service. May 10 00:46:59.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.259383 systemd[1]: Starting ignition-disks.service... May 10 00:46:59.267630 ignition[1143]: Ignition 2.14.0 May 10 00:46:59.267642 ignition[1143]: Stage: disks May 10 00:46:59.267805 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.267824 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.272859 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.279914 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.282795 ignition[1143]: INFO : PUT result: OK May 10 00:46:59.296015 ignition[1143]: disks: disks passed May 10 00:46:59.296530 ignition[1143]: Ignition finished successfully May 10 00:46:59.302368 systemd[1]: Finished ignition-disks.service. May 10 00:46:59.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.305916 systemd[1]: Reached target initrd-root-device.target. May 10 00:46:59.309567 systemd[1]: Reached target local-fs-pre.target. May 10 00:46:59.310812 systemd[1]: Reached target local-fs.target. May 10 00:46:59.312724 systemd[1]: Reached target sysinit.target. May 10 00:46:59.313907 systemd[1]: Reached target basic.target. May 10 00:46:59.316907 systemd[1]: Starting systemd-fsck-root.service... May 10 00:46:59.364914 systemd-fsck[1151]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 10 00:46:59.368302 systemd[1]: Finished systemd-fsck-root.service. May 10 00:46:59.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.369759 systemd[1]: Mounting sysroot.mount... May 10 00:46:59.393679 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:46:59.394745 systemd[1]: Mounted sysroot.mount. May 10 00:46:59.396906 systemd[1]: Reached target initrd-root-fs.target. May 10 00:46:59.405470 systemd[1]: Mounting sysroot-usr.mount... May 10 00:46:59.408941 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
May 10 00:46:59.409009 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:46:59.409048 systemd[1]: Reached target ignition-diskful.target. May 10 00:46:59.412772 systemd[1]: Mounted sysroot-usr.mount. May 10 00:46:59.417584 systemd[1]: Starting initrd-setup-root.service... May 10 00:46:59.430697 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:46:59.459613 initrd-setup-root[1180]: cut: /sysroot/etc/group: No such file or directory May 10 00:46:59.463772 initrd-setup-root[1188]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:46:59.468441 initrd-setup-root[1196]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:46:59.475063 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:46:59.498702 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1204) May 10 00:46:59.504003 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:59.504066 kernel: BTRFS info (device nvme0n1p6): using free space tree May 10 00:46:59.504080 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 10 00:46:59.526962 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 10 00:46:59.535867 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:46:59.639153 systemd[1]: Finished initrd-setup-root.service. May 10 00:46:59.641084 systemd[1]: Starting ignition-mount.service... May 10 00:46:59.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.646395 systemd[1]: Starting sysroot-boot.service... May 10 00:46:59.651810 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 10 00:46:59.651936 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 10 00:46:59.666035 ignition[1233]: INFO : Ignition 2.14.0 May 10 00:46:59.666035 ignition[1233]: INFO : Stage: mount May 10 00:46:59.669097 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.669097 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.678433 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.679298 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.681330 ignition[1233]: INFO : PUT result: OK May 10 00:46:59.684463 ignition[1233]: INFO : mount: mount passed May 10 00:46:59.685177 ignition[1233]: INFO : Ignition finished successfully May 10 00:46:59.685907 systemd[1]: Finished sysroot-boot.service. May 10 00:46:59.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.686779 systemd[1]: Finished ignition-mount.service. May 10 00:46:59.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:59.688847 systemd[1]: Starting ignition-files.service... May 10 00:46:59.696489 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
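The BTRFS kernel lines above and the /dev/disk/by-label/OEM mounts in the files stage below all refer to the same OEM partition (nvme0n1p6). Outside of Ignition it can be located and mounted directly by label for inspection; a small sketch, where /mnt/oem is an arbitrary example mount point:

    # Identify the OEM partition by its filesystem label, then mount it read-only
    blkid -L OEM                       # prints the device node, e.g. /dev/nvme0n1p6
    mount -o ro /dev/disk/by-label/OEM /mnt/oem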
May 10 00:46:59.718690 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243) May 10 00:46:59.722020 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:59.722087 kernel: BTRFS info (device nvme0n1p6): using free space tree May 10 00:46:59.722107 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 10 00:46:59.758694 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 10 00:46:59.762318 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:46:59.772738 ignition[1262]: INFO : Ignition 2.14.0 May 10 00:46:59.772738 ignition[1262]: INFO : Stage: files May 10 00:46:59.774255 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:59.774255 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:46:59.780962 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:46:59.781703 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:46:59.782378 ignition[1262]: INFO : PUT result: OK May 10 00:46:59.785482 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping May 10 00:46:59.792932 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:46:59.792932 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:46:59.806182 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:46:59.807710 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:46:59.809190 unknown[1262]: wrote ssh authorized keys file for user: core May 10 00:46:59.810176 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:46:59.818776 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" May 10 00:46:59.820221 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:59.823494 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem505827401" May 10 00:46:59.823494 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem505827401": device or resource busy May 10 00:46:59.823494 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem505827401", trying btrfs: device or resource busy May 10 00:46:59.823494 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem505827401" May 10 00:46:59.829410 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem505827401" May 10 00:46:59.838810 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem505827401" May 10 00:46:59.839720 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem505827401" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:46:59.839720 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:46:59.853350 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" May 10 00:46:59.853350 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:59.853350 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2694714776" May 10 00:46:59.853350 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2694714776": device or resource busy May 10 00:46:59.853350 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2694714776", trying btrfs: device or resource busy May 10 00:46:59.853350 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2694714776" May 10 00:46:59.853350 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2694714776" May 10 00:46:59.853350 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem2694714776" May 10 00:46:59.853350 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem2694714776" May 10 00:46:59.853350 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" May 10 00:46:59.853350 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" May 10 00:46:59.853350 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:59.853350 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2007985127" May 10 00:46:59.853350 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2007985127": device or resource busy May 10 00:46:59.853350 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2007985127", trying btrfs: device or resource busy May 10 00:46:59.853350 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2007985127" May 10 00:46:59.853350 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2007985127" May 10 00:46:59.853350 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem2007985127" May 10 00:46:59.853350 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem2007985127" May 10 00:46:59.853350 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" May 10 00:46:59.853350 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/systemd/system/nvidia.service" May 10 00:46:59.886069 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:59.886069 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3245507757" May 10 00:46:59.886069 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3245507757": device or resource busy May 10 00:46:59.886069 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3245507757", trying btrfs: device or resource busy May 10 00:46:59.886069 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3245507757" May 10 00:46:59.886069 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3245507757" May 10 00:46:59.886069 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem3245507757" May 10 00:46:59.886069 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem3245507757" May 10 00:46:59.886069 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 10 00:46:59.886069 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:46:59.886069 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 10 00:47:00.325620 ignition[1262]: INFO : GET result: OK May 10 00:47:00.495795 systemd-networkd[1107]: eth0: Gained IPv6LL May 10 00:47:00.715703 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:47:00.715703 ignition[1262]: INFO : files: op(b): [started] processing unit "amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(b): op(c): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(b): op(c): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(b): [finished] processing unit "amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(d): [started] processing unit "nvidia.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(d): [finished] processing unit "nvidia.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(f): [started] setting preset to enabled for "amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(f): [finished] setting preset to enabled for "amazon-ssm-agent.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(10): [started] setting preset to enabled for "nvidia.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(10): [finished] setting preset to enabled for "nvidia.service" May 10 00:47:00.722806 ignition[1262]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:47:00.722806 ignition[1262]: INFO : 
files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:47:00.794051 kernel: kauditd_printk_skb: 26 callbacks suppressed May 10 00:47:00.794084 kernel: audit: type=1130 audit(1746838020.727:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.794105 kernel: audit: type=1130 audit(1746838020.758:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.794125 kernel: audit: type=1131 audit(1746838020.758:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.794144 kernel: audit: type=1130 audit(1746838020.770:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.794373 ignition[1262]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:47:00.794373 ignition[1262]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:47:00.794373 ignition[1262]: INFO : files: files passed May 10 00:47:00.794373 ignition[1262]: INFO : Ignition finished successfully May 10 00:47:00.726042 systemd[1]: Finished ignition-files.service. May 10 00:47:00.816677 kernel: audit: type=1130 audit(1746838020.803:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.816718 kernel: audit: type=1131 audit(1746838020.803:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:00.738294 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 00:47:00.818621 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:47:00.745066 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 00:47:00.746313 systemd[1]: Starting ignition-quench.service... May 10 00:47:00.753617 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:47:00.754382 systemd[1]: Finished ignition-quench.service. May 10 00:47:00.760250 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:47:00.771866 systemd[1]: Reached target ignition-complete.target. May 10 00:47:00.781256 systemd[1]: Starting initrd-parse-etc.service... May 10 00:47:00.803456 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:47:00.848541 kernel: audit: type=1130 audit(1746838020.838:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.803590 systemd[1]: Finished initrd-parse-etc.service. May 10 00:47:00.805170 systemd[1]: Reached target initrd-fs.target. May 10 00:47:00.817801 systemd[1]: Reached target initrd.target. May 10 00:47:00.819576 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:47:00.820942 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:47:00.838361 systemd[1]: Finished dracut-pre-pivot.service. May 10 00:47:00.841206 systemd[1]: Starting initrd-cleanup.service... May 10 00:47:00.857889 systemd[1]: Stopped target nss-lookup.target. May 10 00:47:00.869881 kernel: audit: type=1131 audit(1746838020.861:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.859496 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:47:00.860498 systemd[1]: Stopped target timers.target. May 10 00:47:00.862067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:47:00.862267 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:47:00.863808 systemd[1]: Stopped target initrd.target. May 10 00:47:00.870857 systemd[1]: Stopped target basic.target. May 10 00:47:00.872389 systemd[1]: Stopped target ignition-complete.target. May 10 00:47:00.873818 systemd[1]: Stopped target ignition-diskful.target. May 10 00:47:00.875211 systemd[1]: Stopped target initrd-root-device.target. May 10 00:47:00.876674 systemd[1]: Stopped target remote-fs.target. May 10 00:47:00.878028 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:47:00.879407 systemd[1]: Stopped target sysinit.target. May 10 00:47:00.891712 kernel: audit: type=1131 audit(1746838020.884:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 10 00:47:00.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.880857 systemd[1]: Stopped target local-fs.target. May 10 00:47:00.882227 systemd[1]: Stopped target local-fs-pre.target. May 10 00:47:00.899971 kernel: audit: type=1131 audit(1746838020.892:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.883560 systemd[1]: Stopped target swap.target. May 10 00:47:00.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.884930 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:47:00.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.885131 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:47:00.886559 systemd[1]: Stopped target cryptsetup.target. May 10 00:47:00.892731 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:47:00.892948 systemd[1]: Stopped dracut-initqueue.service. May 10 00:47:00.894399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:47:00.894601 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 00:47:00.901092 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:47:00.901293 systemd[1]: Stopped ignition-files.service. May 10 00:47:00.903953 systemd[1]: Stopping ignition-mount.service... May 10 00:47:00.916709 ignition[1300]: INFO : Ignition 2.14.0 May 10 00:47:00.916709 ignition[1300]: INFO : Stage: umount May 10 00:47:00.916709 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:47:00.916709 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 10 00:47:00.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.922261 systemd[1]: Stopping sysroot-boot.service... May 10 00:47:00.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:00.923503 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:47:00.941945 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 10 00:47:00.923806 systemd[1]: Stopped systemd-udev-trigger.service. May 10 00:47:00.944678 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 10 00:47:00.925456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:47:00.925698 systemd[1]: Stopped dracut-pre-trigger.service. May 10 00:47:00.934355 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:47:00.934506 systemd[1]: Finished initrd-cleanup.service. May 10 00:47:00.951970 ignition[1300]: INFO : PUT result: OK May 10 00:47:00.952411 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:47:00.955237 ignition[1300]: INFO : umount: umount passed May 10 00:47:00.955237 ignition[1300]: INFO : Ignition finished successfully May 10 00:47:00.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.956369 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:47:00.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.956562 systemd[1]: Stopped ignition-mount.service. May 10 00:47:00.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.957402 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:47:00.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.957463 systemd[1]: Stopped ignition-disks.service. May 10 00:47:00.958975 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:47:00.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.959031 systemd[1]: Stopped ignition-kargs.service. May 10 00:47:00.960420 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:47:00.960474 systemd[1]: Stopped ignition-fetch.service. May 10 00:47:00.962195 systemd[1]: Stopped target network.target. May 10 00:47:00.963504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:47:00.963568 systemd[1]: Stopped ignition-fetch-offline.service. May 10 00:47:00.965011 systemd[1]: Stopped target paths.target. May 10 00:47:00.966369 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:47:00.969749 systemd[1]: Stopped systemd-ask-password-console.path. May 10 00:47:00.970752 systemd[1]: Stopped target slices.target. May 10 00:47:00.972131 systemd[1]: Stopped target sockets.target. May 10 00:47:00.973560 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:47:00.973601 systemd[1]: Closed iscsid.socket. 
May 10 00:47:00.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.974961 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:47:00.974999 systemd[1]: Closed iscsiuio.socket. May 10 00:47:00.978792 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:47:00.978865 systemd[1]: Stopped ignition-setup.service. May 10 00:47:00.979864 systemd[1]: Stopping systemd-networkd.service... May 10 00:47:00.981649 systemd[1]: Stopping systemd-resolved.service... May 10 00:47:00.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.984716 systemd-networkd[1107]: eth0: DHCPv6 lease lost May 10 00:47:00.985930 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:47:00.986060 systemd[1]: Stopped systemd-networkd.service. May 10 00:47:00.989000 audit: BPF prog-id=9 op=UNLOAD May 10 00:47:00.987457 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:47:00.987501 systemd[1]: Closed systemd-networkd.socket. May 10 00:47:00.992850 systemd[1]: Stopping network-cleanup.service... May 10 00:47:00.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.994426 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:47:00.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.994500 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 00:47:00.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:00.995782 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:47:00.995843 systemd[1]: Stopped systemd-sysctl.service. May 10 00:47:00.997256 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:47:00.997313 systemd[1]: Stopped systemd-modules-load.service. May 10 00:47:01.003937 systemd[1]: Stopping systemd-udevd.service... May 10 00:47:01.007007 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 00:47:01.009908 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:47:01.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.010051 systemd[1]: Stopped systemd-resolved.service. May 10 00:47:01.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.013000 audit: BPF prog-id=6 op=UNLOAD May 10 00:47:01.012636 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:47:01.012854 systemd[1]: Stopped systemd-udevd.service. 
May 10 00:47:01.016638 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:47:01.016739 systemd[1]: Closed systemd-udevd-control.socket. May 10 00:47:01.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.018340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:47:01.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.018389 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 00:47:01.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.019670 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:47:01.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.019742 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:47:01.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.021203 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:47:01.021264 systemd[1]: Stopped dracut-cmdline.service. May 10 00:47:01.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.022568 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:47:01.022626 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:47:01.025108 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:47:01.025953 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:47:01.026029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 10 00:47:01.030165 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:47:01.030234 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:47:01.031633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:47:01.031730 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:47:01.034513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 10 00:47:01.035368 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:47:01.035479 systemd[1]: Stopped network-cleanup.service. May 10 00:47:01.038102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:47:01.038223 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:47:01.084264 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:47:01.084409 systemd[1]: Stopped sysroot-boot.service. May 10 00:47:01.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.086442 systemd[1]: Reached target initrd-switch-root.target. May 10 00:47:01.087712 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:47:01.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:01.087791 systemd[1]: Stopped initrd-setup-root.service. May 10 00:47:01.090275 systemd[1]: Starting initrd-switch-root.service... May 10 00:47:01.104874 systemd[1]: Switching root. May 10 00:47:01.132592 iscsid[1112]: iscsid shutting down. May 10 00:47:01.133408 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). May 10 00:47:01.133467 systemd-journald[185]: Journal stopped May 10 00:47:06.350380 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:47:06.350465 kernel: SELinux: Class anon_inode not defined in policy. May 10 00:47:06.350488 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:47:06.350508 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:47:06.350531 kernel: SELinux: policy capability open_perms=1 May 10 00:47:06.350551 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:47:06.350571 kernel: SELinux: policy capability always_check_network=0 May 10 00:47:06.350590 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:47:06.350610 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:47:06.350637 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:47:06.350667 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:47:06.350687 systemd[1]: Successfully loaded SELinux policy in 106.251ms. May 10 00:47:06.350729 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.916ms. May 10 00:47:06.350752 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:47:06.350771 systemd[1]: Detected virtualization amazon. May 10 00:47:06.350790 systemd[1]: Detected architecture x86-64. May 10 00:47:06.350809 systemd[1]: Detected first boot. May 10 00:47:06.350829 systemd[1]: Initializing machine ID from VM UUID. May 10 00:47:06.350849 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:47:06.350869 systemd[1]: Populated /etc with preset unit settings. May 10 00:47:06.350887 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. May 10 00:47:06.350917 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:06.350942 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:06.350964 kernel: kauditd_printk_skb: 47 callbacks suppressed May 10 00:47:06.350985 kernel: audit: type=1334 audit(1746838026.068:87): prog-id=12 op=LOAD May 10 00:47:06.351006 kernel: audit: type=1334 audit(1746838026.068:88): prog-id=3 op=UNLOAD May 10 00:47:06.351026 kernel: audit: type=1334 audit(1746838026.069:89): prog-id=13 op=LOAD May 10 00:47:06.351049 kernel: audit: type=1334 audit(1746838026.071:90): prog-id=14 op=LOAD May 10 00:47:06.351068 kernel: audit: type=1334 audit(1746838026.071:91): prog-id=4 op=UNLOAD May 10 00:47:06.351089 kernel: audit: type=1334 audit(1746838026.071:92): prog-id=5 op=UNLOAD May 10 00:47:06.351112 kernel: audit: type=1131 audit(1746838026.073:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.351132 systemd[1]: iscsiuio.service: Deactivated successfully. May 10 00:47:06.351154 systemd[1]: Stopped iscsiuio.service. May 10 00:47:06.351181 kernel: audit: type=1131 audit(1746838026.087:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.351202 systemd[1]: iscsid.service: Deactivated successfully. May 10 00:47:06.351226 kernel: audit: type=1334 audit(1746838026.097:95): prog-id=12 op=UNLOAD May 10 00:47:06.351246 systemd[1]: Stopped iscsid.service. May 10 00:47:06.351267 kernel: audit: type=1131 audit(1746838026.101:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.351288 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:47:06.351309 systemd[1]: Stopped initrd-switch-root.service. May 10 00:47:06.351331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:47:06.351358 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:47:06.351383 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:47:06.351404 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 10 00:47:06.351426 systemd[1]: Created slice system-getty.slice. May 10 00:47:06.351448 systemd[1]: Created slice system-modprobe.slice. May 10 00:47:06.351470 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:47:06.351492 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 00:47:06.351514 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:47:06.351536 systemd[1]: Created slice user.slice. May 10 00:47:06.351558 systemd[1]: Started systemd-ask-password-console.path. May 10 00:47:06.351583 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:47:06.351604 systemd[1]: Set up automount boot.automount. May 10 00:47:06.351626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
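The locksmithd.service warnings above concern legacy cgroup-v1 directives; on a unified-hierarchy (cgroup v2) system CPUShares= maps to CPUWeight= and MemoryLimit= to MemoryMax=. A drop-in override is the usual way to adjust this without editing the shipped unit; a minimal sketch, with illustrative values that are not taken from the shipped unit:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    # cgroup-v2 equivalents of the deprecated CPUShares=/MemoryLimit= settings
    CPUWeight=100
    MemoryMax=512M

The docker.socket notice is resolved the same way, by pointing ListenStream= at /run/docker.sock rather than the legacy /var/run/ path.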
May 10 00:47:06.351648 systemd[1]: Stopped target initrd-switch-root.target. May 10 00:47:06.351699 systemd[1]: Stopped target initrd-fs.target. May 10 00:47:06.351721 systemd[1]: Stopped target initrd-root-fs.target. May 10 00:47:06.351743 systemd[1]: Reached target integritysetup.target. May 10 00:47:06.351766 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:47:06.351787 systemd[1]: Reached target remote-fs.target. May 10 00:47:06.351811 systemd[1]: Reached target slices.target. May 10 00:47:06.351833 systemd[1]: Reached target swap.target. May 10 00:47:06.351854 systemd[1]: Reached target torcx.target. May 10 00:47:06.351876 systemd[1]: Reached target veritysetup.target. May 10 00:47:06.351898 systemd[1]: Listening on systemd-coredump.socket. May 10 00:47:06.351919 systemd[1]: Listening on systemd-initctl.socket. May 10 00:47:06.351941 systemd[1]: Listening on systemd-networkd.socket. May 10 00:47:06.351963 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:47:06.351984 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:47:06.352006 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:47:06.352030 systemd[1]: Mounting dev-hugepages.mount... May 10 00:47:06.352052 systemd[1]: Mounting dev-mqueue.mount... May 10 00:47:06.352074 systemd[1]: Mounting media.mount... May 10 00:47:06.352095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:06.352112 systemd[1]: Mounting sys-kernel-debug.mount... May 10 00:47:06.352140 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:47:06.352162 systemd[1]: Mounting tmp.mount... May 10 00:47:06.352192 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:47:06.352212 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:47:06.352231 systemd[1]: Starting kmod-static-nodes.service... May 10 00:47:06.352250 systemd[1]: Starting modprobe@configfs.service... May 10 00:47:06.352272 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:47:06.352291 systemd[1]: Starting modprobe@drm.service... May 10 00:47:06.352311 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:47:06.352333 systemd[1]: Starting modprobe@fuse.service... May 10 00:47:06.352352 systemd[1]: Starting modprobe@loop.service... May 10 00:47:06.352371 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:47:06.352389 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:47:06.352408 systemd[1]: Stopped systemd-fsck-root.service. May 10 00:47:06.352427 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:47:06.352446 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:47:06.352466 systemd[1]: Stopped systemd-journald.service. May 10 00:47:06.352487 systemd[1]: Starting systemd-journald.service... May 10 00:47:06.352505 systemd[1]: Starting systemd-modules-load.service... May 10 00:47:06.352525 systemd[1]: Starting systemd-network-generator.service... May 10 00:47:06.352543 systemd[1]: Starting systemd-remount-fs.service... May 10 00:47:06.352562 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:47:06.352580 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:47:06.352598 systemd[1]: Stopped verity-setup.service. 
May 10 00:47:06.352618 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:06.352636 systemd[1]: Mounted dev-hugepages.mount. May 10 00:47:06.352667 systemd[1]: Mounted dev-mqueue.mount. May 10 00:47:06.352690 systemd[1]: Mounted media.mount. May 10 00:47:06.352709 systemd[1]: Mounted sys-kernel-debug.mount. May 10 00:47:06.352727 kernel: loop: module loaded May 10 00:47:06.352748 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 00:47:06.352772 systemd-journald[1416]: Journal started May 10 00:47:06.352842 systemd-journald[1416]: Runtime Journal (/run/log/journal/ec2e23b9fb308774bc744cb0bc23a8f5) is 4.8M, max 38.3M, 33.5M free. May 10 00:47:01.849000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:47:02.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:47:02.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:47:02.024000 audit: BPF prog-id=10 op=LOAD May 10 00:47:02.024000 audit: BPF prog-id=10 op=UNLOAD May 10 00:47:02.024000 audit: BPF prog-id=11 op=LOAD May 10 00:47:02.024000 audit: BPF prog-id=11 op=UNLOAD May 10 00:47:02.283000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 10 00:47:02.283000 audit[1334]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:47:02.283000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:47:02.286000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 10 00:47:02.286000 audit[1334]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:47:02.286000 audit: CWD cwd="/" May 10 00:47:02.286000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:02.286000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:02.286000 
audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:47:06.357767 systemd[1]: Started systemd-journald.service. May 10 00:47:06.357811 kernel: fuse: init (API version 7.34) May 10 00:47:06.068000 audit: BPF prog-id=12 op=LOAD May 10 00:47:06.068000 audit: BPF prog-id=3 op=UNLOAD May 10 00:47:06.069000 audit: BPF prog-id=13 op=LOAD May 10 00:47:06.071000 audit: BPF prog-id=14 op=LOAD May 10 00:47:06.071000 audit: BPF prog-id=4 op=UNLOAD May 10 00:47:06.071000 audit: BPF prog-id=5 op=UNLOAD May 10 00:47:06.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.097000 audit: BPF prog-id=12 op=UNLOAD May 10 00:47:06.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.278000 audit: BPF prog-id=15 op=LOAD May 10 00:47:06.278000 audit: BPF prog-id=16 op=LOAD May 10 00:47:06.278000 audit: BPF prog-id=17 op=LOAD May 10 00:47:06.278000 audit: BPF prog-id=13 op=UNLOAD May 10 00:47:06.278000 audit: BPF prog-id=14 op=UNLOAD May 10 00:47:06.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:06.342000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:47:06.342000 audit[1416]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe41a346e0 a2=4000 a3=7ffe41a3477c items=0 ppid=1 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:47:06.342000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 00:47:06.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.067649 systemd[1]: Queued start job for default target multi-user.target. May 10 00:47:02.265254 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:06.067678 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. May 10 00:47:02.266453 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:47:06.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.074443 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:47:02.266476 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:47:06.359504 systemd[1]: Mounted tmp.mount. May 10 00:47:02.266509 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 10 00:47:06.361026 systemd[1]: Finished kmod-static-nodes.service. 
May 10 00:47:02.266519 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="skipped missing lower profile" missing profile=oem May 10 00:47:06.362555 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:47:02.266570 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 10 00:47:06.362769 systemd[1]: Finished modprobe@configfs.service. May 10 00:47:02.266587 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 10 00:47:06.364266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:47:02.266819 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 10 00:47:06.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.364426 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:47:02.266860 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:47:06.368056 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:47:02.266874 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:47:06.369839 systemd[1]: Finished modprobe@drm.service. May 10 00:47:02.275796 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 10 00:47:06.371809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:47:02.275845 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 10 00:47:06.371974 systemd[1]: Finished modprobe@efi_pstore.service. 
May 10 00:47:02.275867 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 10 00:47:06.373496 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:47:02.275883 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 10 00:47:02.275905 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 10 00:47:02.275920 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 10 00:47:05.471921 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:47:05.472168 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:47:05.472311 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:47:05.472505 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:47:05.472557 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 10 00:47:05.472620 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2025-05-10T00:47:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 10 00:47:06.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:06.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.376912 systemd[1]: Finished modprobe@fuse.service. May 10 00:47:06.378875 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:47:06.379033 systemd[1]: Finished modprobe@loop.service. May 10 00:47:06.380501 systemd[1]: Finished systemd-modules-load.service. May 10 00:47:06.381877 systemd[1]: Finished systemd-network-generator.service. May 10 00:47:06.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.385322 systemd[1]: Finished systemd-remount-fs.service. May 10 00:47:06.388056 systemd[1]: Reached target network-pre.target. May 10 00:47:06.392504 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:47:06.398075 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:47:06.399843 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:47:06.404382 systemd[1]: Starting systemd-hwdb-update.service... May 10 00:47:06.407329 systemd[1]: Starting systemd-journal-flush.service... May 10 00:47:06.408811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:47:06.410735 systemd[1]: Starting systemd-random-seed.service... May 10 00:47:06.413252 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:47:06.414837 systemd[1]: Starting systemd-sysctl.service... May 10 00:47:06.419504 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:47:06.420783 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:47:06.434363 systemd-journald[1416]: Time spent on flushing to /var/log/journal/ec2e23b9fb308774bc744cb0bc23a8f5 is 48.879ms for 1201 entries. May 10 00:47:06.434363 systemd-journald[1416]: System Journal (/var/log/journal/ec2e23b9fb308774bc744cb0bc23a8f5) is 8.0M, max 195.6M, 187.6M free. May 10 00:47:06.492086 systemd-journald[1416]: Received client request to flush runtime journal. May 10 00:47:06.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:06.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.443106 systemd[1]: Finished systemd-random-seed.service. May 10 00:47:06.444193 systemd[1]: Reached target first-boot-complete.target. May 10 00:47:06.448403 systemd[1]: Finished flatcar-tmpfiles.service. May 10 00:47:06.450968 systemd[1]: Starting systemd-sysusers.service... May 10 00:47:06.472261 systemd[1]: Finished systemd-sysctl.service. May 10 00:47:06.493323 systemd[1]: Finished systemd-journal-flush.service. May 10 00:47:06.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.517297 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:47:06.519854 systemd[1]: Starting systemd-udev-settle.service... May 10 00:47:06.531127 udevadm[1452]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 10 00:47:06.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.599474 systemd[1]: Finished systemd-sysusers.service. May 10 00:47:06.602088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:47:06.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:06.701190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:47:07.068358 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:47:07.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.067000 audit: BPF prog-id=18 op=LOAD May 10 00:47:07.067000 audit: BPF prog-id=19 op=LOAD May 10 00:47:07.067000 audit: BPF prog-id=7 op=UNLOAD May 10 00:47:07.067000 audit: BPF prog-id=8 op=UNLOAD May 10 00:47:07.070365 systemd[1]: Starting systemd-udevd.service... May 10 00:47:07.087915 systemd-udevd[1455]: Using default interface naming scheme 'v252'. May 10 00:47:07.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.152000 audit: BPF prog-id=20 op=LOAD May 10 00:47:07.152359 systemd[1]: Started systemd-udevd.service. 
May 10 00:47:07.154524 systemd[1]: Starting systemd-networkd.service... May 10 00:47:07.182000 audit: BPF prog-id=21 op=LOAD May 10 00:47:07.182000 audit: BPF prog-id=22 op=LOAD May 10 00:47:07.182000 audit: BPF prog-id=23 op=LOAD May 10 00:47:07.185005 systemd[1]: Starting systemd-userdbd.service... May 10 00:47:07.187730 (udev-worker)[1459]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:07.189128 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 00:47:07.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.222483 systemd[1]: Started systemd-userdbd.service. May 10 00:47:07.252699 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 10 00:47:07.262677 kernel: ACPI: button: Power Button [PWRF] May 10 00:47:07.266513 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 10 00:47:07.266591 kernel: ACPI: button: Sleep Button [SLPF] May 10 00:47:07.274000 audit[1459]: AVC avc: denied { confidentiality } for pid=1459 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:47:07.274000 audit[1459]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561bd62045b0 a1=338ac a2=7f48cdc1ebc5 a3=5 items=110 ppid=1455 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:47:07.274000 audit: CWD cwd="/" May 10 00:47:07.274000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=1 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=2 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=3 name=(null) inode=14160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=4 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=5 name=(null) inode=14161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=6 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=7 name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=8 
name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=9 name=(null) inode=14163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=10 name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=11 name=(null) inode=14164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=12 name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=13 name=(null) inode=14165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=14 name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=15 name=(null) inode=14166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=16 name=(null) inode=14162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=17 name=(null) inode=14167 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=18 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=19 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=20 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=21 name=(null) inode=14169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=22 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=23 name=(null) inode=14170 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=24 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=25 name=(null) inode=14171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=26 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=27 name=(null) inode=14172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=28 name=(null) inode=14168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=29 name=(null) inode=14173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=30 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=31 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=32 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=33 name=(null) inode=14175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=34 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=35 name=(null) inode=14176 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=36 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=37 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=38 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=39 name=(null) inode=14178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=40 name=(null) inode=14174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=41 name=(null) inode=14179 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=42 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=43 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=44 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=45 name=(null) inode=14181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=46 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=47 name=(null) inode=14182 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=48 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=49 name=(null) inode=14183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=50 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=51 name=(null) inode=14184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=52 name=(null) inode=14180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=53 name=(null) inode=14185 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=55 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=56 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=57 
name=(null) inode=14187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=58 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=59 name=(null) inode=14188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=60 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=61 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=62 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=63 name=(null) inode=14190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=64 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=65 name=(null) inode=14191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=66 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=67 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=68 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=69 name=(null) inode=14193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=70 name=(null) inode=14189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=71 name=(null) inode=14194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=72 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=73 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=74 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=75 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=76 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=77 name=(null) inode=14197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=78 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=79 name=(null) inode=14198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=80 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=81 name=(null) inode=14199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=82 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=83 name=(null) inode=14200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=84 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=85 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=86 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=87 name=(null) inode=14202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=88 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=89 name=(null) inode=14203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=90 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=91 name=(null) inode=14204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=92 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=93 name=(null) inode=14205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=94 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=95 name=(null) inode=14206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=96 name=(null) inode=14186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=97 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=98 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=99 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=100 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=101 name=(null) inode=14209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=102 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=103 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=104 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=105 name=(null) inode=14211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH 
item=106 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=107 name=(null) inode=14212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PATH item=109 name=(null) inode=14213 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:47:07.274000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:47:07.313886 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 10 00:47:07.329681 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 10 00:47:07.330498 systemd-networkd[1463]: lo: Link UP May 10 00:47:07.330510 systemd-networkd[1463]: lo: Gained carrier May 10 00:47:07.331368 systemd-networkd[1463]: Enumeration completed May 10 00:47:07.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.331553 systemd[1]: Started systemd-networkd.service. May 10 00:47:07.333455 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:47:07.334609 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:47:07.336686 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:47:07.339105 systemd-networkd[1463]: eth0: Link UP May 10 00:47:07.339358 systemd-networkd[1463]: eth0: Gained carrier May 10 00:47:07.339704 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:47:07.346861 systemd-networkd[1463]: eth0: DHCPv4 address 172.31.20.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 10 00:47:07.416222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:47:07.421070 systemd[1]: Finished systemd-udev-settle.service. May 10 00:47:07.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.422852 systemd[1]: Starting lvm2-activation-early.service... May 10 00:47:07.492130 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:47:07.522909 systemd[1]: Finished lvm2-activation-early.service. May 10 00:47:07.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.523617 systemd[1]: Reached target cryptsetup.target. May 10 00:47:07.525395 systemd[1]: Starting lvm2-activation.service... May 10 00:47:07.530033 lvm[1570]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:47:07.557088 systemd[1]: Finished lvm2-activation.service. 
May 10 00:47:07.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.557849 systemd[1]: Reached target local-fs-pre.target. May 10 00:47:07.558404 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:47:07.558436 systemd[1]: Reached target local-fs.target. May 10 00:47:07.558960 systemd[1]: Reached target machines.target. May 10 00:47:07.560765 systemd[1]: Starting ldconfig.service... May 10 00:47:07.562245 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:47:07.562313 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:07.563482 systemd[1]: Starting systemd-boot-update.service... May 10 00:47:07.565302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:47:07.567220 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:47:07.569996 systemd[1]: Starting systemd-sysext.service... May 10 00:47:07.573884 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1572 (bootctl) May 10 00:47:07.575007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:47:07.585903 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:47:07.591052 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:47:07.591231 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:47:07.603757 kernel: loop0: detected capacity change from 0 to 218376 May 10 00:47:07.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.606760 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:47:07.720827 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:47:07.738717 kernel: loop1: detected capacity change from 0 to 218376 May 10 00:47:07.758418 (sd-sysext)[1585]: Using extensions 'kubernetes'. May 10 00:47:07.758858 (sd-sysext)[1585]: Merged extensions into '/usr'. May 10 00:47:07.759238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:47:07.761499 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:47:07.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.779940 systemd-fsck[1582]: fsck.fat 4.2 (2021-01-31) May 10 00:47:07.779940 systemd-fsck[1582]: /dev/nvme0n1p1: 790 files, 120688/258078 clusters May 10 00:47:07.780230 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:47:07.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.782464 systemd[1]: Mounting boot.mount... 
May 10 00:47:07.783082 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:07.784577 systemd[1]: Mounting usr-share-oem.mount... May 10 00:47:07.785406 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:47:07.787273 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:47:07.791680 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:47:07.793333 systemd[1]: Starting modprobe@loop.service... May 10 00:47:07.793993 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:47:07.794140 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:07.794262 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:07.797255 systemd[1]: Mounted usr-share-oem.mount. May 10 00:47:07.798194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:47:07.798325 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:47:07.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.799247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:47:07.799364 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:47:07.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.800327 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:47:07.800430 systemd[1]: Finished modprobe@loop.service. May 10 00:47:07.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.801406 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:47:07.801506 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:47:07.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:07.802837 systemd[1]: Finished systemd-sysext.service. 
May 10 00:47:07.804988 systemd[1]: Starting ensure-sysext.service... May 10 00:47:07.806604 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:47:07.813860 systemd[1]: Mounted boot.mount. May 10 00:47:07.819762 systemd[1]: Reloading. May 10 00:47:07.833802 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:47:07.836296 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:47:07.839944 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:47:07.893622 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2025-05-10T00:47:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:07.893992 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2025-05-10T00:47:07Z" level=info msg="torcx already run" May 10 00:47:07.992617 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:47:07.992824 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:08.012565 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:08.082000 audit: BPF prog-id=24 op=LOAD May 10 00:47:08.082000 audit: BPF prog-id=15 op=UNLOAD May 10 00:47:08.082000 audit: BPF prog-id=25 op=LOAD May 10 00:47:08.082000 audit: BPF prog-id=26 op=LOAD May 10 00:47:08.082000 audit: BPF prog-id=16 op=UNLOAD May 10 00:47:08.082000 audit: BPF prog-id=17 op=UNLOAD May 10 00:47:08.083000 audit: BPF prog-id=27 op=LOAD May 10 00:47:08.083000 audit: BPF prog-id=21 op=UNLOAD May 10 00:47:08.083000 audit: BPF prog-id=28 op=LOAD May 10 00:47:08.083000 audit: BPF prog-id=29 op=LOAD May 10 00:47:08.083000 audit: BPF prog-id=22 op=UNLOAD May 10 00:47:08.083000 audit: BPF prog-id=23 op=UNLOAD May 10 00:47:08.085000 audit: BPF prog-id=30 op=LOAD May 10 00:47:08.085000 audit: BPF prog-id=20 op=UNLOAD May 10 00:47:08.086000 audit: BPF prog-id=31 op=LOAD May 10 00:47:08.086000 audit: BPF prog-id=32 op=LOAD May 10 00:47:08.086000 audit: BPF prog-id=18 op=UNLOAD May 10 00:47:08.086000 audit: BPF prog-id=19 op=UNLOAD May 10 00:47:08.093282 systemd[1]: Finished systemd-boot-update.service. May 10 00:47:08.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.094553 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:47:08.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.105339 systemd[1]: Starting audit-rules.service... May 10 00:47:08.108146 systemd[1]: Starting clean-ca-certificates.service... 
May 10 00:47:08.113000 audit: BPF prog-id=33 op=LOAD May 10 00:47:08.116000 audit: BPF prog-id=34 op=LOAD May 10 00:47:08.112803 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:47:08.116037 systemd[1]: Starting systemd-resolved.service... May 10 00:47:08.119621 systemd[1]: Starting systemd-timesyncd.service... May 10 00:47:08.122193 systemd[1]: Starting systemd-update-utmp.service... May 10 00:47:08.134884 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:47:08.136780 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:47:08.140878 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:47:08.144429 systemd[1]: Starting modprobe@loop.service... May 10 00:47:08.145391 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:47:08.145584 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:08.149000 audit[1686]: SYSTEM_BOOT pid=1686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:47:08.150590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:47:08.150809 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:47:08.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.156470 systemd[1]: Finished clean-ca-certificates.service. May 10 00:47:08.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.157909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:47:08.158078 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:47:08.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.159415 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:47:08.159571 systemd[1]: Finished modprobe@loop.service. May 10 00:47:08.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:08.163591 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:47:08.163867 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:47:08.164068 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:47:08.168742 systemd[1]: Finished systemd-update-utmp.service. May 10 00:47:08.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.175917 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:47:08.177925 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:47:08.181738 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:47:08.184852 systemd[1]: Starting modprobe@loop.service... May 10 00:47:08.185897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:47:08.186112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:08.186311 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:47:08.187552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:47:08.189667 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:47:08.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.191179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:47:08.191352 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:47:08.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.192797 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:47:08.197621 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:47:08.197801 systemd[1]: Finished modprobe@loop.service. May 10 00:47:08.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:47:08.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.200221 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:47:08.202323 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:47:08.206390 systemd[1]: Starting modprobe@drm.service... May 10 00:47:08.208725 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:47:08.211572 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:47:08.211772 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:08.211959 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:47:08.213438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:47:08.213623 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:47:08.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.214956 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:47:08.215110 systemd[1]: Finished modprobe@drm.service. May 10 00:47:08.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.216420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:47:08.216577 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:47:08.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.219426 systemd[1]: Finished ensure-sysext.service. May 10 00:47:08.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.222554 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 00:47:08.222609 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:47:08.269506 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:47:08.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:47:08.293000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:47:08.293000 audit[1709]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1c84c560 a2=420 a3=0 items=0 ppid=1680 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:47:08.293000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:47:08.295771 augenrules[1709]: No rules May 10 00:47:08.296990 systemd[1]: Finished audit-rules.service. May 10 00:47:08.301752 systemd-resolved[1684]: Positive Trust Anchors: May 10 00:47:08.301768 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:47:08.301809 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:47:08.317054 systemd[1]: Started systemd-timesyncd.service. May 10 00:47:08.317637 systemd[1]: Reached target time-set.target. May 10 00:47:08.340724 systemd-resolved[1684]: Defaulting to hostname 'linux'. May 10 00:47:08.342590 systemd[1]: Started systemd-resolved.service. May 10 00:47:08.343069 systemd[1]: Reached target network.target. May 10 00:47:08.343448 systemd[1]: Reached target nss-lookup.target. May 10 00:47:09.231168 systemd-resolved[1684]: Clock change detected. Flushing caches. May 10 00:47:09.231236 systemd-timesyncd[1685]: Contacted time server 104.152.220.5:123 (0.flatcar.pool.ntp.org). May 10 00:47:09.231359 systemd-timesyncd[1685]: Initial clock synchronization to Sat 2025-05-10 00:47:09.231012 UTC. May 10 00:47:09.316787 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:09.316816 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:47:09.492685 ldconfig[1571]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:47:09.502637 systemd[1]: Finished ldconfig.service. May 10 00:47:09.504454 systemd[1]: Starting systemd-update-done.service... May 10 00:47:09.512734 systemd[1]: Finished systemd-update-done.service. May 10 00:47:09.513266 systemd[1]: Reached target sysinit.target. May 10 00:47:09.513711 systemd[1]: Started motdgen.path. May 10 00:47:09.514082 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:47:09.514609 systemd[1]: Started logrotate.timer. 
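
The systemd-timesyncd entries above record an SNTP exchange with 0.flatcar.pool.ntp.org (104.152.220.5:123) and the initial clock synchronization that makes systemd-resolved flush its caches. A minimal sketch of that kind of exchange follows (illustrative only, not how timesyncd is implemented; the pool hostname is taken from the log): the client sends a 48-byte mode-3 request and reads the server's transmit timestamp.

#!/usr/bin/env python3
"""Minimal SNTP query, sketching the exchange timesyncd logs above."""
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="0.flatcar.pool.ntp.org", port=123, timeout=5.0):
    # 48-byte request: LI=0, VN=4, Mode=3 (client) packed into the first byte.
    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    # Transmit Timestamp is the last 8 bytes: 32-bit seconds + 32-bit fraction.
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    remote = sntp_time()
    print(f"server time: {remote:.3f}, local offset: {remote - time.time():+.3f}s")
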
May 10 00:47:09.515083 systemd[1]: Started mdadm.timer. May 10 00:47:09.515445 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:47:09.515795 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:47:09.515830 systemd[1]: Reached target paths.target. May 10 00:47:09.516174 systemd[1]: Reached target timers.target. May 10 00:47:09.516805 systemd[1]: Listening on dbus.socket. May 10 00:47:09.518147 systemd[1]: Starting docker.socket... May 10 00:47:09.521845 systemd[1]: Listening on sshd.socket. May 10 00:47:09.522378 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:09.522884 systemd[1]: Listening on docker.socket. May 10 00:47:09.523339 systemd[1]: Reached target sockets.target. May 10 00:47:09.523696 systemd[1]: Reached target basic.target. May 10 00:47:09.524069 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:47:09.524097 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:47:09.525077 systemd[1]: Starting containerd.service... May 10 00:47:09.526366 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 00:47:09.527897 systemd[1]: Starting dbus.service... May 10 00:47:09.529478 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:47:09.531687 systemd[1]: Starting extend-filesystems.service... May 10 00:47:09.533639 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:47:09.535339 systemd[1]: Starting motdgen.service... May 10 00:47:09.541351 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:47:09.569213 jq[1721]: false May 10 00:47:09.545295 systemd[1]: Starting sshd-keygen.service... May 10 00:47:09.550399 systemd[1]: Starting systemd-logind.service... May 10 00:47:09.552278 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:47:09.552362 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:47:09.553569 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:47:09.575553 jq[1731]: true May 10 00:47:09.554349 systemd[1]: Starting update-engine.service... May 10 00:47:09.558465 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:47:09.563326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:47:09.563570 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 00:47:09.588101 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:47:09.588375 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:47:09.606182 systemd[1]: Created slice system-sshd.slice. May 10 00:47:09.630263 jq[1733]: true May 10 00:47:09.644236 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:47:09.644455 systemd[1]: Finished motdgen.service. 
May 10 00:47:09.650225 extend-filesystems[1722]: Found loop1 May 10 00:47:09.654085 extend-filesystems[1722]: Found nvme0n1 May 10 00:47:09.654628 dbus-daemon[1720]: [system] SELinux support is enabled May 10 00:47:09.654837 systemd[1]: Started dbus.service. May 10 00:47:09.659722 dbus-daemon[1720]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1463 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 00:47:09.658759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:47:09.658797 systemd[1]: Reached target system-config.target. May 10 00:47:09.659596 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:47:09.659623 systemd[1]: Reached target user-config.target. May 10 00:47:09.662786 extend-filesystems[1722]: Found nvme0n1p1 May 10 00:47:09.663329 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 00:47:09.667901 systemd[1]: Starting systemd-hostnamed.service... May 10 00:47:09.669329 extend-filesystems[1722]: Found nvme0n1p2 May 10 00:47:09.670343 systemd-networkd[1463]: eth0: Gained IPv6LL May 10 00:47:09.670694 extend-filesystems[1722]: Found nvme0n1p3 May 10 00:47:09.679314 extend-filesystems[1722]: Found usr May 10 00:47:09.679314 extend-filesystems[1722]: Found nvme0n1p4 May 10 00:47:09.679314 extend-filesystems[1722]: Found nvme0n1p6 May 10 00:47:09.679314 extend-filesystems[1722]: Found nvme0n1p7 May 10 00:47:09.679314 extend-filesystems[1722]: Found nvme0n1p9 May 10 00:47:09.679314 extend-filesystems[1722]: Checking size of /dev/nvme0n1p9 May 10 00:47:09.673365 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:47:09.674331 systemd[1]: Reached target network-online.target. May 10 00:47:09.676763 systemd[1]: Started amazon-ssm-agent.service. May 10 00:47:09.680704 systemd[1]: Starting kubelet.service... May 10 00:47:09.683647 systemd[1]: Started nvidia.service. May 10 00:47:09.825646 extend-filesystems[1722]: Resized partition /dev/nvme0n1p9 May 10 00:47:09.839585 extend-filesystems[1780]: resize2fs 1.46.5 (30-Dec-2021) May 10 00:47:09.851450 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 10 00:47:09.935171 update_engine[1730]: I0510 00:47:09.934649 1730 main.cc:92] Flatcar Update Engine starting May 10 00:47:09.969872 systemd-logind[1728]: Watching system buttons on /dev/input/event1 (Power Button) May 10 00:47:09.970429 systemd-logind[1728]: Watching system buttons on /dev/input/event2 (Sleep Button) May 10 00:47:09.970460 systemd-logind[1728]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:47:09.971842 systemd-logind[1728]: New seat seat0. May 10 00:47:09.980202 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 10 00:47:09.980289 env[1734]: time="2025-05-10T00:47:09.978311157Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:47:09.975398 systemd[1]: Started systemd-logind.service. May 10 00:47:09.994158 update_engine[1730]: I0510 00:47:09.993706 1730 update_check_scheduler.cc:74] Next update check in 7m0s May 10 00:47:09.987819 systemd[1]: Started update-engine.service. 
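
The EXT4-fs and extend-filesystems entries here record /dev/nvme0n1p9 growing from 553472 to 1489915 blocks (4 KiB each, per the resize2fs output that follows). A quick sanity check of what those figures mean in bytes, as a sketch only:

# Figures taken from the resize2fs / EXT4-fs entries in the log above.
OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 553_472, 1_489_915, 4096

def gib(blocks: int) -> str:
    return f"{blocks * BLOCK_SIZE / 2**30:.2f} GiB"

print(gib(OLD_BLOCKS))  # ~2.11 GiB before the online resize
print(gib(NEW_BLOCKS))  # ~5.68 GiB after growing into partition 9
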
May 10 00:47:09.991565 systemd[1]: Started locksmithd.service. May 10 00:47:09.996484 amazon-ssm-agent[1758]: 2025/05/10 00:47:09 Failed to load instance info from vault. RegistrationKey does not exist. May 10 00:47:09.998544 amazon-ssm-agent[1758]: Initializing new seelog logger May 10 00:47:09.998544 amazon-ssm-agent[1758]: New Seelog Logger Creation Complete May 10 00:47:09.998721 amazon-ssm-agent[1758]: 2025/05/10 00:47:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:47:09.998721 amazon-ssm-agent[1758]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:47:09.999984 amazon-ssm-agent[1758]: 2025/05/10 00:47:09 processing appconfig overrides May 10 00:47:10.003347 extend-filesystems[1780]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 10 00:47:10.003347 extend-filesystems[1780]: old_desc_blocks = 1, new_desc_blocks = 1 May 10 00:47:10.003347 extend-filesystems[1780]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 10 00:47:10.010394 extend-filesystems[1722]: Resized filesystem in /dev/nvme0n1p9 May 10 00:47:10.005509 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:47:10.011629 bash[1781]: Updated "/home/core/.ssh/authorized_keys" May 10 00:47:10.005739 systemd[1]: Finished extend-filesystems.service. May 10 00:47:10.007709 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 00:47:10.159505 env[1734]: time="2025-05-10T00:47:10.159453009Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:47:10.159650 env[1734]: time="2025-05-10T00:47:10.159630432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:47:10.164926 env[1734]: time="2025-05-10T00:47:10.164879350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:47:10.164926 env[1734]: time="2025-05-10T00:47:10.164923527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:47:10.165280 env[1734]: time="2025-05-10T00:47:10.165246987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:47:10.165349 env[1734]: time="2025-05-10T00:47:10.165280268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:47:10.165349 env[1734]: time="2025-05-10T00:47:10.165298788Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 00:47:10.165349 env[1734]: time="2025-05-10T00:47:10.165312622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:47:10.165494 env[1734]: time="2025-05-10T00:47:10.165419782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:47:10.165732 env[1734]: time="2025-05-10T00:47:10.165703754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 10 00:47:10.165959 env[1734]: time="2025-05-10T00:47:10.165929561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:47:10.166015 env[1734]: time="2025-05-10T00:47:10.165961465Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:47:10.166064 env[1734]: time="2025-05-10T00:47:10.166027625Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 00:47:10.166064 env[1734]: time="2025-05-10T00:47:10.166046004Z" level=info msg="metadata content store policy set" policy=shared May 10 00:47:10.184732 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:47:10.184919 systemd[1]: Started systemd-hostnamed.service. May 10 00:47:10.188724 dbus-daemon[1720]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1756 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:47:10.193168 systemd[1]: Starting polkit.service... May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.198768072Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.198822223Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.198843163Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.198994336Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199021748Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199044355Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199065905Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199086217Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199106783Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199127123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199156605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199177153Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199328152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:47:10.202171 env[1734]: time="2025-05-10T00:47:10.199435333Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.199909635Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.199951523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.199975686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200050768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200073754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200173680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200193061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200212400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200231053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200246920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200263110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200283891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200440461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200458861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202765 env[1734]: time="2025-05-10T00:47:10.200476013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:47:10.202274 systemd[1]: Started containerd.service. May 10 00:47:10.203445 env[1734]: time="2025-05-10T00:47:10.200491518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:47:10.203445 env[1734]: time="2025-05-10T00:47:10.200512855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:47:10.203445 env[1734]: time="2025-05-10T00:47:10.200529474Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:47:10.203445 env[1734]: time="2025-05-10T00:47:10.200552622Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:47:10.203445 env[1734]: time="2025-05-10T00:47:10.200594043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.200859389Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.200960059Z" level=info msg="Connect containerd service" May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.200999025Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.201789406Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.202080238Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 10 00:47:10.203646 env[1734]: time="2025-05-10T00:47:10.202128974Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:47:10.208337 env[1734]: time="2025-05-10T00:47:10.206159685Z" level=info msg="containerd successfully booted in 0.299836s" May 10 00:47:10.215953 systemd[1]: nvidia.service: Deactivated successfully. May 10 00:47:10.225292 polkitd[1830]: Started polkitd version 121 May 10 00:47:10.231475 env[1734]: time="2025-05-10T00:47:10.231419734Z" level=info msg="Start subscribing containerd event" May 10 00:47:10.231601 env[1734]: time="2025-05-10T00:47:10.231501931Z" level=info msg="Start recovering state" May 10 00:47:10.231601 env[1734]: time="2025-05-10T00:47:10.231589795Z" level=info msg="Start event monitor" May 10 00:47:10.231684 env[1734]: time="2025-05-10T00:47:10.231615933Z" level=info msg="Start snapshots syncer" May 10 00:47:10.231684 env[1734]: time="2025-05-10T00:47:10.231639784Z" level=info msg="Start cni network conf syncer for default" May 10 00:47:10.231684 env[1734]: time="2025-05-10T00:47:10.231652621Z" level=info msg="Start streaming server" May 10 00:47:10.246279 polkitd[1830]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:47:10.246501 polkitd[1830]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:47:10.249121 polkitd[1830]: Finished loading, compiling and executing 2 rules May 10 00:47:10.249841 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:47:10.250108 systemd[1]: Started polkit.service. May 10 00:47:10.250516 polkitd[1830]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:47:10.280786 systemd-hostnamed[1756]: Hostname set to (transient) May 10 00:47:10.280787 systemd-resolved[1684]: System hostname changed to 'ip-172-31-20-182'. May 10 00:47:10.322855 coreos-metadata[1719]: May 10 00:47:10.322 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 10 00:47:10.325596 coreos-metadata[1719]: May 10 00:47:10.325 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 May 10 00:47:10.327374 coreos-metadata[1719]: May 10 00:47:10.327 INFO Fetch successful May 10 00:47:10.327528 coreos-metadata[1719]: May 10 00:47:10.327 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 00:47:10.328583 coreos-metadata[1719]: May 10 00:47:10.328 INFO Fetch successful May 10 00:47:10.332496 unknown[1719]: wrote ssh authorized keys file for user: core May 10 00:47:10.357668 update-ssh-keys[1867]: Updated "/home/core/.ssh/authorized_keys" May 10 00:47:10.358176 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
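
The coreos-metadata entries above show the IMDSv2 flow used to populate /home/core/.ssh/authorized_keys: PUT a session token at /latest/api/token, then GET the public-key paths with that token. A minimal Python sketch of the same flow (illustrative only, not the coreos-metadata implementation; the token header names are the standard EC2 IMDSv2 ones):

#!/usr/bin/env python3
"""Sketch of the IMDSv2 token + metadata fetch that coreos-metadata logs above."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # PUT a session token, as in "Putting http://169.254.169.254/latest/api/token".
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET a metadata path with the token, using the API date seen in the log.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Same paths the sshkeys unit fetches in the log above.
    print(imds_get("meta-data/public-keys", token))
    print(imds_get("meta-data/public-keys/0/openssh-key", token))
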
May 10 00:47:10.511925 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Create new startup processor May 10 00:47:10.512336 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [LongRunningPluginsManager] registered plugins: {} May 10 00:47:10.512595 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing bookkeeping folders May 10 00:47:10.512705 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO removing the completed state files May 10 00:47:10.512800 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing bookkeeping folders for long running plugins May 10 00:47:10.512893 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing replies folder for MDS reply requests that couldn't reach the service May 10 00:47:10.512983 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing healthcheck folders for long running plugins May 10 00:47:10.513073 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing locations for inventory plugin May 10 00:47:10.513241 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing default location for custom inventory May 10 00:47:10.513338 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing default location for file inventory May 10 00:47:10.514899 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Initializing default location for role inventory May 10 00:47:10.515011 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Init the cloudwatchlogs publisher May 10 00:47:10.515101 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:configurePackage May 10 00:47:10.515192 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:runDocument May 10 00:47:10.515272 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:updateSsmAgent May 10 00:47:10.515350 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:configureDocker May 10 00:47:10.515430 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:refreshAssociation May 10 00:47:10.515506 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:downloadContent May 10 00:47:10.515599 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:softwareInventory May 10 00:47:10.515684 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:runPowerShellScript May 10 00:47:10.515761 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform independent plugin aws:runDockerAction May 10 00:47:10.515836 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Successfully loaded platform dependent plugin aws:runShellScript May 10 00:47:10.515913 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 May 10 00:47:10.515990 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO OS: linux, Arch: amd64 May 10 00:47:10.517049 amazon-ssm-agent[1758]: datastore file /var/lib/amazon/ssm/i-0730ec02a03a334c6/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute May 10 00:47:10.519982 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] Starting document processing engine... May 10 00:47:10.619832 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [EngineProcessor] Starting May 10 00:47:10.714445 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing May 10 00:47:10.809008 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] Starting message polling May 10 00:47:10.851251 locksmithd[1797]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:47:10.903666 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] Starting send replies to MDS May 10 00:47:10.998931 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [instanceID=i-0730ec02a03a334c6] Starting association polling May 10 00:47:11.094072 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting May 10 00:47:11.189291 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [Association] Launching response handler May 10 00:47:11.285502 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing May 10 00:47:11.381244 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service May 10 00:47:11.477663 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized May 10 00:47:11.573741 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [OfflineService] Starting document processing engine... May 10 00:47:11.670234 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [OfflineService] [EngineProcessor] Starting May 10 00:47:11.767039 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [OfflineService] [EngineProcessor] Initial processing May 10 00:47:11.863829 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [OfflineService] Starting message polling May 10 00:47:11.960747 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [OfflineService] Starting send replies to MDS May 10 00:47:12.058176 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [LongRunningPluginsManager] starting long running plugin manager May 10 00:47:12.146100 sshd_keygen[1750]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:47:12.155474 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute May 10 00:47:12.173539 systemd[1]: Finished sshd-keygen.service. May 10 00:47:12.176043 systemd[1]: Starting issuegen.service... May 10 00:47:12.178830 systemd[1]: Started sshd@0-172.31.20.182:22-139.178.89.65:46588.service. May 10 00:47:12.186809 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:47:12.187023 systemd[1]: Finished issuegen.service. May 10 00:47:12.189879 systemd[1]: Starting systemd-user-sessions.service... May 10 00:47:12.200686 systemd[1]: Finished systemd-user-sessions.service. May 10 00:47:12.202857 systemd[1]: Started getty@tty1.service. May 10 00:47:12.205558 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:47:12.206951 systemd[1]: Reached target getty.target. May 10 00:47:12.252944 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [HealthCheck] HealthCheck reporting agent health. May 10 00:47:12.321315 systemd[1]: Started kubelet.service. 
May 10 00:47:12.322421 systemd[1]: Reached target multi-user.target. May 10 00:47:12.324413 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:47:12.333479 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:47:12.333643 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:47:12.334346 systemd[1]: Startup finished in 578ms (kernel) + 5.963s (initrd) + 9.888s (userspace) = 16.430s. May 10 00:47:12.350555 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] Starting session document processing engine... May 10 00:47:12.396046 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 46588 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:47:12.399179 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:12.411830 systemd[1]: Created slice user-500.slice. May 10 00:47:12.412959 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:47:12.416553 systemd-logind[1728]: New session 1 of user core. May 10 00:47:12.424296 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:47:12.426460 systemd[1]: Starting user@500.service... May 10 00:47:12.430263 (systemd)[1936]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:12.448378 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] [EngineProcessor] Starting May 10 00:47:12.542893 systemd[1936]: Queued start job for default target default.target. May 10 00:47:12.544465 systemd[1936]: Reached target paths.target. May 10 00:47:12.544497 systemd[1936]: Reached target sockets.target. May 10 00:47:12.544516 systemd[1936]: Reached target timers.target. May 10 00:47:12.544532 systemd[1936]: Reached target basic.target. May 10 00:47:12.544652 systemd[1]: Started user@500.service. May 10 00:47:12.545872 systemd[1]: Started session-1.scope. May 10 00:47:12.546507 systemd[1936]: Reached target default.target. May 10 00:47:12.547408 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. May 10 00:47:12.546881 systemd[1936]: Startup finished in 109ms. May 10 00:47:12.645512 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0730ec02a03a334c6, requestId: dca1df7d-f6c1-45f8-ad0c-11652da1b054 May 10 00:47:12.688164 systemd[1]: Started sshd@1-172.31.20.182:22-139.178.89.65:46592.service. May 10 00:47:12.743909 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] listening reply. May 10 00:47:12.842588 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck May 10 00:47:12.853124 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 46592 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:47:12.854411 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:12.858956 systemd-logind[1728]: New session 2 of user core. May 10 00:47:12.859476 systemd[1]: Started session-2.scope. 
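
The "Startup finished" entry above reports 578ms (kernel) + 5.963s (initrd) + 9.888s (userspace) = 16.430s; the printed phases sum to 16.429s, which is consistent with each phase being rounded for display while the total is derived from the raw microsecond counters (an assumption about the formatting, not something the log states):

kernel, initrd, userspace = 0.578, 5.963, 9.888  # seconds, as printed above
print(f"{kernel + initrd + userspace:.3f}s")     # 16.429s vs. the printed 16.430s total
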
May 10 00:47:12.941390 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [StartupProcessor] Executing startup processor tasks May 10 00:47:12.988412 sshd[1949]: pam_unix(sshd:session): session closed for user core May 10 00:47:12.991317 systemd[1]: sshd@1-172.31.20.182:22-139.178.89.65:46592.service: Deactivated successfully. May 10 00:47:12.992010 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:47:12.992595 systemd-logind[1728]: Session 2 logged out. Waiting for processes to exit. May 10 00:47:12.993715 systemd-logind[1728]: Removed session 2. May 10 00:47:13.013426 systemd[1]: Started sshd@2-172.31.20.182:22-139.178.89.65:46604.service. May 10 00:47:13.040417 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running May 10 00:47:13.139636 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk May 10 00:47:13.176244 sshd[1955]: Accepted publickey for core from 139.178.89.65 port 46604 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:47:13.177578 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:13.182014 systemd-logind[1728]: New session 3 of user core. May 10 00:47:13.182465 systemd[1]: Started session-3.scope. May 10 00:47:13.238978 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 May 10 00:47:13.305429 sshd[1955]: pam_unix(sshd:session): session closed for user core May 10 00:47:13.307865 systemd[1]: sshd@2-172.31.20.182:22-139.178.89.65:46604.service: Deactivated successfully. May 10 00:47:13.308939 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:47:13.309309 systemd-logind[1728]: Session 3 logged out. Waiting for processes to exit. May 10 00:47:13.309975 systemd-logind[1728]: Removed session 3. May 10 00:47:13.330482 systemd[1]: Started sshd@3-172.31.20.182:22-139.178.89.65:46614.service. May 10 00:47:13.338987 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0730ec02a03a334c6?role=subscribe&stream=input May 10 00:47:13.438855 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0730ec02a03a334c6?role=subscribe&stream=input May 10 00:47:13.474886 kubelet[1933]: E0510 00:47:13.474836 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:47:13.476380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:47:13.476511 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:47:13.476733 systemd[1]: kubelet.service: Consumed 1.158s CPU time. May 10 00:47:13.495262 sshd[1961]: Accepted publickey for core from 139.178.89.65 port 46614 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:47:13.496579 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:13.501432 systemd[1]: Started session-4.scope. 
May 10 00:47:13.501767 systemd-logind[1728]: New session 4 of user core. May 10 00:47:13.538489 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] Starting receiving message from control channel May 10 00:47:13.631438 sshd[1961]: pam_unix(sshd:session): session closed for user core May 10 00:47:13.634238 systemd[1]: sshd@3-172.31.20.182:22-139.178.89.65:46614.service: Deactivated successfully. May 10 00:47:13.635088 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:47:13.635735 systemd-logind[1728]: Session 4 logged out. Waiting for processes to exit. May 10 00:47:13.636651 systemd-logind[1728]: Removed session 4. May 10 00:47:13.638750 amazon-ssm-agent[1758]: 2025-05-10 00:47:10 INFO [MessageGatewayService] [EngineProcessor] Initial processing May 10 00:47:13.656849 systemd[1]: Started sshd@4-172.31.20.182:22-139.178.89.65:46620.service. May 10 00:47:13.822199 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 46620 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:47:13.823278 sshd[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:13.828645 systemd[1]: Started session-5.scope. May 10 00:47:13.829206 systemd-logind[1728]: New session 5 of user core. May 10 00:47:13.976951 sudo[1970]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:47:13.977216 sudo[1970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:47:13.991672 systemd[1]: Starting coreos-metadata.service... May 10 00:47:14.075287 coreos-metadata[1974]: May 10 00:47:14.075 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 10 00:47:14.076249 coreos-metadata[1974]: May 10 00:47:14.076 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 May 10 00:47:14.076697 coreos-metadata[1974]: May 10 00:47:14.076 INFO Fetch successful May 10 00:47:14.076697 coreos-metadata[1974]: May 10 00:47:14.076 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 May 10 00:47:14.077281 coreos-metadata[1974]: May 10 00:47:14.077 INFO Fetch successful May 10 00:47:14.077281 coreos-metadata[1974]: May 10 00:47:14.077 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 May 10 00:47:14.077892 coreos-metadata[1974]: May 10 00:47:14.077 INFO Fetch successful May 10 00:47:14.077950 coreos-metadata[1974]: May 10 00:47:14.077 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 May 10 00:47:14.078532 coreos-metadata[1974]: May 10 00:47:14.078 INFO Fetch successful May 10 00:47:14.078532 coreos-metadata[1974]: May 10 00:47:14.078 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 May 10 00:47:14.079195 coreos-metadata[1974]: May 10 00:47:14.079 INFO Fetch successful May 10 00:47:14.079251 coreos-metadata[1974]: May 10 00:47:14.079 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 May 10 00:47:14.079712 coreos-metadata[1974]: May 10 00:47:14.079 INFO Fetch successful May 10 00:47:14.079782 coreos-metadata[1974]: May 10 00:47:14.079 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 May 10 00:47:14.080286 coreos-metadata[1974]: May 10 00:47:14.080 INFO Fetch successful May 10 00:47:14.080341 coreos-metadata[1974]: May 10 00:47:14.080 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 May 10 00:47:14.081115 
coreos-metadata[1974]: May 10 00:47:14.081 INFO Fetch successful May 10 00:47:14.091708 systemd[1]: Finished coreos-metadata.service. May 10 00:47:15.134561 systemd[1]: Stopped kubelet.service. May 10 00:47:15.135189 systemd[1]: kubelet.service: Consumed 1.158s CPU time. May 10 00:47:15.137967 systemd[1]: Starting kubelet.service... May 10 00:47:15.172442 systemd[1]: Reloading. May 10 00:47:15.262655 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-05-10T00:47:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:15.262690 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-05-10T00:47:15Z" level=info msg="torcx already run" May 10 00:47:15.370251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:47:15.370274 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:15.388525 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:15.488468 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:47:15.488675 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:47:15.489037 systemd[1]: Stopped kubelet.service. May 10 00:47:15.491346 systemd[1]: Starting kubelet.service... May 10 00:47:15.933094 systemd[1]: Started kubelet.service. May 10 00:47:15.980464 kubelet[2088]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:47:15.980464 kubelet[2088]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 10 00:47:15.980464 kubelet[2088]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:47:15.980930 kubelet[2088]: I0510 00:47:15.980564 2088 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:47:16.324175 kubelet[2088]: I0510 00:47:16.323765 2088 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 10 00:47:16.324175 kubelet[2088]: I0510 00:47:16.323800 2088 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:47:16.324454 kubelet[2088]: I0510 00:47:16.324378 2088 server.go:954] "Client rotation is on, will bootstrap in background" May 10 00:47:16.397623 kubelet[2088]: I0510 00:47:16.397567 2088 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:47:16.411863 kubelet[2088]: E0510 00:47:16.411822 2088 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:47:16.412058 kubelet[2088]: I0510 00:47:16.412034 2088 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:47:16.414348 kubelet[2088]: I0510 00:47:16.414321 2088 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:47:16.415745 kubelet[2088]: I0510 00:47:16.415681 2088 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:47:16.415929 kubelet[2088]: I0510 00:47:16.415732 2088 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.20.182","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:47:16.415929 kubelet[2088]: I0510 00:47:16.415917 2088 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:47:16.415929 kubelet[2088]: I0510 00:47:16.415927 2088 container_manager_linux.go:304] "Creating device plugin manager" 
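
The container_manager_linux NodeConfig dump above embeds the kubelet's hard eviction thresholds in a dense JSON-like blob. Restated as a small sketch (values copied from the log; the 0.1 and 0.05 entries are fractions, i.e. 10% and 5%):

# Hard eviction thresholds from the NodeConfig dump in the log above.
thresholds = {
    "memory.available":   "100Mi",
    "nodefs.available":   "10%",
    "nodefs.inodesFree":  "5%",
    "imagefs.available":  "15%",
    "imagefs.inodesFree": "5%",
}
for signal, value in thresholds.items():
    print(f"evict pods when {signal} < {value}")
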
May 10 00:47:16.416174 kubelet[2088]: I0510 00:47:16.416041 2088 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:16.438109 kubelet[2088]: I0510 00:47:16.438068 2088 kubelet.go:446] "Attempting to sync node with API server" May 10 00:47:16.438109 kubelet[2088]: I0510 00:47:16.438103 2088 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:47:16.438281 kubelet[2088]: I0510 00:47:16.438126 2088 kubelet.go:352] "Adding apiserver pod source" May 10 00:47:16.438281 kubelet[2088]: I0510 00:47:16.438150 2088 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:47:16.439694 kubelet[2088]: E0510 00:47:16.439656 2088 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:16.439873 kubelet[2088]: E0510 00:47:16.439861 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:16.445099 kubelet[2088]: I0510 00:47:16.445063 2088 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:47:16.446427 kubelet[2088]: W0510 00:47:16.446398 2088 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 10 00:47:16.446575 kubelet[2088]: E0510 00:47:16.446436 2088 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 10 00:47:16.446575 kubelet[2088]: W0510 00:47:16.446557 2088 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.20.182" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 10 00:47:16.446768 kubelet[2088]: E0510 00:47:16.446577 2088 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.20.182\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 10 00:47:16.447156 kubelet[2088]: I0510 00:47:16.447120 2088 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:47:16.448637 kubelet[2088]: W0510 00:47:16.448608 2088 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:47:16.455623 kubelet[2088]: I0510 00:47:16.455579 2088 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 10 00:47:16.455733 kubelet[2088]: I0510 00:47:16.455642 2088 server.go:1287] "Started kubelet" May 10 00:47:16.459048 kubelet[2088]: I0510 00:47:16.458997 2088 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:47:16.460019 kubelet[2088]: I0510 00:47:16.459988 2088 server.go:490] "Adding debug handlers to kubelet server" May 10 00:47:16.464222 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 10 00:47:16.464315 kubelet[2088]: I0510 00:47:16.463973 2088 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:47:16.466474 kubelet[2088]: I0510 00:47:16.466412 2088 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:47:16.466757 kubelet[2088]: I0510 00:47:16.466735 2088 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:47:16.467391 kubelet[2088]: I0510 00:47:16.467212 2088 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:47:16.469370 kubelet[2088]: E0510 00:47:16.469096 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:16.469370 kubelet[2088]: I0510 00:47:16.469117 2088 volume_manager.go:297] "Starting Kubelet Volume Manager" May 10 00:47:16.469370 kubelet[2088]: I0510 00:47:16.469287 2088 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:47:16.469370 kubelet[2088]: I0510 00:47:16.469330 2088 reconciler.go:26] "Reconciler: start to sync state" May 10 00:47:16.479114 kubelet[2088]: I0510 00:47:16.479088 2088 factory.go:221] Registration of the containerd container factory successfully May 10 00:47:16.479114 kubelet[2088]: I0510 00:47:16.479107 2088 factory.go:221] Registration of the systemd container factory successfully May 10 00:47:16.479270 kubelet[2088]: I0510 00:47:16.479255 2088 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:47:16.485211 kubelet[2088]: E0510 00:47:16.485182 2088 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:47:16.489866 kubelet[2088]: I0510 00:47:16.489831 2088 cpu_manager.go:221] "Starting CPU manager" policy="none" May 10 00:47:16.489866 kubelet[2088]: I0510 00:47:16.489846 2088 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 10 00:47:16.489866 kubelet[2088]: I0510 00:47:16.489862 2088 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:16.492334 kubelet[2088]: E0510 00:47:16.491265 2088 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.20.182\" not found" node="172.31.20.182" May 10 00:47:16.493626 kubelet[2088]: I0510 00:47:16.493608 2088 policy_none.go:49] "None policy: Start" May 10 00:47:16.493755 kubelet[2088]: I0510 00:47:16.493744 2088 memory_manager.go:186] "Starting memorymanager" policy="None" May 10 00:47:16.493845 kubelet[2088]: I0510 00:47:16.493836 2088 state_mem.go:35] "Initializing new in-memory state store" May 10 00:47:16.516628 systemd[1]: Created slice kubepods.slice. May 10 00:47:16.525380 systemd[1]: Created slice kubepods-burstable.slice. May 10 00:47:16.528474 systemd[1]: Created slice kubepods-besteffort.slice. 
May 10 00:47:16.536996 kubelet[2088]: I0510 00:47:16.536969 2088 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:47:16.537856 kubelet[2088]: I0510 00:47:16.537837 2088 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:47:16.539125 kubelet[2088]: E0510 00:47:16.539105 2088 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 10 00:47:16.539274 kubelet[2088]: E0510 00:47:16.539261 2088 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.182\" not found" May 10 00:47:16.539459 kubelet[2088]: I0510 00:47:16.538039 2088 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:47:16.540015 kubelet[2088]: I0510 00:47:16.540003 2088 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:47:16.641271 kubelet[2088]: I0510 00:47:16.641189 2088 kubelet_node_status.go:76] "Attempting to register node" node="172.31.20.182" May 10 00:47:16.655088 kubelet[2088]: I0510 00:47:16.655056 2088 kubelet_node_status.go:79] "Successfully registered node" node="172.31.20.182" May 10 00:47:16.655088 kubelet[2088]: E0510 00:47:16.655093 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.20.182\": node \"172.31.20.182\" not found" May 10 00:47:16.661608 kubelet[2088]: I0510 00:47:16.661579 2088 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 10 00:47:16.662298 env[1734]: time="2025-05-10T00:47:16.662260123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:47:16.662764 kubelet[2088]: I0510 00:47:16.662749 2088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 10 00:47:16.671065 kubelet[2088]: E0510 00:47:16.671027 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:16.672798 kubelet[2088]: I0510 00:47:16.672752 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:47:16.674323 kubelet[2088]: I0510 00:47:16.674298 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:47:16.674448 kubelet[2088]: I0510 00:47:16.674333 2088 status_manager.go:227] "Starting to sync pod status with apiserver" May 10 00:47:16.674448 kubelet[2088]: I0510 00:47:16.674357 2088 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 10 00:47:16.674448 kubelet[2088]: I0510 00:47:16.674366 2088 kubelet.go:2388] "Starting kubelet main sync loop" May 10 00:47:16.674448 kubelet[2088]: E0510 00:47:16.674421 2088 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 10 00:47:16.771906 kubelet[2088]: E0510 00:47:16.771863 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:16.872970 kubelet[2088]: E0510 00:47:16.872916 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:16.965326 sudo[1970]: pam_unix(sudo:session): session closed for user root May 10 00:47:16.973170 kubelet[2088]: E0510 00:47:16.973114 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:16.990412 sshd[1967]: pam_unix(sshd:session): session closed for user core May 10 00:47:16.993281 systemd[1]: sshd@4-172.31.20.182:22-139.178.89.65:46620.service: Deactivated successfully. May 10 00:47:16.993965 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:47:16.994584 systemd-logind[1728]: Session 5 logged out. Waiting for processes to exit. May 10 00:47:16.995578 systemd-logind[1728]: Removed session 5. May 10 00:47:17.073700 kubelet[2088]: E0510 00:47:17.073663 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:17.174383 kubelet[2088]: E0510 00:47:17.174328 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:17.274689 kubelet[2088]: E0510 00:47:17.274578 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:17.326052 kubelet[2088]: I0510 00:47:17.326005 2088 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 10 00:47:17.326270 kubelet[2088]: W0510 00:47:17.326245 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 10 00:47:17.326346 kubelet[2088]: W0510 00:47:17.326296 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 10 00:47:17.326460 kubelet[2088]: W0510 00:47:17.326352 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 10 00:47:17.375444 kubelet[2088]: E0510 00:47:17.375397 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:17.441101 kubelet[2088]: E0510 00:47:17.441026 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:17.476178 kubelet[2088]: E0510 00:47:17.476113 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:17.511775 amazon-ssm-agent[1758]: 2025-05-10 
00:47:17 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. May 10 00:47:17.577159 kubelet[2088]: E0510 00:47:17.577032 2088 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.20.182\" not found" May 10 00:47:18.441591 kubelet[2088]: E0510 00:47:18.441549 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:18.442723 kubelet[2088]: I0510 00:47:18.442686 2088 apiserver.go:52] "Watching apiserver" May 10 00:47:18.449920 systemd[1]: Created slice kubepods-besteffort-podf22755c3_ba5d_4780_bed2_81fc9c91728e.slice. May 10 00:47:18.458837 systemd[1]: Created slice kubepods-burstable-poded3ae639_be96_484e_8d1f_b6b82b6253a1.slice. May 10 00:47:18.471476 kubelet[2088]: I0510 00:47:18.471439 2088 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:47:18.482847 kubelet[2088]: I0510 00:47:18.482790 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-xtables-lock\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.482847 kubelet[2088]: I0510 00:47:18.482839 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-kernel\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483055 kubelet[2088]: I0510 00:47:18.482863 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22755c3-ba5d-4780-bed2-81fc9c91728e-lib-modules\") pod \"kube-proxy-s6vml\" (UID: \"f22755c3-ba5d-4780-bed2-81fc9c91728e\") " pod="kube-system/kube-proxy-s6vml" May 10 00:47:18.483055 kubelet[2088]: I0510 00:47:18.482885 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzspc\" (UniqueName: \"kubernetes.io/projected/f22755c3-ba5d-4780-bed2-81fc9c91728e-kube-api-access-xzspc\") pod \"kube-proxy-s6vml\" (UID: \"f22755c3-ba5d-4780-bed2-81fc9c91728e\") " pod="kube-system/kube-proxy-s6vml" May 10 00:47:18.483055 kubelet[2088]: I0510 00:47:18.482909 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-run\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483055 kubelet[2088]: I0510 00:47:18.482929 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hostproc\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483055 kubelet[2088]: I0510 00:47:18.482949 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-config-path\") pod \"cilium-4htpg\" (UID: 
\"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.482972 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-net\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.482997 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-etc-cni-netd\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.483020 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28s69\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-kube-api-access-28s69\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.483044 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f22755c3-ba5d-4780-bed2-81fc9c91728e-kube-proxy\") pod \"kube-proxy-s6vml\" (UID: \"f22755c3-ba5d-4780-bed2-81fc9c91728e\") " pod="kube-system/kube-proxy-s6vml" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.483067 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed3ae639-be96-484e-8d1f-b6b82b6253a1-clustermesh-secrets\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483319 kubelet[2088]: I0510 00:47:18.483089 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hubble-tls\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483529 kubelet[2088]: I0510 00:47:18.483110 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22755c3-ba5d-4780-bed2-81fc9c91728e-xtables-lock\") pod \"kube-proxy-s6vml\" (UID: \"f22755c3-ba5d-4780-bed2-81fc9c91728e\") " pod="kube-system/kube-proxy-s6vml" May 10 00:47:18.483529 kubelet[2088]: I0510 00:47:18.483132 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-bpf-maps\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483529 kubelet[2088]: I0510 00:47:18.483176 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-cgroup\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483529 kubelet[2088]: I0510 00:47:18.483199 2088 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cni-path\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.483529 kubelet[2088]: I0510 00:47:18.483224 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-lib-modules\") pod \"cilium-4htpg\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " pod="kube-system/cilium-4htpg" May 10 00:47:18.584186 kubelet[2088]: I0510 00:47:18.584126 2088 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:47:18.757318 env[1734]: time="2025-05-10T00:47:18.757193453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6vml,Uid:f22755c3-ba5d-4780-bed2-81fc9c91728e,Namespace:kube-system,Attempt:0,}" May 10 00:47:18.765363 env[1734]: time="2025-05-10T00:47:18.764827259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4htpg,Uid:ed3ae639-be96-484e-8d1f-b6b82b6253a1,Namespace:kube-system,Attempt:0,}" May 10 00:47:19.248675 env[1734]: time="2025-05-10T00:47:19.248565914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.249953 env[1734]: time="2025-05-10T00:47:19.249917080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.253617 env[1734]: time="2025-05-10T00:47:19.253582196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.254469 env[1734]: time="2025-05-10T00:47:19.254435625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.256020 env[1734]: time="2025-05-10T00:47:19.255997280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.257437 env[1734]: time="2025-05-10T00:47:19.257413197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.259969 env[1734]: time="2025-05-10T00:47:19.259940593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.261599 env[1734]: time="2025-05-10T00:47:19.261570532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.283598 env[1734]: time="2025-05-10T00:47:19.282746517Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:19.283598 env[1734]: time="2025-05-10T00:47:19.282787180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:19.283598 env[1734]: time="2025-05-10T00:47:19.282820462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:19.283598 env[1734]: time="2025-05-10T00:47:19.283205106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3bc74e5404e934f212ddc90d78beb046c209f9d8d9086797a4996bde1ad8977 pid=2142 runtime=io.containerd.runc.v2 May 10 00:47:19.292135 env[1734]: time="2025-05-10T00:47:19.292080179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:19.292326 env[1734]: time="2025-05-10T00:47:19.292304775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:19.292413 env[1734]: time="2025-05-10T00:47:19.292395438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:19.292698 env[1734]: time="2025-05-10T00:47:19.292643814Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5 pid=2160 runtime=io.containerd.runc.v2 May 10 00:47:19.302502 systemd[1]: Started cri-containerd-c3bc74e5404e934f212ddc90d78beb046c209f9d8d9086797a4996bde1ad8977.scope. May 10 00:47:19.315567 systemd[1]: Started cri-containerd-9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5.scope. May 10 00:47:19.342715 env[1734]: time="2025-05-10T00:47:19.342677238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4htpg,Uid:ed3ae639-be96-484e-8d1f-b6b82b6253a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\"" May 10 00:47:19.345590 env[1734]: time="2025-05-10T00:47:19.345559314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:47:19.345886 env[1734]: time="2025-05-10T00:47:19.345818049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6vml,Uid:f22755c3-ba5d-4780-bed2-81fc9c91728e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3bc74e5404e934f212ddc90d78beb046c209f9d8d9086797a4996bde1ad8977\"" May 10 00:47:19.442395 kubelet[2088]: E0510 00:47:19.442355 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:19.593183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467601061.mount: Deactivated successfully. 
May 10 00:47:20.443160 kubelet[2088]: E0510 00:47:20.443096 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:21.444153 kubelet[2088]: E0510 00:47:21.444100 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:22.445005 kubelet[2088]: E0510 00:47:22.444927 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:23.445986 kubelet[2088]: E0510 00:47:23.445909 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:24.446985 kubelet[2088]: E0510 00:47:24.446939 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:25.447680 kubelet[2088]: E0510 00:47:25.447607 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:26.448091 kubelet[2088]: E0510 00:47:26.447773 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:27.448606 kubelet[2088]: E0510 00:47:27.448545 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:28.449071 kubelet[2088]: E0510 00:47:28.449018 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:28.976779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722252582.mount: Deactivated successfully. May 10 00:47:29.449774 kubelet[2088]: E0510 00:47:29.449476 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:30.449985 kubelet[2088]: E0510 00:47:30.449928 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:31.450246 kubelet[2088]: E0510 00:47:31.450206 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:31.849662 env[1734]: time="2025-05-10T00:47:31.849392194Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.852056 env[1734]: time="2025-05-10T00:47:31.852015198Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.854160 env[1734]: time="2025-05-10T00:47:31.854118980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.854713 env[1734]: time="2025-05-10T00:47:31.854685756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:47:31.856980 env[1734]: 
time="2025-05-10T00:47:31.856691896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 10 00:47:31.857347 env[1734]: time="2025-05-10T00:47:31.857296062Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:47:31.872546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796736396.mount: Deactivated successfully. May 10 00:47:31.880776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302473916.mount: Deactivated successfully. May 10 00:47:31.888042 env[1734]: time="2025-05-10T00:47:31.887989786Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\"" May 10 00:47:31.888987 env[1734]: time="2025-05-10T00:47:31.888951092Z" level=info msg="StartContainer for \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\"" May 10 00:47:31.910927 systemd[1]: Started cri-containerd-cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1.scope. May 10 00:47:31.951933 env[1734]: time="2025-05-10T00:47:31.951850423Z" level=info msg="StartContainer for \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\" returns successfully" May 10 00:47:31.955636 systemd[1]: cri-containerd-cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1.scope: Deactivated successfully. May 10 00:47:32.153258 env[1734]: time="2025-05-10T00:47:32.152506449Z" level=info msg="shim disconnected" id=cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1 May 10 00:47:32.153258 env[1734]: time="2025-05-10T00:47:32.152552971Z" level=warning msg="cleaning up after shim disconnected" id=cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1 namespace=k8s.io May 10 00:47:32.153258 env[1734]: time="2025-05-10T00:47:32.152563497Z" level=info msg="cleaning up dead shim" May 10 00:47:32.172916 env[1734]: time="2025-05-10T00:47:32.172868559Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2273 runtime=io.containerd.runc.v2\n" May 10 00:47:32.451003 kubelet[2088]: E0510 00:47:32.450883 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:32.715362 env[1734]: time="2025-05-10T00:47:32.715257290Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:47:32.732391 env[1734]: time="2025-05-10T00:47:32.732342240Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\"" May 10 00:47:32.732924 env[1734]: time="2025-05-10T00:47:32.732897211Z" level=info msg="StartContainer for \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\"" May 10 00:47:32.754334 systemd[1]: Started cri-containerd-5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581.scope. May 10 00:47:32.813356 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 10 00:47:32.813560 systemd[1]: Stopped systemd-sysctl.service. May 10 00:47:32.814608 systemd[1]: Stopping systemd-sysctl.service... May 10 00:47:32.817558 systemd[1]: Starting systemd-sysctl.service... May 10 00:47:32.817825 systemd[1]: cri-containerd-5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581.scope: Deactivated successfully. May 10 00:47:32.828972 systemd[1]: Finished systemd-sysctl.service. May 10 00:47:32.830218 env[1734]: time="2025-05-10T00:47:32.830185118Z" level=info msg="StartContainer for \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\" returns successfully" May 10 00:47:32.868720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1-rootfs.mount: Deactivated successfully. May 10 00:47:32.904360 env[1734]: time="2025-05-10T00:47:32.904305360Z" level=info msg="shim disconnected" id=5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581 May 10 00:47:32.904360 env[1734]: time="2025-05-10T00:47:32.904361954Z" level=warning msg="cleaning up after shim disconnected" id=5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581 namespace=k8s.io May 10 00:47:32.904788 env[1734]: time="2025-05-10T00:47:32.904375063Z" level=info msg="cleaning up dead shim" May 10 00:47:32.959637 env[1734]: time="2025-05-10T00:47:32.959590675Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2339 runtime=io.containerd.runc.v2\n" May 10 00:47:33.262052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579871464.mount: Deactivated successfully. May 10 00:47:33.451226 kubelet[2088]: E0510 00:47:33.451189 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:33.717428 env[1734]: time="2025-05-10T00:47:33.717270012Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:47:33.738148 env[1734]: time="2025-05-10T00:47:33.738088263Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\"" May 10 00:47:33.741584 env[1734]: time="2025-05-10T00:47:33.739566479Z" level=info msg="StartContainer for \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\"" May 10 00:47:33.799332 systemd[1]: Started cri-containerd-f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb.scope. May 10 00:47:33.848888 systemd[1]: cri-containerd-f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb.scope: Deactivated successfully. May 10 00:47:33.850807 env[1734]: time="2025-05-10T00:47:33.850768960Z" level=info msg="StartContainer for \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\" returns successfully" May 10 00:47:33.872842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb-rootfs.mount: Deactivated successfully. 
May 10 00:47:34.037080 env[1734]: time="2025-05-10T00:47:34.036640672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:34.050983 env[1734]: time="2025-05-10T00:47:34.050925660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:34.054800 env[1734]: time="2025-05-10T00:47:34.054763795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:34.056572 env[1734]: time="2025-05-10T00:47:34.056537096Z" level=info msg="shim disconnected" id=f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb May 10 00:47:34.056722 env[1734]: time="2025-05-10T00:47:34.056706636Z" level=warning msg="cleaning up after shim disconnected" id=f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb namespace=k8s.io May 10 00:47:34.056774 env[1734]: time="2025-05-10T00:47:34.056765357Z" level=info msg="cleaning up dead shim" May 10 00:47:34.056932 env[1734]: time="2025-05-10T00:47:34.056895912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:34.057112 env[1734]: time="2025-05-10T00:47:34.057089589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 10 00:47:34.059327 env[1734]: time="2025-05-10T00:47:34.059295131Z" level=info msg="CreateContainer within sandbox \"c3bc74e5404e934f212ddc90d78beb046c209f9d8d9086797a4996bde1ad8977\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:47:34.067237 env[1734]: time="2025-05-10T00:47:34.067192335Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n" May 10 00:47:34.071800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669659723.mount: Deactivated successfully. May 10 00:47:34.079961 env[1734]: time="2025-05-10T00:47:34.079891954Z" level=info msg="CreateContainer within sandbox \"c3bc74e5404e934f212ddc90d78beb046c209f9d8d9086797a4996bde1ad8977\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"739b8a785211a97c736d956742e189f4d07084b19c17a8100ad248ba7b4be4bf\"" May 10 00:47:34.080595 env[1734]: time="2025-05-10T00:47:34.080552849Z" level=info msg="StartContainer for \"739b8a785211a97c736d956742e189f4d07084b19c17a8100ad248ba7b4be4bf\"" May 10 00:47:34.097420 systemd[1]: Started cri-containerd-739b8a785211a97c736d956742e189f4d07084b19c17a8100ad248ba7b4be4bf.scope. 
May 10 00:47:34.130900 env[1734]: time="2025-05-10T00:47:34.130813429Z" level=info msg="StartContainer for \"739b8a785211a97c736d956742e189f4d07084b19c17a8100ad248ba7b4be4bf\" returns successfully" May 10 00:47:34.451891 kubelet[2088]: E0510 00:47:34.451747 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:34.721712 env[1734]: time="2025-05-10T00:47:34.721597173Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:47:34.733954 env[1734]: time="2025-05-10T00:47:34.733905405Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\"" May 10 00:47:34.734716 env[1734]: time="2025-05-10T00:47:34.734689543Z" level=info msg="StartContainer for \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\"" May 10 00:47:34.751029 systemd[1]: Started cri-containerd-ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9.scope. May 10 00:47:34.781089 systemd[1]: cri-containerd-ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9.scope: Deactivated successfully. May 10 00:47:34.784423 env[1734]: time="2025-05-10T00:47:34.783940287Z" level=info msg="StartContainer for \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\" returns successfully" May 10 00:47:34.814001 env[1734]: time="2025-05-10T00:47:34.813947580Z" level=info msg="shim disconnected" id=ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9 May 10 00:47:34.814001 env[1734]: time="2025-05-10T00:47:34.813998121Z" level=warning msg="cleaning up after shim disconnected" id=ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9 namespace=k8s.io May 10 00:47:34.814001 env[1734]: time="2025-05-10T00:47:34.814008225Z" level=info msg="cleaning up dead shim" May 10 00:47:34.822079 env[1734]: time="2025-05-10T00:47:34.821972424Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2622 runtime=io.containerd.runc.v2\n" May 10 00:47:35.452652 kubelet[2088]: E0510 00:47:35.452612 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:35.729529 env[1734]: time="2025-05-10T00:47:35.729426512Z" level=info msg="CreateContainer within sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:47:35.745480 kubelet[2088]: I0510 00:47:35.745418 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6vml" podStartSLOduration=5.034402383 podStartE2EDuration="19.745400861s" podCreationTimestamp="2025-05-10 00:47:16 +0000 UTC" firstStartedPulling="2025-05-10 00:47:19.347031331 +0000 UTC m=+3.410520094" lastFinishedPulling="2025-05-10 00:47:34.058029809 +0000 UTC m=+18.121518572" observedRunningTime="2025-05-10 00:47:34.752326362 +0000 UTC m=+18.815815147" watchObservedRunningTime="2025-05-10 00:47:35.745400861 +0000 UTC m=+19.808889646" May 10 00:47:35.748882 env[1734]: time="2025-05-10T00:47:35.748838505Z" level=info msg="CreateContainer within sandbox 
\"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\"" May 10 00:47:35.749445 env[1734]: time="2025-05-10T00:47:35.749408268Z" level=info msg="StartContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\"" May 10 00:47:35.784262 systemd[1]: Started cri-containerd-5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c.scope. May 10 00:47:35.828100 env[1734]: time="2025-05-10T00:47:35.827998690Z" level=info msg="StartContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" returns successfully" May 10 00:47:35.868398 systemd[1]: run-containerd-runc-k8s.io-5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c-runc.JzuhK7.mount: Deactivated successfully. May 10 00:47:35.938080 kubelet[2088]: I0510 00:47:35.936943 2088 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 10 00:47:36.289178 kernel: Initializing XFRM netlink socket May 10 00:47:36.439200 kubelet[2088]: E0510 00:47:36.439121 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:36.453053 kubelet[2088]: E0510 00:47:36.452984 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:37.453404 kubelet[2088]: E0510 00:47:37.453347 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:37.934850 systemd-networkd[1463]: cilium_host: Link UP May 10 00:47:37.936958 systemd-networkd[1463]: cilium_net: Link UP May 10 00:47:37.936968 systemd-networkd[1463]: cilium_net: Gained carrier May 10 00:47:37.937165 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:47:37.937183 systemd-networkd[1463]: cilium_host: Gained carrier May 10 00:47:37.937821 (udev-worker)[2480]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:37.939723 (udev-worker)[2764]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:38.042186 (udev-worker)[2786]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:38.053665 systemd-networkd[1463]: cilium_vxlan: Link UP May 10 00:47:38.053678 systemd-networkd[1463]: cilium_vxlan: Gained carrier May 10 00:47:38.292170 kernel: NET: Registered PF_ALG protocol family May 10 00:47:38.454053 kubelet[2088]: E0510 00:47:38.453941 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:38.664451 systemd-networkd[1463]: cilium_host: Gained IPv6LL May 10 00:47:38.918334 systemd-networkd[1463]: cilium_net: Gained IPv6LL May 10 00:47:38.962176 systemd-networkd[1463]: lxc_health: Link UP May 10 00:47:38.968331 (udev-worker)[2785]: Network interface NamePolicy= disabled on kernel command line. 
May 10 00:47:38.970972 systemd-networkd[1463]: lxc_health: Gained carrier May 10 00:47:38.971161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:47:39.174371 systemd-networkd[1463]: cilium_vxlan: Gained IPv6LL May 10 00:47:39.455062 kubelet[2088]: E0510 00:47:39.454939 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:39.731216 kubelet[2088]: I0510 00:47:39.728940 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4htpg" podStartSLOduration=11.218130105 podStartE2EDuration="23.728917372s" podCreationTimestamp="2025-05-10 00:47:16 +0000 UTC" firstStartedPulling="2025-05-10 00:47:19.344997478 +0000 UTC m=+3.408486240" lastFinishedPulling="2025-05-10 00:47:31.855784744 +0000 UTC m=+15.919273507" observedRunningTime="2025-05-10 00:47:36.751346682 +0000 UTC m=+20.814835461" watchObservedRunningTime="2025-05-10 00:47:39.728917372 +0000 UTC m=+23.792406155" May 10 00:47:39.740443 systemd[1]: Created slice kubepods-besteffort-pod92245fc9_ee76_4c36_8aaa_e7db11436682.slice. May 10 00:47:39.830693 kubelet[2088]: I0510 00:47:39.830648 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6vz\" (UniqueName: \"kubernetes.io/projected/92245fc9-ee76-4c36-8aaa-e7db11436682-kube-api-access-4h6vz\") pod \"nginx-deployment-7fcdb87857-nnp85\" (UID: \"92245fc9-ee76-4c36-8aaa-e7db11436682\") " pod="default/nginx-deployment-7fcdb87857-nnp85" May 10 00:47:40.047119 env[1734]: time="2025-05-10T00:47:40.046730282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nnp85,Uid:92245fc9-ee76-4c36-8aaa-e7db11436682,Namespace:default,Attempt:0,}" May 10 00:47:40.133700 systemd-networkd[1463]: lxc4dffcc843684: Link UP May 10 00:47:40.141235 kernel: eth0: renamed from tmp7cd7e May 10 00:47:40.150883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:47:40.151009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4dffcc843684: link becomes ready May 10 00:47:40.151226 systemd-networkd[1463]: lxc4dffcc843684: Gained carrier May 10 00:47:40.295303 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 10 00:47:40.456285 kubelet[2088]: E0510 00:47:40.455722 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:40.520600 systemd-networkd[1463]: lxc_health: Gained IPv6LL May 10 00:47:41.286328 systemd-networkd[1463]: lxc4dffcc843684: Gained IPv6LL May 10 00:47:41.456297 kubelet[2088]: E0510 00:47:41.456257 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:42.457501 kubelet[2088]: E0510 00:47:42.457461 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:43.458616 kubelet[2088]: E0510 00:47:43.458563 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:43.705771 env[1734]: time="2025-05-10T00:47:43.705576359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:43.705771 env[1734]: time="2025-05-10T00:47:43.705611454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:43.705771 env[1734]: time="2025-05-10T00:47:43.705622425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:43.709760 env[1734]: time="2025-05-10T00:47:43.705811514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd7e2a4996d4b0103fb70eceb330ea54e550947d2c040ff8a8abde9ad501633 pid=3143 runtime=io.containerd.runc.v2 May 10 00:47:43.721580 systemd[1]: Started cri-containerd-7cd7e2a4996d4b0103fb70eceb330ea54e550947d2c040ff8a8abde9ad501633.scope. May 10 00:47:43.764416 env[1734]: time="2025-05-10T00:47:43.764371176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nnp85,Uid:92245fc9-ee76-4c36-8aaa-e7db11436682,Namespace:default,Attempt:0,} returns sandbox id \"7cd7e2a4996d4b0103fb70eceb330ea54e550947d2c040ff8a8abde9ad501633\"" May 10 00:47:43.765685 env[1734]: time="2025-05-10T00:47:43.765595590Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 10 00:47:44.458905 kubelet[2088]: E0510 00:47:44.458860 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:45.459860 kubelet[2088]: E0510 00:47:45.459814 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:46.295251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229001681.mount: Deactivated successfully. May 10 00:47:46.460430 kubelet[2088]: E0510 00:47:46.460389 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:47.461526 kubelet[2088]: E0510 00:47:47.461486 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:47.541608 amazon-ssm-agent[1758]: 2025-05-10 00:47:47 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated May 10 00:47:47.861978 env[1734]: time="2025-05-10T00:47:47.861655522Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:47.864394 env[1734]: time="2025-05-10T00:47:47.864354890Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:47.866900 env[1734]: time="2025-05-10T00:47:47.866858639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:47.868715 env[1734]: time="2025-05-10T00:47:47.868677839Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:47.869518 env[1734]: time="2025-05-10T00:47:47.869480146Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 10 00:47:47.872319 env[1734]: time="2025-05-10T00:47:47.872282564Z" level=info 
msg="CreateContainer within sandbox \"7cd7e2a4996d4b0103fb70eceb330ea54e550947d2c040ff8a8abde9ad501633\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 10 00:47:47.893455 env[1734]: time="2025-05-10T00:47:47.893400198Z" level=info msg="CreateContainer within sandbox \"7cd7e2a4996d4b0103fb70eceb330ea54e550947d2c040ff8a8abde9ad501633\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d4a6f58e8d7d69ac403100d6422914f5a482056d0473ad826ea09d6c90b43d51\"" May 10 00:47:47.894179 env[1734]: time="2025-05-10T00:47:47.894148921Z" level=info msg="StartContainer for \"d4a6f58e8d7d69ac403100d6422914f5a482056d0473ad826ea09d6c90b43d51\"" May 10 00:47:47.913459 systemd[1]: Started cri-containerd-d4a6f58e8d7d69ac403100d6422914f5a482056d0473ad826ea09d6c90b43d51.scope. May 10 00:47:47.952426 env[1734]: time="2025-05-10T00:47:47.951307286Z" level=info msg="StartContainer for \"d4a6f58e8d7d69ac403100d6422914f5a482056d0473ad826ea09d6c90b43d51\" returns successfully" May 10 00:47:48.461670 kubelet[2088]: E0510 00:47:48.461630 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:48.761387 kubelet[2088]: I0510 00:47:48.761227 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-nnp85" podStartSLOduration=5.655329381 podStartE2EDuration="9.761209631s" podCreationTimestamp="2025-05-10 00:47:39 +0000 UTC" firstStartedPulling="2025-05-10 00:47:43.765132845 +0000 UTC m=+27.828621608" lastFinishedPulling="2025-05-10 00:47:47.871013083 +0000 UTC m=+31.934501858" observedRunningTime="2025-05-10 00:47:48.760809267 +0000 UTC m=+32.824298050" watchObservedRunningTime="2025-05-10 00:47:48.761209631 +0000 UTC m=+32.824698416" May 10 00:47:49.462700 kubelet[2088]: E0510 00:47:49.462644 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:50.463636 kubelet[2088]: E0510 00:47:50.463582 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:51.464710 kubelet[2088]: E0510 00:47:51.464666 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:52.465650 kubelet[2088]: E0510 00:47:52.465602 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:53.465828 kubelet[2088]: E0510 00:47:53.465784 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:54.466382 kubelet[2088]: E0510 00:47:54.466325 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:54.911773 update_engine[1730]: I0510 00:47:54.911430 1730 update_attempter.cc:509] Updating boot flags... May 10 00:47:55.466786 kubelet[2088]: E0510 00:47:55.466743 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:56.298237 systemd[1]: Created slice kubepods-besteffort-pod16aa1706_8daf_48e7_8d4c_57e1d0d1a520.slice. 
May 10 00:47:56.339994 kubelet[2088]: I0510 00:47:56.339942 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8s5\" (UniqueName: \"kubernetes.io/projected/16aa1706-8daf-48e7-8d4c-57e1d0d1a520-kube-api-access-qf8s5\") pod \"nfs-server-provisioner-0\" (UID: \"16aa1706-8daf-48e7-8d4c-57e1d0d1a520\") " pod="default/nfs-server-provisioner-0" May 10 00:47:56.339994 kubelet[2088]: I0510 00:47:56.339988 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/16aa1706-8daf-48e7-8d4c-57e1d0d1a520-data\") pod \"nfs-server-provisioner-0\" (UID: \"16aa1706-8daf-48e7-8d4c-57e1d0d1a520\") " pod="default/nfs-server-provisioner-0" May 10 00:47:56.438724 kubelet[2088]: E0510 00:47:56.438662 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:56.467370 kubelet[2088]: E0510 00:47:56.467330 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:56.602033 env[1734]: time="2025-05-10T00:47:56.601642127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:16aa1706-8daf-48e7-8d4c-57e1d0d1a520,Namespace:default,Attempt:0,}" May 10 00:47:56.710768 (udev-worker)[3247]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:56.711572 systemd-networkd[1463]: lxc7cda3550bb51: Link UP May 10 00:47:56.717031 (udev-worker)[3252]: Network interface NamePolicy= disabled on kernel command line. May 10 00:47:56.718189 kernel: eth0: renamed from tmp876c1 May 10 00:47:56.722860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:47:56.722959 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7cda3550bb51: link becomes ready May 10 00:47:56.723164 systemd-networkd[1463]: lxc7cda3550bb51: Gained carrier May 10 00:47:56.967471 env[1734]: time="2025-05-10T00:47:56.967392488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:56.967471 env[1734]: time="2025-05-10T00:47:56.967436011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:56.967696 env[1734]: time="2025-05-10T00:47:56.967452422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:56.967953 env[1734]: time="2025-05-10T00:47:56.967910507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f pid=3529 runtime=io.containerd.runc.v2 May 10 00:47:56.990773 systemd[1]: run-containerd-runc-k8s.io-876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f-runc.VtJmNl.mount: Deactivated successfully. May 10 00:47:56.998446 systemd[1]: Started cri-containerd-876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f.scope. 
May 10 00:47:57.042096 env[1734]: time="2025-05-10T00:47:57.042045814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:16aa1706-8daf-48e7-8d4c-57e1d0d1a520,Namespace:default,Attempt:0,} returns sandbox id \"876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f\"" May 10 00:47:57.044015 env[1734]: time="2025-05-10T00:47:57.043982219Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 10 00:47:57.468480 kubelet[2088]: E0510 00:47:57.468442 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:57.798380 systemd-networkd[1463]: lxc7cda3550bb51: Gained IPv6LL May 10 00:47:58.469410 kubelet[2088]: E0510 00:47:58.469358 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:59.469563 kubelet[2088]: E0510 00:47:59.469498 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:47:59.917795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786564644.mount: Deactivated successfully. May 10 00:48:00.469622 kubelet[2088]: E0510 00:48:00.469568 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:01.470625 kubelet[2088]: E0510 00:48:01.470579 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:02.211672 env[1734]: time="2025-05-10T00:48:02.211614270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:02.214427 env[1734]: time="2025-05-10T00:48:02.214378891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:02.217184 env[1734]: time="2025-05-10T00:48:02.217119179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:02.220858 env[1734]: time="2025-05-10T00:48:02.220814190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:02.221812 env[1734]: time="2025-05-10T00:48:02.221762063Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 10 00:48:02.240531 env[1734]: time="2025-05-10T00:48:02.240480080Z" level=info msg="CreateContainer within sandbox \"876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 10 00:48:02.253697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782111709.mount: Deactivated successfully. May 10 00:48:02.263290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797628076.mount: Deactivated successfully. 
May 10 00:48:02.267331 env[1734]: time="2025-05-10T00:48:02.267287368Z" level=info msg="CreateContainer within sandbox \"876c1fb211eb087111b6e5cd8c8728f717d7c56f7cf3fef272266fc2ce7a583f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f7245ce721ca81c81955269b72db1b4ac2eaa2c828e9101637e81405b1b8bcd8\"" May 10 00:48:02.268470 env[1734]: time="2025-05-10T00:48:02.268424799Z" level=info msg="StartContainer for \"f7245ce721ca81c81955269b72db1b4ac2eaa2c828e9101637e81405b1b8bcd8\"" May 10 00:48:02.294357 systemd[1]: Started cri-containerd-f7245ce721ca81c81955269b72db1b4ac2eaa2c828e9101637e81405b1b8bcd8.scope. May 10 00:48:02.362169 env[1734]: time="2025-05-10T00:48:02.361764953Z" level=info msg="StartContainer for \"f7245ce721ca81c81955269b72db1b4ac2eaa2c828e9101637e81405b1b8bcd8\" returns successfully" May 10 00:48:02.471039 kubelet[2088]: E0510 00:48:02.470898 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:02.809391 kubelet[2088]: I0510 00:48:02.809233 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6293627339999999 podStartE2EDuration="6.809216536s" podCreationTimestamp="2025-05-10 00:47:56 +0000 UTC" firstStartedPulling="2025-05-10 00:47:57.043648306 +0000 UTC m=+41.107137073" lastFinishedPulling="2025-05-10 00:48:02.223502105 +0000 UTC m=+46.286990875" observedRunningTime="2025-05-10 00:48:02.808799613 +0000 UTC m=+46.872288399" watchObservedRunningTime="2025-05-10 00:48:02.809216536 +0000 UTC m=+46.872705321" May 10 00:48:03.471196 kubelet[2088]: E0510 00:48:03.471127 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:04.471958 kubelet[2088]: E0510 00:48:04.471919 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:05.473017 kubelet[2088]: E0510 00:48:05.472972 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:06.473624 kubelet[2088]: E0510 00:48:06.473582 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:07.474593 kubelet[2088]: E0510 00:48:07.474497 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:08.475240 kubelet[2088]: E0510 00:48:08.475197 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:09.475878 kubelet[2088]: E0510 00:48:09.475829 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:10.476570 kubelet[2088]: E0510 00:48:10.476526 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:11.477518 kubelet[2088]: E0510 00:48:11.477460 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:12.462034 systemd[1]: Created slice kubepods-besteffort-pod3ad65874_ffed_4c1f_a007_19531796735c.slice. 
May 10 00:48:12.477910 kubelet[2088]: E0510 00:48:12.477869 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:12.548723 kubelet[2088]: I0510 00:48:12.548682 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cbdbf7a-d445-4bf3-8b81-c8dc5ad3c98d\" (UniqueName: \"kubernetes.io/nfs/3ad65874-ffed-4c1f-a007-19531796735c-pvc-9cbdbf7a-d445-4bf3-8b81-c8dc5ad3c98d\") pod \"test-pod-1\" (UID: \"3ad65874-ffed-4c1f-a007-19531796735c\") " pod="default/test-pod-1" May 10 00:48:12.548723 kubelet[2088]: I0510 00:48:12.548729 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tb9b\" (UniqueName: \"kubernetes.io/projected/3ad65874-ffed-4c1f-a007-19531796735c-kube-api-access-7tb9b\") pod \"test-pod-1\" (UID: \"3ad65874-ffed-4c1f-a007-19531796735c\") " pod="default/test-pod-1" May 10 00:48:12.717180 kernel: FS-Cache: Loaded May 10 00:48:12.786987 kernel: RPC: Registered named UNIX socket transport module. May 10 00:48:12.787153 kernel: RPC: Registered udp transport module. May 10 00:48:12.787186 kernel: RPC: Registered tcp transport module. May 10 00:48:12.787215 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 10 00:48:12.855435 kernel: FS-Cache: Netfs 'nfs' registered for caching May 10 00:48:13.028728 kernel: NFS: Registering the id_resolver key type May 10 00:48:13.028870 kernel: Key type id_resolver registered May 10 00:48:13.028908 kernel: Key type id_legacy registered May 10 00:48:13.106568 nfsidmap[3653]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' May 10 00:48:13.110695 nfsidmap[3654]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' May 10 00:48:13.367445 env[1734]: time="2025-05-10T00:48:13.367112610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3ad65874-ffed-4c1f-a007-19531796735c,Namespace:default,Attempt:0,}" May 10 00:48:13.397449 (udev-worker)[3646]: Network interface NamePolicy= disabled on kernel command line. May 10 00:48:13.398258 systemd-networkd[1463]: lxc2ca29df72185: Link UP May 10 00:48:13.402024 (udev-worker)[3650]: Network interface NamePolicy= disabled on kernel command line. May 10 00:48:13.403274 kernel: eth0: renamed from tmp56e16 May 10 00:48:13.407841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:48:13.407941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2ca29df72185: link becomes ready May 10 00:48:13.408096 systemd-networkd[1463]: lxc2ca29df72185: Gained carrier May 10 00:48:13.478720 kubelet[2088]: E0510 00:48:13.478666 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:13.559498 env[1734]: time="2025-05-10T00:48:13.559409296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:48:13.559498 env[1734]: time="2025-05-10T00:48:13.559455607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:48:13.559677 env[1734]: time="2025-05-10T00:48:13.559485131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:48:13.559882 env[1734]: time="2025-05-10T00:48:13.559841294Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56e16dd9ac00d4078c3f5278c20a974a1d91e7ed866b724f6d6f5eb5f1f5ff33 pid=3680 runtime=io.containerd.runc.v2 May 10 00:48:13.575548 systemd[1]: Started cri-containerd-56e16dd9ac00d4078c3f5278c20a974a1d91e7ed866b724f6d6f5eb5f1f5ff33.scope. May 10 00:48:13.627450 env[1734]: time="2025-05-10T00:48:13.627352190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3ad65874-ffed-4c1f-a007-19531796735c,Namespace:default,Attempt:0,} returns sandbox id \"56e16dd9ac00d4078c3f5278c20a974a1d91e7ed866b724f6d6f5eb5f1f5ff33\"" May 10 00:48:13.629022 env[1734]: time="2025-05-10T00:48:13.628990427Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 10 00:48:13.906967 env[1734]: time="2025-05-10T00:48:13.906856353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:13.908809 env[1734]: time="2025-05-10T00:48:13.908767485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:13.910907 env[1734]: time="2025-05-10T00:48:13.910877909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:13.912482 env[1734]: time="2025-05-10T00:48:13.912452366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:13.913165 env[1734]: time="2025-05-10T00:48:13.913119220Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 10 00:48:13.915729 env[1734]: time="2025-05-10T00:48:13.915694584Z" level=info msg="CreateContainer within sandbox \"56e16dd9ac00d4078c3f5278c20a974a1d91e7ed866b724f6d6f5eb5f1f5ff33\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 10 00:48:13.933601 env[1734]: time="2025-05-10T00:48:13.933557650Z" level=info msg="CreateContainer within sandbox \"56e16dd9ac00d4078c3f5278c20a974a1d91e7ed866b724f6d6f5eb5f1f5ff33\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b21fa3afc694695d7ebb1268cae566f74e5e922f0e436e9eaba7b7dc1d60ecdb\"" May 10 00:48:13.934412 env[1734]: time="2025-05-10T00:48:13.934372695Z" level=info msg="StartContainer for \"b21fa3afc694695d7ebb1268cae566f74e5e922f0e436e9eaba7b7dc1d60ecdb\"" May 10 00:48:13.962370 systemd[1]: Started cri-containerd-b21fa3afc694695d7ebb1268cae566f74e5e922f0e436e9eaba7b7dc1d60ecdb.scope. 
May 10 00:48:14.007091 env[1734]: time="2025-05-10T00:48:14.007026961Z" level=info msg="StartContainer for \"b21fa3afc694695d7ebb1268cae566f74e5e922f0e436e9eaba7b7dc1d60ecdb\" returns successfully" May 10 00:48:14.479166 kubelet[2088]: E0510 00:48:14.479092 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:14.828500 kubelet[2088]: I0510 00:48:14.828273 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.54272207 podStartE2EDuration="18.828256213s" podCreationTimestamp="2025-05-10 00:47:56 +0000 UTC" firstStartedPulling="2025-05-10 00:48:13.628644468 +0000 UTC m=+57.692133231" lastFinishedPulling="2025-05-10 00:48:13.914178608 +0000 UTC m=+57.977667374" observedRunningTime="2025-05-10 00:48:14.828250766 +0000 UTC m=+58.891739549" watchObservedRunningTime="2025-05-10 00:48:14.828256213 +0000 UTC m=+58.891744980" May 10 00:48:15.270371 systemd-networkd[1463]: lxc2ca29df72185: Gained IPv6LL May 10 00:48:15.480060 kubelet[2088]: E0510 00:48:15.480002 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:16.438927 kubelet[2088]: E0510 00:48:16.438883 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:16.481158 kubelet[2088]: E0510 00:48:16.481098 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:17.481306 kubelet[2088]: E0510 00:48:17.481264 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:18.481713 kubelet[2088]: E0510 00:48:18.481658 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:19.482306 kubelet[2088]: E0510 00:48:19.482246 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:20.482771 kubelet[2088]: E0510 00:48:20.482729 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:21.307529 env[1734]: time="2025-05-10T00:48:21.307388887Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:48:21.350256 env[1734]: time="2025-05-10T00:48:21.350214074Z" level=info msg="StopContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" with timeout 2 (s)" May 10 00:48:21.350568 env[1734]: time="2025-05-10T00:48:21.350440445Z" level=info msg="Stop container \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" with signal terminated" May 10 00:48:21.357172 systemd-networkd[1463]: lxc_health: Link DOWN May 10 00:48:21.357180 systemd-networkd[1463]: lxc_health: Lost carrier May 10 00:48:21.376586 systemd[1]: cri-containerd-5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c.scope: Deactivated successfully. May 10 00:48:21.376845 systemd[1]: cri-containerd-5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c.scope: Consumed 6.893s CPU time. 
May 10 00:48:21.397963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c-rootfs.mount: Deactivated successfully. May 10 00:48:21.414411 env[1734]: time="2025-05-10T00:48:21.414354056Z" level=info msg="shim disconnected" id=5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c May 10 00:48:21.414411 env[1734]: time="2025-05-10T00:48:21.414395507Z" level=warning msg="cleaning up after shim disconnected" id=5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c namespace=k8s.io May 10 00:48:21.414411 env[1734]: time="2025-05-10T00:48:21.414405050Z" level=info msg="cleaning up dead shim" May 10 00:48:21.422191 env[1734]: time="2025-05-10T00:48:21.422134010Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n" May 10 00:48:21.424352 env[1734]: time="2025-05-10T00:48:21.424316316Z" level=info msg="StopContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" returns successfully" May 10 00:48:21.424937 env[1734]: time="2025-05-10T00:48:21.424905507Z" level=info msg="StopPodSandbox for \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\"" May 10 00:48:21.425033 env[1734]: time="2025-05-10T00:48:21.424959614Z" level=info msg="Container to stop \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:21.425033 env[1734]: time="2025-05-10T00:48:21.424973267Z" level=info msg="Container to stop \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:21.425033 env[1734]: time="2025-05-10T00:48:21.424984422Z" level=info msg="Container to stop \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:21.425033 env[1734]: time="2025-05-10T00:48:21.424994891Z" level=info msg="Container to stop \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:21.425033 env[1734]: time="2025-05-10T00:48:21.425004898Z" level=info msg="Container to stop \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:21.426863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5-shm.mount: Deactivated successfully. May 10 00:48:21.433910 systemd[1]: cri-containerd-9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5.scope: Deactivated successfully. May 10 00:48:21.454934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5-rootfs.mount: Deactivated successfully. 
May 10 00:48:21.461955 env[1734]: time="2025-05-10T00:48:21.461903695Z" level=info msg="shim disconnected" id=9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5 May 10 00:48:21.461955 env[1734]: time="2025-05-10T00:48:21.461954108Z" level=warning msg="cleaning up after shim disconnected" id=9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5 namespace=k8s.io May 10 00:48:21.462253 env[1734]: time="2025-05-10T00:48:21.461965827Z" level=info msg="cleaning up dead shim" May 10 00:48:21.470770 env[1734]: time="2025-05-10T00:48:21.470715592Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n" May 10 00:48:21.471843 env[1734]: time="2025-05-10T00:48:21.471806384Z" level=info msg="TearDown network for sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" successfully" May 10 00:48:21.471843 env[1734]: time="2025-05-10T00:48:21.471837844Z" level=info msg="StopPodSandbox for \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" returns successfully" May 10 00:48:21.483352 kubelet[2088]: E0510 00:48:21.483319 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:21.506945 kubelet[2088]: I0510 00:48:21.506882 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-run\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.506945 kubelet[2088]: I0510 00:48:21.506923 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hostproc\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.506945 kubelet[2088]: I0510 00:48:21.506939 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-etc-cni-netd\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.506945 kubelet[2088]: I0510 00:48:21.506958 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-cgroup\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.506975 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-lib-modules\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.506996 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-config-path\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.507013 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-net\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.507031 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28s69\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-kube-api-access-28s69\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.507047 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hubble-tls\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507298 kubelet[2088]: I0510 00:48:21.507061 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-xtables-lock\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507508 kubelet[2088]: I0510 00:48:21.507077 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed3ae639-be96-484e-8d1f-b6b82b6253a1-clustermesh-secrets\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507508 kubelet[2088]: I0510 00:48:21.507091 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-bpf-maps\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507508 kubelet[2088]: I0510 00:48:21.507111 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cni-path\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507508 kubelet[2088]: I0510 00:48:21.507125 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-kernel\") pod \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\" (UID: \"ed3ae639-be96-484e-8d1f-b6b82b6253a1\") " May 10 00:48:21.507508 kubelet[2088]: I0510 00:48:21.507210 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.507636 kubelet[2088]: I0510 00:48:21.507241 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.507636 kubelet[2088]: I0510 00:48:21.507255 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.507636 kubelet[2088]: I0510 00:48:21.507269 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.507636 kubelet[2088]: I0510 00:48:21.507281 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.507636 kubelet[2088]: I0510 00:48:21.507294 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.508287 kubelet[2088]: I0510 00:48:21.507816 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.508287 kubelet[2088]: I0510 00:48:21.507867 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.509491 kubelet[2088]: I0510 00:48:21.509194 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 00:48:21.509741 kubelet[2088]: I0510 00:48:21.509717 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.509817 kubelet[2088]: I0510 00:48:21.509718 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:21.513772 systemd[1]: var-lib-kubelet-pods-ed3ae639\x2dbe96\x2d484e\x2d8d1f\x2db6b82b6253a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d28s69.mount: Deactivated successfully. May 10 00:48:21.515064 kubelet[2088]: I0510 00:48:21.515024 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:48:21.515585 kubelet[2088]: I0510 00:48:21.515532 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-kube-api-access-28s69" (OuterVolumeSpecName: "kube-api-access-28s69") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "kube-api-access-28s69". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:48:21.517617 kubelet[2088]: I0510 00:48:21.517578 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3ae639-be96-484e-8d1f-b6b82b6253a1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed3ae639-be96-484e-8d1f-b6b82b6253a1" (UID: "ed3ae639-be96-484e-8d1f-b6b82b6253a1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:48:21.552405 kubelet[2088]: E0510 00:48:21.552345 2088 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608184 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-xtables-lock\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608216 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-config-path\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608226 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-net\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608234 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28s69\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-kube-api-access-28s69\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608244 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hubble-tls\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608255 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-host-proc-sys-kernel\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608265 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed3ae639-be96-484e-8d1f-b6b82b6253a1-clustermesh-secrets\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608303 kubelet[2088]: I0510 00:48:21.608276 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-bpf-maps\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.608621 kubelet[2088]: I0510 00:48:21.608284 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cni-path\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.609232 kubelet[2088]: I0510 00:48:21.609199 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-run\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.609388 kubelet[2088]: I0510 00:48:21.609377 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-lib-modules\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.609470 kubelet[2088]: I0510 00:48:21.609461 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-hostproc\") on node 
\"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.609533 kubelet[2088]: I0510 00:48:21.609525 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-etc-cni-netd\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.609593 kubelet[2088]: I0510 00:48:21.609585 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed3ae639-be96-484e-8d1f-b6b82b6253a1-cilium-cgroup\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:21.847724 kubelet[2088]: I0510 00:48:21.847686 2088 scope.go:117] "RemoveContainer" containerID="5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c" May 10 00:48:21.849273 systemd[1]: Removed slice kubepods-burstable-poded3ae639_be96_484e_8d1f_b6b82b6253a1.slice. May 10 00:48:21.849411 systemd[1]: kubepods-burstable-poded3ae639_be96_484e_8d1f_b6b82b6253a1.slice: Consumed 6.997s CPU time. May 10 00:48:21.851587 env[1734]: time="2025-05-10T00:48:21.851251414Z" level=info msg="RemoveContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\"" May 10 00:48:21.858954 env[1734]: time="2025-05-10T00:48:21.858846835Z" level=info msg="RemoveContainer for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" returns successfully" May 10 00:48:21.863681 kubelet[2088]: I0510 00:48:21.863504 2088 scope.go:117] "RemoveContainer" containerID="ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9" May 10 00:48:21.879771 env[1734]: time="2025-05-10T00:48:21.879731349Z" level=info msg="RemoveContainer for \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\"" May 10 00:48:21.886775 env[1734]: time="2025-05-10T00:48:21.886715772Z" level=info msg="RemoveContainer for \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\" returns successfully" May 10 00:48:21.886971 kubelet[2088]: I0510 00:48:21.886948 2088 scope.go:117] "RemoveContainer" containerID="f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb" May 10 00:48:21.887982 env[1734]: time="2025-05-10T00:48:21.887950549Z" level=info msg="RemoveContainer for \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\"" May 10 00:48:21.891034 env[1734]: time="2025-05-10T00:48:21.890991745Z" level=info msg="RemoveContainer for \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\" returns successfully" May 10 00:48:21.891304 kubelet[2088]: I0510 00:48:21.891277 2088 scope.go:117] "RemoveContainer" containerID="5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581" May 10 00:48:21.892306 env[1734]: time="2025-05-10T00:48:21.892274901Z" level=info msg="RemoveContainer for \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\"" May 10 00:48:21.894933 env[1734]: time="2025-05-10T00:48:21.894896553Z" level=info msg="RemoveContainer for \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\" returns successfully" May 10 00:48:21.895809 kubelet[2088]: I0510 00:48:21.895786 2088 scope.go:117] "RemoveContainer" containerID="cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1" May 10 00:48:21.897246 env[1734]: time="2025-05-10T00:48:21.897201077Z" level=info msg="RemoveContainer for \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\"" May 10 00:48:21.903085 env[1734]: time="2025-05-10T00:48:21.903032066Z" level=info msg="RemoveContainer for 
\"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\" returns successfully" May 10 00:48:21.903321 kubelet[2088]: I0510 00:48:21.903290 2088 scope.go:117] "RemoveContainer" containerID="5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c" May 10 00:48:21.903662 env[1734]: time="2025-05-10T00:48:21.903592194Z" level=error msg="ContainerStatus for \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\": not found" May 10 00:48:21.905440 kubelet[2088]: E0510 00:48:21.905407 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\": not found" containerID="5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c" May 10 00:48:21.905539 kubelet[2088]: I0510 00:48:21.905453 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c"} err="failed to get container status \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bdc88c5c5f17fe342e7c81d0d1e6c435b960c042578f03790d803541d237e6c\": not found" May 10 00:48:21.905587 kubelet[2088]: I0510 00:48:21.905548 2088 scope.go:117] "RemoveContainer" containerID="ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9" May 10 00:48:21.905863 env[1734]: time="2025-05-10T00:48:21.905808919Z" level=error msg="ContainerStatus for \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\": not found" May 10 00:48:21.905992 kubelet[2088]: E0510 00:48:21.905968 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\": not found" containerID="ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9" May 10 00:48:21.906053 kubelet[2088]: I0510 00:48:21.905997 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9"} err="failed to get container status \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae0f6ab43c785321541b84d4c171e189e68b6414a62b752164870e5b3656e0d9\": not found" May 10 00:48:21.906053 kubelet[2088]: I0510 00:48:21.906021 2088 scope.go:117] "RemoveContainer" containerID="f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb" May 10 00:48:21.906284 env[1734]: time="2025-05-10T00:48:21.906237922Z" level=error msg="ContainerStatus for \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\": not found" May 10 00:48:21.906395 kubelet[2088]: E0510 00:48:21.906369 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\": not found" containerID="f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb" May 10 00:48:21.906451 kubelet[2088]: I0510 00:48:21.906397 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb"} err="failed to get container status \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\": rpc error: code = NotFound desc = an error occurred when try to find container \"f27c647a9faa8604587ffb2ef76b503e026dfb03ead90cc63c9a4051005a4ffb\": not found" May 10 00:48:21.906451 kubelet[2088]: I0510 00:48:21.906419 2088 scope.go:117] "RemoveContainer" containerID="5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581" May 10 00:48:21.906723 env[1734]: time="2025-05-10T00:48:21.906665987Z" level=error msg="ContainerStatus for \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\": not found" May 10 00:48:21.906828 kubelet[2088]: E0510 00:48:21.906805 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\": not found" containerID="5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581" May 10 00:48:21.906889 kubelet[2088]: I0510 00:48:21.906830 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581"} err="failed to get container status \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f918331c96c54e0ec05aaefd0b558e293f3682ad81bfa9ccc4fdb49add7a581\": not found" May 10 00:48:21.906889 kubelet[2088]: I0510 00:48:21.906849 2088 scope.go:117] "RemoveContainer" containerID="cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1" May 10 00:48:21.907086 env[1734]: time="2025-05-10T00:48:21.907038023Z" level=error msg="ContainerStatus for \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\": not found" May 10 00:48:21.907201 kubelet[2088]: E0510 00:48:21.907180 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\": not found" containerID="cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1" May 10 00:48:21.907259 kubelet[2088]: I0510 00:48:21.907204 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1"} err="failed to get container status \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfc73399161b521023602a15511d88340dbf746f100ccb88b738901b5f4ba0a1\": not found" May 10 00:48:22.071691 systemd[1]: 
var-lib-kubelet-pods-ed3ae639\x2dbe96\x2d484e\x2d8d1f\x2db6b82b6253a1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:48:22.071801 systemd[1]: var-lib-kubelet-pods-ed3ae639\x2dbe96\x2d484e\x2d8d1f\x2db6b82b6253a1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:48:22.484413 kubelet[2088]: E0510 00:48:22.484358 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:22.677057 kubelet[2088]: I0510 00:48:22.677013 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3ae639-be96-484e-8d1f-b6b82b6253a1" path="/var/lib/kubelet/pods/ed3ae639-be96-484e-8d1f-b6b82b6253a1/volumes" May 10 00:48:23.485307 kubelet[2088]: E0510 00:48:23.485256 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:24.011379 kubelet[2088]: I0510 00:48:24.011337 2088 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed3ae639-be96-484e-8d1f-b6b82b6253a1" containerName="cilium-agent" May 10 00:48:24.016210 systemd[1]: Created slice kubepods-besteffort-poddfe7b232_6e0f_4b45_9a61_3b2f7c2d3df4.slice. May 10 00:48:24.024895 kubelet[2088]: I0510 00:48:24.024841 2088 status_manager.go:890] "Failed to get status for pod" podUID="dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4" pod="kube-system/cilium-operator-6c4d7847fc-j2tmg" err="pods \"cilium-operator-6c4d7847fc-j2tmg\" is forbidden: User \"system:node:172.31.20.182\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.20.182' and this object" May 10 00:48:24.025047 kubelet[2088]: W0510 00:48:24.024930 2088 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.20.182" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.182' and this object May 10 00:48:24.025047 kubelet[2088]: E0510 00:48:24.024954 2088 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.31.20.182\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.20.182' and this object" logger="UnhandledError" May 10 00:48:24.050236 systemd[1]: Created slice kubepods-burstable-pod6fbf5ad9_de2a_4d2f_8af7_6a0604f3bd2a.slice. 
May 10 00:48:24.126031 kubelet[2088]: I0510 00:48:24.125978 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-etc-cni-netd\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126031 kubelet[2088]: I0510 00:48:24.126024 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-net\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126253 kubelet[2088]: I0510 00:48:24.126047 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j2tmg\" (UID: \"dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4\") " pod="kube-system/cilium-operator-6c4d7847fc-j2tmg" May 10 00:48:24.126253 kubelet[2088]: I0510 00:48:24.126067 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2r4k\" (UniqueName: \"kubernetes.io/projected/dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4-kube-api-access-s2r4k\") pod \"cilium-operator-6c4d7847fc-j2tmg\" (UID: \"dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4\") " pod="kube-system/cilium-operator-6c4d7847fc-j2tmg" May 10 00:48:24.126253 kubelet[2088]: I0510 00:48:24.126084 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-ipsec-secrets\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126253 kubelet[2088]: I0510 00:48:24.126099 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hubble-tls\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126253 kubelet[2088]: I0510 00:48:24.126114 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-run\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126131 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hostproc\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126161 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-cgroup\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126176 2088 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-lib-modules\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126190 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-clustermesh-secrets\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126205 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126388 kubelet[2088]: I0510 00:48:24.126220 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-kernel\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126669 kubelet[2088]: I0510 00:48:24.126234 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfz9h\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-kube-api-access-hfz9h\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126669 kubelet[2088]: I0510 00:48:24.126249 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-bpf-maps\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126669 kubelet[2088]: I0510 00:48:24.126265 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cni-path\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.126669 kubelet[2088]: I0510 00:48:24.126279 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-xtables-lock\") pod \"cilium-xcs2q\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " pod="kube-system/cilium-xcs2q" May 10 00:48:24.485747 kubelet[2088]: E0510 00:48:24.485697 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:24.516743 amazon-ssm-agent[1758]: 2025-05-10 00:48:24 INFO [HealthCheck] HealthCheck reporting agent health. 
May 10 00:48:24.585945 kubelet[2088]: E0510 00:48:24.585897 2088 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xcs2q" podUID="6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" May 10 00:48:24.931920 kubelet[2088]: I0510 00:48:24.931865 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-etc-cni-netd\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.931920 kubelet[2088]: I0510 00:48:24.931922 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-ipsec-secrets\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.931951 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-net\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.931971 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-run\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.931993 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hostproc\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.932014 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-cgroup\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.932041 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfz9h\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-kube-api-access-hfz9h\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932195 kubelet[2088]: I0510 00:48:24.932060 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cni-path\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932084 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hubble-tls\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932109 2088 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-clustermesh-secrets\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932131 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-bpf-maps\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932183 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-xtables-lock\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932207 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-lib-modules\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932435 kubelet[2088]: I0510 00:48:24.932230 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-kernel\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:24.932687 kubelet[2088]: I0510 00:48:24.932319 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.932687 kubelet[2088]: I0510 00:48:24.932355 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.933219 kubelet[2088]: I0510 00:48:24.932833 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cni-path" (OuterVolumeSpecName: "cni-path") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.933219 kubelet[2088]: I0510 00:48:24.932872 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.933219 kubelet[2088]: I0510 00:48:24.932897 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.933219 kubelet[2088]: I0510 00:48:24.932917 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hostproc" (OuterVolumeSpecName: "hostproc") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.933219 kubelet[2088]: I0510 00:48:24.932939 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.936514 kubelet[2088]: I0510 00:48:24.936475 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:48:24.936803 kubelet[2088]: I0510 00:48:24.936768 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-kube-api-access-hfz9h" (OuterVolumeSpecName: "kube-api-access-hfz9h") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "kube-api-access-hfz9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:48:24.936946 kubelet[2088]: I0510 00:48:24.936929 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.940232 kubelet[2088]: I0510 00:48:24.940122 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:48:24.940347 kubelet[2088]: I0510 00:48:24.940269 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.940347 kubelet[2088]: I0510 00:48:24.940297 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:48:24.940461 kubelet[2088]: I0510 00:48:24.940377 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:48:25.032969 kubelet[2088]: I0510 00:48:25.032925 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-net\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.032969 kubelet[2088]: I0510 00:48:25.032961 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-run\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.032969 kubelet[2088]: I0510 00:48:25.032974 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hostproc\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.032991 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-cgroup\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033003 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfz9h\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-kube-api-access-hfz9h\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033013 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cni-path\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033024 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-hubble-tls\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033034 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-clustermesh-secrets\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033046 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-bpf-maps\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033056 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-xtables-lock\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033260 kubelet[2088]: I0510 00:48:25.033068 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-lib-modules\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033468 kubelet[2088]: I0510 00:48:25.033079 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-host-proc-sys-kernel\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033468 kubelet[2088]: I0510 00:48:25.033089 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-etc-cni-netd\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.033468 kubelet[2088]: I0510 00:48:25.033101 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-ipsec-secrets\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:25.228594 kubelet[2088]: E0510 00:48:25.228475 2088 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 00:48:25.228594 kubelet[2088]: E0510 00:48:25.228567 2088 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4-cilium-config-path podName:dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4 nodeName:}" failed. No retries permitted until 2025-05-10 00:48:25.728547433 +0000 UTC m=+69.792036196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4-cilium-config-path") pod "cilium-operator-6c4d7847fc-j2tmg" (UID: "dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4") : failed to sync configmap cache: timed out waiting for the condition May 10 00:48:25.230746 kubelet[2088]: E0510 00:48:25.230682 2088 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 00:48:25.230932 kubelet[2088]: E0510 00:48:25.230921 2088 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path podName:6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a nodeName:}" failed. No retries permitted until 2025-05-10 00:48:25.730903136 +0000 UTC m=+69.794391903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path") pod "cilium-xcs2q" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a") : failed to sync configmap cache: timed out waiting for the condition May 10 00:48:25.232464 systemd[1]: var-lib-kubelet-pods-6fbf5ad9\x2dde2a\x2d4d2f\x2d8af7\x2d6a0604f3bd2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhfz9h.mount: Deactivated successfully. May 10 00:48:25.232564 systemd[1]: var-lib-kubelet-pods-6fbf5ad9\x2dde2a\x2d4d2f\x2d8af7\x2d6a0604f3bd2a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:48:25.232619 systemd[1]: var-lib-kubelet-pods-6fbf5ad9\x2dde2a\x2d4d2f\x2d8af7\x2d6a0604f3bd2a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 10 00:48:25.232674 systemd[1]: var-lib-kubelet-pods-6fbf5ad9\x2dde2a\x2d4d2f\x2d8af7\x2d6a0604f3bd2a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:48:25.485986 kubelet[2088]: E0510 00:48:25.485876 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:25.819823 env[1734]: time="2025-05-10T00:48:25.819708694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j2tmg,Uid:dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4,Namespace:kube-system,Attempt:0,}" May 10 00:48:25.837251 env[1734]: time="2025-05-10T00:48:25.837174202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:48:25.837251 env[1734]: time="2025-05-10T00:48:25.837210338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:48:25.837251 env[1734]: time="2025-05-10T00:48:25.837226239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:48:25.837621 env[1734]: time="2025-05-10T00:48:25.837550916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9b387df8c8b4132c525d810a677550293bae8e9ac354e8d5669b8eb3ff1e81f pid=3880 runtime=io.containerd.runc.v2 May 10 00:48:25.840948 kubelet[2088]: I0510 00:48:25.837901 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path\") pod \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\" (UID: \"6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a\") " May 10 00:48:25.842585 kubelet[2088]: I0510 00:48:25.842539 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" (UID: "6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 00:48:25.854081 systemd[1]: Removed slice kubepods-burstable-pod6fbf5ad9_de2a_4d2f_8af7_6a0604f3bd2a.slice. May 10 00:48:25.864535 systemd[1]: Started cri-containerd-a9b387df8c8b4132c525d810a677550293bae8e9ac354e8d5669b8eb3ff1e81f.scope. May 10 00:48:25.910805 systemd[1]: Created slice kubepods-burstable-pod13c5eb7e_b390_4e4c_99a8_48061374bf5a.slice. 
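The "Removed slice kubepods-burstable-pod6fbf5ad9_de2a_4d2f_8af7_6a0604f3bd2a.slice" and "Created slice kubepods-burstable-pod13c5eb7e_b390_4e4c_99a8_48061374bf5a.slice" entries are the systemd cgroup driver tearing down the old cilium pod's cgroup and creating one for its replacement. The slice name appears to be built from the pod QoS class plus the pod UID with dashes rewritten to underscores (a literal dash would otherwise be hex-escaped as above); a minimal sketch of that convention, inferred from the log rather than taken from kubelet source:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the slice name the kubelet's systemd cgroup driver
    // appears to use here: kubepods-<qos>-pod<uid with "-" -> "_">.slice.
    func podSliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// UID of the replacement cilium pod created in this log.
    	fmt.Println(podSliceName("burstable", "13c5eb7e-b390-4e4c-99a8-48061374bf5a"))
    	// -> kubepods-burstable-pod13c5eb7e_b390_4e4c_99a8_48061374bf5a.slice
    }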
May 10 00:48:25.917678 env[1734]: time="2025-05-10T00:48:25.917626433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j2tmg,Uid:dfe7b232-6e0f-4b45-9a61-3b2f7c2d3df4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b387df8c8b4132c525d810a677550293bae8e9ac354e8d5669b8eb3ff1e81f\"" May 10 00:48:25.920785 env[1734]: time="2025-05-10T00:48:25.920755013Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:48:25.938568 kubelet[2088]: I0510 00:48:25.938457 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-cilium-cgroup\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938568 kubelet[2088]: I0510 00:48:25.938564 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-host-proc-sys-net\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938583 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13c5eb7e-b390-4e4c-99a8-48061374bf5a-hubble-tls\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938602 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmch\" (UniqueName: \"kubernetes.io/projected/13c5eb7e-b390-4e4c-99a8-48061374bf5a-kube-api-access-bdmch\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938636 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-hostproc\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938651 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-etc-cni-netd\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938673 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-xtables-lock\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938754 kubelet[2088]: I0510 00:48:25.938695 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13c5eb7e-b390-4e4c-99a8-48061374bf5a-cilium-config-path\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 
00:48:25.938918 kubelet[2088]: I0510 00:48:25.938723 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/13c5eb7e-b390-4e4c-99a8-48061374bf5a-cilium-ipsec-secrets\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938918 kubelet[2088]: I0510 00:48:25.938743 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-cni-path\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938918 kubelet[2088]: I0510 00:48:25.938758 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13c5eb7e-b390-4e4c-99a8-48061374bf5a-clustermesh-secrets\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938918 kubelet[2088]: I0510 00:48:25.938774 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-cilium-run\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938918 kubelet[2088]: I0510 00:48:25.938790 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-bpf-maps\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.938918 kubelet[2088]: I0510 00:48:25.938803 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-lib-modules\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.939074 kubelet[2088]: I0510 00:48:25.938818 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13c5eb7e-b390-4e4c-99a8-48061374bf5a-host-proc-sys-kernel\") pod \"cilium-bxxpd\" (UID: \"13c5eb7e-b390-4e4c-99a8-48061374bf5a\") " pod="kube-system/cilium-bxxpd" May 10 00:48:25.939074 kubelet[2088]: I0510 00:48:25.938834 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a-cilium-config-path\") on node \"172.31.20.182\" DevicePath \"\"" May 10 00:48:26.218629 env[1734]: time="2025-05-10T00:48:26.218516033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxxpd,Uid:13c5eb7e-b390-4e4c-99a8-48061374bf5a,Namespace:kube-system,Attempt:0,}" May 10 00:48:26.230929 env[1734]: time="2025-05-10T00:48:26.230873408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:48:26.231098 env[1734]: time="2025-05-10T00:48:26.231077562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:48:26.231195 env[1734]: time="2025-05-10T00:48:26.231177129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:48:26.231379 env[1734]: time="2025-05-10T00:48:26.231358285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322 pid=3923 runtime=io.containerd.runc.v2 May 10 00:48:26.255530 systemd[1]: Started cri-containerd-3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322.scope. May 10 00:48:26.259789 systemd[1]: run-containerd-runc-k8s.io-3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322-runc.lBR2aL.mount: Deactivated successfully. May 10 00:48:26.287320 env[1734]: time="2025-05-10T00:48:26.286940384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxxpd,Uid:13c5eb7e-b390-4e4c-99a8-48061374bf5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\"" May 10 00:48:26.289315 env[1734]: time="2025-05-10T00:48:26.289280340Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:48:26.303629 env[1734]: time="2025-05-10T00:48:26.303588228Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5\"" May 10 00:48:26.304392 env[1734]: time="2025-05-10T00:48:26.304363759Z" level=info msg="StartContainer for \"8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5\"" May 10 00:48:26.324574 systemd[1]: Started cri-containerd-8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5.scope. May 10 00:48:26.355009 env[1734]: time="2025-05-10T00:48:26.354964137Z" level=info msg="StartContainer for \"8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5\" returns successfully" May 10 00:48:26.486735 kubelet[2088]: E0510 00:48:26.486609 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:26.553210 kubelet[2088]: E0510 00:48:26.553176 2088 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:48:26.684343 kubelet[2088]: I0510 00:48:26.684304 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a" path="/var/lib/kubelet/pods/6fbf5ad9-de2a-4d2f-8af7-6a0604f3bd2a/volumes" May 10 00:48:26.724478 systemd[1]: cri-containerd-8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5.scope: Deactivated successfully. 
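The RunPodSandbox / CreateContainer / StartContainer messages above are the kubelet driving containerd through the CRI: each "starting signal loop ... runtime=io.containerd.runc.v2" line is a per-pod runc shim coming up, and the "Deactivated successfully" plus "shim disconnected" pairs that follow are the normal teardown once a short-lived init container such as mount-cgroup exits. For orientation, the same create/start/wait cycle against containerd's native Go client looks roughly like the sketch below, modeled on containerd's documented client example; the image reference and container IDs are placeholders, not values from this log:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The kubelet-managed containers in this log live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	image, err := client.Pull(ctx, "quay.io/cilium/cilium:v1.12.5", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}

    	container, err := client.NewContainer(ctx, "example-mount-cgroup",
    		containerd.WithNewSnapshot("example-mount-cgroup-snap", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	// NewTask starts the runc shim ("starting signal loop" in the log).
    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer task.Delete(ctx)

    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := task.Start(ctx); err != nil {
    		log.Fatal(err)
    	}

    	status := <-exitCh // the shim is torn down after the container exits
    	code, _, _ := status.Result()
    	log.Printf("container exited with status %d", code)
    }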
May 10 00:48:26.768577 env[1734]: time="2025-05-10T00:48:26.768466481Z" level=info msg="shim disconnected" id=8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5 May 10 00:48:26.768577 env[1734]: time="2025-05-10T00:48:26.768511946Z" level=warning msg="cleaning up after shim disconnected" id=8469c636ffe788924f34e2de0b798c8cb60d5c2f3381eaa7df8bbe937a959aa5 namespace=k8s.io May 10 00:48:26.768577 env[1734]: time="2025-05-10T00:48:26.768524658Z" level=info msg="cleaning up dead shim" May 10 00:48:26.776462 env[1734]: time="2025-05-10T00:48:26.776413668Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4008 runtime=io.containerd.runc.v2\n" May 10 00:48:26.855758 env[1734]: time="2025-05-10T00:48:26.855714200Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:48:26.870797 env[1734]: time="2025-05-10T00:48:26.870725725Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e\"" May 10 00:48:26.871479 env[1734]: time="2025-05-10T00:48:26.871426804Z" level=info msg="StartContainer for \"1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e\"" May 10 00:48:26.887430 systemd[1]: Started cri-containerd-1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e.scope. May 10 00:48:26.919440 env[1734]: time="2025-05-10T00:48:26.919400292Z" level=info msg="StartContainer for \"1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e\" returns successfully" May 10 00:48:27.131443 systemd[1]: cri-containerd-1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e.scope: Deactivated successfully. May 10 00:48:27.181449 env[1734]: time="2025-05-10T00:48:27.181404414Z" level=info msg="shim disconnected" id=1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e May 10 00:48:27.181644 env[1734]: time="2025-05-10T00:48:27.181464005Z" level=warning msg="cleaning up after shim disconnected" id=1c8b958f8ad6c2c32efc70914d0972eb569b4a33a5a0d01edf7f6fec960da58e namespace=k8s.io May 10 00:48:27.181644 env[1734]: time="2025-05-10T00:48:27.181482345Z" level=info msg="cleaning up dead shim" May 10 00:48:27.189428 env[1734]: time="2025-05-10T00:48:27.189385792Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4070 runtime=io.containerd.runc.v2\n" May 10 00:48:27.233169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638028339.mount: Deactivated successfully. May 10 00:48:27.353655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243015561.mount: Deactivated successfully. May 10 00:48:27.487582 kubelet[2088]: E0510 00:48:27.487531 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:27.861754 env[1734]: time="2025-05-10T00:48:27.861643409Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:48:27.883318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075577698.mount: Deactivated successfully. 
May 10 00:48:27.891736 env[1734]: time="2025-05-10T00:48:27.891688630Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0\"" May 10 00:48:27.892361 env[1734]: time="2025-05-10T00:48:27.892335666Z" level=info msg="StartContainer for \"c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0\"" May 10 00:48:27.939115 systemd[1]: Started cri-containerd-c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0.scope. May 10 00:48:28.010077 env[1734]: time="2025-05-10T00:48:28.010020574Z" level=info msg="StartContainer for \"c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0\" returns successfully" May 10 00:48:28.164981 systemd[1]: cri-containerd-c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0.scope: Deactivated successfully. May 10 00:48:28.202165 kubelet[2088]: I0510 00:48:28.200262 2088 setters.go:602] "Node became not ready" node="172.31.20.182" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:48:28Z","lastTransitionTime":"2025-05-10T00:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:48:28.298137 env[1734]: time="2025-05-10T00:48:28.298083973Z" level=info msg="shim disconnected" id=c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0 May 10 00:48:28.298137 env[1734]: time="2025-05-10T00:48:28.298124810Z" level=warning msg="cleaning up after shim disconnected" id=c4b9a33151312d48c7178883740143d81650a4c16c4e11ad11b096400fd740c0 namespace=k8s.io May 10 00:48:28.298137 env[1734]: time="2025-05-10T00:48:28.298133507Z" level=info msg="cleaning up dead shim" May 10 00:48:28.311709 env[1734]: time="2025-05-10T00:48:28.311659062Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4127 runtime=io.containerd.runc.v2\n" May 10 00:48:28.312565 env[1734]: time="2025-05-10T00:48:28.312525297Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:28.316047 env[1734]: time="2025-05-10T00:48:28.316006761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:28.318160 env[1734]: time="2025-05-10T00:48:28.318108889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:28.318607 env[1734]: time="2025-05-10T00:48:28.318580470Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:48:28.321428 env[1734]: time="2025-05-10T00:48:28.321390842Z" level=info msg="CreateContainer within sandbox 
\"a9b387df8c8b4132c525d810a677550293bae8e9ac354e8d5669b8eb3ff1e81f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:48:28.336951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821021590.mount: Deactivated successfully. May 10 00:48:28.355423 env[1734]: time="2025-05-10T00:48:28.355325938Z" level=info msg="CreateContainer within sandbox \"a9b387df8c8b4132c525d810a677550293bae8e9ac354e8d5669b8eb3ff1e81f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"02e1c6c097fb7184ed1dd68ce02d8ca087357acff6438255067af70272cd7ebd\"" May 10 00:48:28.356122 env[1734]: time="2025-05-10T00:48:28.356087636Z" level=info msg="StartContainer for \"02e1c6c097fb7184ed1dd68ce02d8ca087357acff6438255067af70272cd7ebd\"" May 10 00:48:28.375751 systemd[1]: Started cri-containerd-02e1c6c097fb7184ed1dd68ce02d8ca087357acff6438255067af70272cd7ebd.scope. May 10 00:48:28.414206 env[1734]: time="2025-05-10T00:48:28.413902788Z" level=info msg="StartContainer for \"02e1c6c097fb7184ed1dd68ce02d8ca087357acff6438255067af70272cd7ebd\" returns successfully" May 10 00:48:28.488390 kubelet[2088]: E0510 00:48:28.488291 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:28.865734 env[1734]: time="2025-05-10T00:48:28.865644513Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:48:28.887670 env[1734]: time="2025-05-10T00:48:28.887620535Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920\"" May 10 00:48:28.888400 env[1734]: time="2025-05-10T00:48:28.888353861Z" level=info msg="StartContainer for \"4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920\"" May 10 00:48:28.896065 kubelet[2088]: I0510 00:48:28.895793 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j2tmg" podStartSLOduration=3.496182272 podStartE2EDuration="5.895769761s" podCreationTimestamp="2025-05-10 00:48:23 +0000 UTC" firstStartedPulling="2025-05-10 00:48:25.920240162 +0000 UTC m=+69.983728924" lastFinishedPulling="2025-05-10 00:48:28.319827627 +0000 UTC m=+72.383316413" observedRunningTime="2025-05-10 00:48:28.895282456 +0000 UTC m=+72.958771243" watchObservedRunningTime="2025-05-10 00:48:28.895769761 +0000 UTC m=+72.959258550" May 10 00:48:28.906995 systemd[1]: Started cri-containerd-4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920.scope. May 10 00:48:28.944521 env[1734]: time="2025-05-10T00:48:28.944472333Z" level=info msg="StartContainer for \"4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920\" returns successfully" May 10 00:48:28.945958 systemd[1]: cri-containerd-4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920.scope: Deactivated successfully. 
May 10 00:48:28.984085 env[1734]: time="2025-05-10T00:48:28.984035274Z" level=info msg="shim disconnected" id=4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920 May 10 00:48:28.984085 env[1734]: time="2025-05-10T00:48:28.984080468Z" level=warning msg="cleaning up after shim disconnected" id=4e7336713a4a15c7b0a7eb2195182a8a4a522fef7fc9b3f118a425b6dad78920 namespace=k8s.io May 10 00:48:28.984085 env[1734]: time="2025-05-10T00:48:28.984090584Z" level=info msg="cleaning up dead shim" May 10 00:48:28.993040 env[1734]: time="2025-05-10T00:48:28.992990692Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4221 runtime=io.containerd.runc.v2\n" May 10 00:48:29.488960 kubelet[2088]: E0510 00:48:29.488915 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:29.875454 env[1734]: time="2025-05-10T00:48:29.875337968Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:48:29.899195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213814007.mount: Deactivated successfully. May 10 00:48:29.912004 env[1734]: time="2025-05-10T00:48:29.911921669Z" level=info msg="CreateContainer within sandbox \"3f9e01133196a60e31f4f108c5cbd8f09826c820c874561f6b60df8dac932322\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae\"" May 10 00:48:29.912706 env[1734]: time="2025-05-10T00:48:29.912624622Z" level=info msg="StartContainer for \"c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae\"" May 10 00:48:29.934694 systemd[1]: Started cri-containerd-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae.scope. May 10 00:48:29.978855 env[1734]: time="2025-05-10T00:48:29.978802821Z" level=info msg="StartContainer for \"c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae\" returns successfully" May 10 00:48:30.490093 kubelet[2088]: E0510 00:48:30.490049 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:31.491160 kubelet[2088]: E0510 00:48:31.491095 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:32.492007 kubelet[2088]: E0510 00:48:32.491953 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:32.829181 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 10 00:48:33.174293 systemd[1]: run-containerd-runc-k8s.io-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae-runc.rmWKnF.mount: Deactivated successfully. 
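The "Unable to read config path ... /etc/kubernetes/manifests" errors that recur about once a second for the rest of this log come from the kubelet's static-pod file source: the directory configured as staticPodPath does not exist on this node. With no static pods expected the message is harmless, and creating the (empty) directory is usually enough to silence it; a minimal sketch, assuming the path in the log is indeed this kubelet's staticPodPath:

    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	// Assumed: /etc/kubernetes/manifests from the log is the kubelet's
    	// staticPodPath; an empty directory stops the repeated warning.
    	if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
    		log.Fatal(err)
    	}
    }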
May 10 00:48:33.492964 kubelet[2088]: E0510 00:48:33.492803 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:34.493514 kubelet[2088]: E0510 00:48:34.493460 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:35.494608 kubelet[2088]: E0510 00:48:35.494554 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:35.671646 systemd[1]: run-containerd-runc-k8s.io-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae-runc.29mEeR.mount: Deactivated successfully. May 10 00:48:36.200257 systemd-networkd[1463]: lxc_health: Link UP May 10 00:48:36.207259 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:48:36.207534 systemd-networkd[1463]: lxc_health: Gained carrier May 10 00:48:36.209577 (udev-worker)[4814]: Network interface NamePolicy= disabled on kernel command line. May 10 00:48:36.270581 kubelet[2088]: I0510 00:48:36.270519 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxxpd" podStartSLOduration=11.270499899 podStartE2EDuration="11.270499899s" podCreationTimestamp="2025-05-10 00:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:48:31.899051201 +0000 UTC m=+75.962539986" watchObservedRunningTime="2025-05-10 00:48:36.270499899 +0000 UTC m=+80.333988686" May 10 00:48:36.439288 kubelet[2088]: E0510 00:48:36.439238 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:36.495444 kubelet[2088]: E0510 00:48:36.495324 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:37.496896 kubelet[2088]: E0510 00:48:37.496797 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:37.544750 systemd-networkd[1463]: lxc_health: Gained IPv6LL May 10 00:48:37.934918 systemd[1]: run-containerd-runc-k8s.io-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae-runc.4NxUdx.mount: Deactivated successfully. May 10 00:48:38.498988 kubelet[2088]: E0510 00:48:38.498955 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:39.499688 kubelet[2088]: E0510 00:48:39.499647 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:40.168554 systemd[1]: run-containerd-runc-k8s.io-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae-runc.eftW0z.mount: Deactivated successfully. May 10 00:48:40.500580 kubelet[2088]: E0510 00:48:40.500468 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:41.500954 kubelet[2088]: E0510 00:48:41.500909 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:42.374297 systemd[1]: run-containerd-runc-k8s.io-c20e4d060634aaf3c92126f31f35e9cff59dfb49afb21a5272ee7f09594da9ae-runc.2LQN1n.mount: Deactivated successfully. 
May 10 00:48:42.501460 kubelet[2088]: E0510 00:48:42.501412 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:42.542621 systemd[1]: Started sshd@5-172.31.20.182:22-47.253.133.176:56254.service. May 10 00:48:42.899882 sshd[4919]: Invalid user from 47.253.133.176 port 56254 May 10 00:48:43.502748 kubelet[2088]: E0510 00:48:43.502689 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:44.503704 kubelet[2088]: E0510 00:48:44.503660 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:45.504735 kubelet[2088]: E0510 00:48:45.504688 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:46.505035 kubelet[2088]: E0510 00:48:46.504994 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:47.505626 kubelet[2088]: E0510 00:48:47.505573 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:48.506283 kubelet[2088]: E0510 00:48:48.506227 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:49.506895 kubelet[2088]: E0510 00:48:49.506820 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:50.507467 kubelet[2088]: E0510 00:48:50.507410 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:50.539802 sshd[4919]: Connection closed by invalid user 47.253.133.176 port 56254 [preauth] May 10 00:48:50.541335 systemd[1]: sshd@5-172.31.20.182:22-47.253.133.176:56254.service: Deactivated successfully. 
May 10 00:48:51.508600 kubelet[2088]: E0510 00:48:51.508540 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:52.508792 kubelet[2088]: E0510 00:48:52.508738 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:53.509811 kubelet[2088]: E0510 00:48:53.509772 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:54.510533 kubelet[2088]: E0510 00:48:54.510486 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:55.511129 kubelet[2088]: E0510 00:48:55.511086 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:56.438728 kubelet[2088]: E0510 00:48:56.438685 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:56.511742 kubelet[2088]: E0510 00:48:56.511690 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:57.512554 kubelet[2088]: E0510 00:48:57.512510 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:58.352655 kubelet[2088]: E0510 00:48:58.352600 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" May 10 00:48:58.513231 kubelet[2088]: E0510 00:48:58.513188 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:48:58.735975 kubelet[2088]: E0510 00:48:58.735888 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:48:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:48:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:48:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:48:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73306098},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\\\",\\\"registry.k8s.io/kube-proxy:v1.32.4\\\"],\\\"sizeBytes\\\":30916875},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.20.182\": Patch \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 10 00:48:59.514092 kubelet[2088]: E0510 00:48:59.513958 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:00.514961 kubelet[2088]: E0510 00:49:00.514912 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:01.515984 kubelet[2088]: E0510 00:49:01.515913 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:02.516491 kubelet[2088]: E0510 00:49:02.516440 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:03.517266 kubelet[2088]: E0510 00:49:03.517215 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:04.518429 kubelet[2088]: E0510 00:49:04.518383 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:05.519240 kubelet[2088]: E0510 00:49:05.519198 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:06.519565 kubelet[2088]: E0510 00:49:06.519521 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:07.520097 kubelet[2088]: E0510 00:49:07.520040 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 
00:49:08.353941 kubelet[2088]: E0510 00:49:08.353873 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 10 00:49:08.520588 kubelet[2088]: E0510 00:49:08.520515 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:08.736494 kubelet[2088]: E0510 00:49:08.736437 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.20.182\": Get \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 10 00:49:09.521398 kubelet[2088]: E0510 00:49:09.521356 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:10.522536 kubelet[2088]: E0510 00:49:10.522437 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:11.523369 kubelet[2088]: E0510 00:49:11.523299 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:12.523483 kubelet[2088]: E0510 00:49:12.523428 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:13.524293 kubelet[2088]: E0510 00:49:13.524246 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:14.524713 kubelet[2088]: E0510 00:49:14.524640 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:15.525394 kubelet[2088]: E0510 00:49:15.525351 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:16.438669 kubelet[2088]: E0510 00:49:16.438616 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:16.479162 env[1734]: time="2025-05-10T00:49:16.479118424Z" level=info msg="StopPodSandbox for \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\"" May 10 00:49:16.479528 env[1734]: time="2025-05-10T00:49:16.479331752Z" level=info msg="TearDown network for sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" successfully" May 10 00:49:16.479528 env[1734]: time="2025-05-10T00:49:16.479373639Z" level=info msg="StopPodSandbox for \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" returns successfully" May 10 00:49:16.479905 env[1734]: time="2025-05-10T00:49:16.479863613Z" level=info msg="RemovePodSandbox for \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\"" May 10 00:49:16.479947 env[1734]: time="2025-05-10T00:49:16.479907390Z" level=info msg="Forcibly stopping sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\"" May 10 00:49:16.479995 env[1734]: time="2025-05-10T00:49:16.479974645Z" level=info msg="TearDown network for sandbox \"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" successfully" May 10 00:49:16.488641 env[1734]: time="2025-05-10T00:49:16.488589636Z" level=info msg="RemovePodSandbox 
\"9e034b62cf8b28bb25ce02bbae749cebe5222fcb315967f5cda25c7516a661c5\" returns successfully" May 10 00:49:16.526112 kubelet[2088]: E0510 00:49:16.526072 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:17.526658 kubelet[2088]: E0510 00:49:17.526613 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:18.354830 kubelet[2088]: E0510 00:49:18.354786 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" May 10 00:49:18.526820 kubelet[2088]: E0510 00:49:18.526755 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:18.737477 kubelet[2088]: E0510 00:49:18.737441 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.20.182\": Get \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182?timeout=10s\": context deadline exceeded" May 10 00:49:19.527772 kubelet[2088]: E0510 00:49:19.527731 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:20.528280 kubelet[2088]: E0510 00:49:20.528240 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:21.528934 kubelet[2088]: E0510 00:49:21.528882 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:22.529840 kubelet[2088]: E0510 00:49:22.529789 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:23.530854 kubelet[2088]: E0510 00:49:23.530798 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:24.197238 kubelet[2088]: E0510 00:49:24.196665 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": unexpected EOF" May 10 00:49:24.207642 kubelet[2088]: E0510 00:49:24.207605 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": read tcp 172.31.20.182:52600->172.31.17.3:6443: read: connection reset by peer" May 10 00:49:24.207642 kubelet[2088]: I0510 00:49:24.207645 2088 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" May 10 00:49:24.208241 kubelet[2088]: E0510 00:49:24.208185 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": dial tcp 172.31.17.3:6443: connect: connection refused" interval="200ms" May 10 00:49:24.409263 kubelet[2088]: E0510 00:49:24.409227 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": dial tcp 172.31.17.3:6443: connect: connection refused" interval="400ms" May 10 
00:49:24.531629 kubelet[2088]: E0510 00:49:24.531503 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:24.810409 kubelet[2088]: E0510 00:49:24.810293 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": dial tcp 172.31.17.3:6443: connect: connection refused" interval="800ms" May 10 00:49:25.198956 kubelet[2088]: E0510 00:49:25.198916 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.20.182\": Get \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182?timeout=10s\": dial tcp 172.31.17.3:6443: connect: connection refused - error from a previous attempt: unexpected EOF" May 10 00:49:25.199407 kubelet[2088]: E0510 00:49:25.199378 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.20.182\": Get \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182?timeout=10s\": dial tcp 172.31.17.3:6443: connect: connection refused" May 10 00:49:25.200655 kubelet[2088]: E0510 00:49:25.200623 2088 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count" May 10 00:49:25.532250 kubelet[2088]: E0510 00:49:25.532116 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:26.532666 kubelet[2088]: E0510 00:49:26.532619 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:27.533478 kubelet[2088]: E0510 00:49:27.533437 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:28.535008 kubelet[2088]: E0510 00:49:28.534693 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:29.536131 kubelet[2088]: E0510 00:49:29.536068 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:30.537284 kubelet[2088]: E0510 00:49:30.537211 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:31.537443 kubelet[2088]: E0510 00:49:31.537402 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:32.538437 kubelet[2088]: E0510 00:49:32.538383 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:33.539603 kubelet[2088]: E0510 00:49:33.539522 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:34.540607 kubelet[2088]: E0510 00:49:34.540488 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:35.541310 kubelet[2088]: E0510 00:49:35.541208 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:35.611339 kubelet[2088]: E0510 00:49:35.611285 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": context deadline exceeded" interval="1.6s" May 10 00:49:36.439014 kubelet[2088]: E0510 00:49:36.438968 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:36.542370 kubelet[2088]: E0510 00:49:36.542313 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:37.542924 kubelet[2088]: E0510 00:49:37.542880 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:38.543278 kubelet[2088]: E0510 00:49:38.543238 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:39.543707 kubelet[2088]: E0510 00:49:39.543657 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:40.544044 kubelet[2088]: E0510 00:49:40.544002 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:41.544131 kubelet[2088]: E0510 00:49:41.544089 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:42.545291 kubelet[2088]: E0510 00:49:42.545248 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:43.545789 kubelet[2088]: E0510 00:49:43.545746 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:44.546208 kubelet[2088]: E0510 00:49:44.546164 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:45.248133 kubelet[2088]: E0510 00:49:45.248055 2088 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:49:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:49:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:49:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-10T00:49:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73306098},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\\\",\\\"registry.k8s.io/kube-proxy:v1.32.4\\\"],\\\"sizeBytes\\\":30916875},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.20.182\": Patch \"https://172.31.17.3:6443/api/v1/nodes/172.31.20.182/status?timeout=10s\": context deadline exceeded" May 10 00:49:45.546819 kubelet[2088]: E0510 00:49:45.546686 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:46.547369 kubelet[2088]: E0510 00:49:46.547331 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:47.212215 kubelet[2088]: E0510 00:49:47.212166 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.182?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" May 10 00:49:47.547859 kubelet[2088]: E0510 00:49:47.547742 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:49:48.548707 kubelet[2088]: E0510 00:49:48.548646 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"