May 10 00:43:48.983896 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025
May 10 00:43:48.983926 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:43:48.983943 kernel: BIOS-provided physical RAM map:
May 10 00:43:48.983953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 10 00:43:48.983962 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 10 00:43:48.983972 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 10 00:43:48.987055 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 10 00:43:48.987086 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 10 00:43:48.987104 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 10 00:43:48.987116 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 10 00:43:48.987129 kernel: NX (Execute Disable) protection: active
May 10 00:43:48.987140 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 10 00:43:48.987153 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 10 00:43:48.987166 kernel: extended physical RAM map:
May 10 00:43:48.987184 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 10 00:43:48.987197 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
May 10 00:43:48.987210 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
May 10 00:43:48.987223 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
May 10 00:43:48.987236 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 10 00:43:48.987249 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 10 00:43:48.987260 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 10 00:43:48.987271 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 10 00:43:48.987281 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 10 00:43:48.987291 kernel: efi: EFI v2.70 by EDK II
May 10 00:43:48.987305 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
May 10 00:43:48.987316 kernel: SMBIOS 2.7 present.
May 10 00:43:48.987327 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 10 00:43:48.987338 kernel: Hypervisor detected: KVM
May 10 00:43:48.987350 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 00:43:48.987363 kernel: kvm-clock: cpu 0, msr 47196001, primary cpu clock
May 10 00:43:48.987375 kernel: kvm-clock: using sched offset of 3987309961 cycles
May 10 00:43:48.987389 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 00:43:48.987402 kernel: tsc: Detected 2499.996 MHz processor
May 10 00:43:48.987414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 00:43:48.987426 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 00:43:48.987441 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 10 00:43:48.987455 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 00:43:48.987468 kernel: Using GB pages for direct mapping
May 10 00:43:48.987481 kernel: Secure boot disabled
May 10 00:43:48.987495 kernel: ACPI: Early table checksum verification disabled
May 10 00:43:48.987511 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 10 00:43:48.987523 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 10 00:43:48.987539 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 10 00:43:48.987553 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 10 00:43:48.987567 kernel: ACPI: FACS 0x00000000789D0000 000040
May 10 00:43:48.987581 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 10 00:43:48.987595 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 10 00:43:48.987608 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 10 00:43:48.987618 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 10 00:43:48.987632 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 10 00:43:48.987644 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 10 00:43:48.987655 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 10 00:43:48.987666 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 10 00:43:48.987678 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 10 00:43:48.987690 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 10 00:43:48.987701 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 10 00:43:48.987723 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 10 00:43:48.987735 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 10 00:43:48.987750 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 10 00:43:48.987763 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 10 00:43:48.987774 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 10 00:43:48.987786 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 10 00:43:48.987797 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 10 00:43:48.987810 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 10 00:43:48.987822 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 10 00:43:48.987834 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 10 00:43:48.987847 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 10 00:43:48.987863 kernel: NUMA: Initialized distance table, cnt=1
May 10 00:43:48.987877 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 10 00:43:48.987892 kernel: Zone ranges:
May 10 00:43:48.987906 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 00:43:48.987919 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 10 00:43:48.987934 kernel: Normal empty
May 10 00:43:48.987949 kernel: Movable zone start for each node
May 10 00:43:48.987960 kernel: Early memory node ranges
May 10 00:43:48.987971 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 10 00:43:48.988001 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 10 00:43:48.988014 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 10 00:43:48.988027 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 10 00:43:48.988042 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 00:43:48.988055 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 10 00:43:48.988068 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 10 00:43:48.988082 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 10 00:43:48.988096 kernel: ACPI: PM-Timer IO Port: 0xb008
May 10 00:43:48.988109 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 00:43:48.988128 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 10 00:43:48.988143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 00:43:48.988158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 00:43:48.988173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 00:43:48.988189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 00:43:48.988203 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 00:43:48.988217 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 10 00:43:48.988232 kernel: TSC deadline timer available
May 10 00:43:48.988245 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 10 00:43:48.988257 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 10 00:43:48.988274 kernel: Booting paravirtualized kernel on KVM
May 10 00:43:48.988289 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 00:43:48.988303 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 10 00:43:48.988315 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 10 00:43:48.988327 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 10 00:43:48.988341 kernel: pcpu-alloc: [0] 0 1
May 10 00:43:48.988353 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
May 10 00:43:48.988366 kernel: kvm-guest: PV spinlocks enabled
May 10 00:43:48.988379 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 00:43:48.988394 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 10 00:43:48.988406 kernel: Policy zone: DMA32
May 10 00:43:48.988420 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:43:48.988432 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:43:48.988446 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 00:43:48.988460 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 10 00:43:48.988475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:43:48.988492 kernel: Memory: 1876640K/2037804K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 160904K reserved, 0K cma-reserved)
May 10 00:43:48.988506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:43:48.988520 kernel: Kernel/User page tables isolation: enabled
May 10 00:43:48.988534 kernel: ftrace: allocating 34584 entries in 136 pages
May 10 00:43:48.988548 kernel: ftrace: allocated 136 pages with 2 groups
May 10 00:43:48.988562 kernel: rcu: Hierarchical RCU implementation.
May 10 00:43:48.988578 kernel: rcu: RCU event tracing is enabled.
May 10 00:43:48.988605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:43:48.988621 kernel: Rude variant of Tasks RCU enabled.
May 10 00:43:48.988636 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:43:48.988651 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:43:48.988666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:43:48.988683 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 10 00:43:48.988698 kernel: random: crng init done
May 10 00:43:48.988712 kernel: Console: colour dummy device 80x25
May 10 00:43:48.988727 kernel: printk: console [tty0] enabled
May 10 00:43:48.988742 kernel: printk: console [ttyS0] enabled
May 10 00:43:48.988757 kernel: ACPI: Core revision 20210730
May 10 00:43:48.988772 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 10 00:43:48.988790 kernel: APIC: Switch to symmetric I/O mode setup
May 10 00:43:48.988805 kernel: x2apic enabled
May 10 00:43:48.988820 kernel: Switched APIC routing to physical x2apic.
May 10 00:43:48.988835 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 10 00:43:48.988850 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
May 10 00:43:48.988865 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 10 00:43:48.988880 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 10 00:43:48.988897 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 00:43:48.988912 kernel: Spectre V2 : Mitigation: Retpolines
May 10 00:43:48.988926 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 00:43:48.988941 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 10 00:43:48.988956 kernel: RETBleed: Vulnerable
May 10 00:43:48.988971 kernel: Speculative Store Bypass: Vulnerable
May 10 00:43:48.988999 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 10 00:43:48.989014 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 10 00:43:48.989029 kernel: GDS: Unknown: Dependent on hypervisor status
May 10 00:43:48.989043 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 00:43:48.989058 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 00:43:48.989075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 00:43:48.989090 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 10 00:43:48.989105 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 10 00:43:48.989120 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 10 00:43:48.989134 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 10 00:43:48.989149 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 10 00:43:48.989164 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 10 00:43:48.989179 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 00:43:48.989193 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 10 00:43:48.989208 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 10 00:43:48.989222 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 10 00:43:48.989239 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 10 00:43:48.989254 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 10 00:43:48.989268 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 10 00:43:48.989283 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 10 00:43:48.989298 kernel: Freeing SMP alternatives memory: 32K
May 10 00:43:48.989312 kernel: pid_max: default: 32768 minimum: 301
May 10 00:43:48.989327 kernel: LSM: Security Framework initializing
May 10 00:43:48.989341 kernel: SELinux: Initializing.
May 10 00:43:48.989356 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 00:43:48.989371 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 00:43:48.989386 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 10 00:43:48.989403 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 10 00:43:48.989418 kernel: signal: max sigframe size: 3632
May 10 00:43:48.989433 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:43:48.989448 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 10 00:43:48.989463 kernel: smp: Bringing up secondary CPUs ...
May 10 00:43:48.989477 kernel: x86: Booting SMP configuration:
May 10 00:43:48.989492 kernel: .... node #0, CPUs: #1
May 10 00:43:48.989508 kernel: kvm-clock: cpu 1, msr 47196041, secondary cpu clock
May 10 00:43:48.989523 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
May 10 00:43:48.989542 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 10 00:43:48.989558 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 10 00:43:48.989573 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:43:48.989588 kernel: smpboot: Max logical packages: 1
May 10 00:43:48.989603 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
May 10 00:43:48.989618 kernel: devtmpfs: initialized
May 10 00:43:48.989633 kernel: x86/mm: Memory block size: 128MB
May 10 00:43:48.989648 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 10 00:43:48.989663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:43:48.989678 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:43:48.989688 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:43:48.989700 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:43:48.989713 kernel: audit: initializing netlink subsys (disabled)
May 10 00:43:48.989725 kernel: audit: type=2000 audit(1746837828.771:1): state=initialized audit_enabled=0 res=1
May 10 00:43:48.989738 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:43:48.989750 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 00:43:48.989762 kernel: cpuidle: using governor menu
May 10 00:43:48.989775 kernel: ACPI: bus type PCI registered
May 10 00:43:48.989790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:43:48.989804 kernel: dca service started, version 1.12.1
May 10 00:43:48.989816 kernel: PCI: Using configuration type 1 for base access
May 10 00:43:48.989829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 10 00:43:48.989841 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:43:48.989854 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:43:48.989866 kernel: ACPI: Added _OSI(Module Device)
May 10 00:43:48.989879 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:43:48.989892 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:43:48.989908 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:43:48.989920 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 10 00:43:48.989933 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 10 00:43:48.989946 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 10 00:43:48.989958 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 10 00:43:48.989971 kernel: ACPI: Interpreter enabled
May 10 00:43:48.990000 kernel: ACPI: PM: (supports S0 S5)
May 10 00:43:48.990013 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 00:43:48.990026 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 00:43:48.990041 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 10 00:43:48.990054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:43:48.990261 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:43:48.990383 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 10 00:43:48.990398 kernel: acpiphp: Slot [3] registered
May 10 00:43:48.990412 kernel: acpiphp: Slot [4] registered
May 10 00:43:48.990424 kernel: acpiphp: Slot [5] registered
May 10 00:43:48.990440 kernel: acpiphp: Slot [6] registered
May 10 00:43:48.990453 kernel: acpiphp: Slot [7] registered
May 10 00:43:48.990465 kernel: acpiphp: Slot [8] registered
May 10 00:43:48.990477 kernel: acpiphp: Slot [9] registered
May 10 00:43:48.990489 kernel: acpiphp: Slot [10] registered
May 10 00:43:48.990502 kernel: acpiphp: Slot [11] registered
May 10 00:43:48.990514 kernel: acpiphp: Slot [12] registered
May 10 00:43:48.990527 kernel: acpiphp: Slot [13] registered
May 10 00:43:48.990539 kernel: acpiphp: Slot [14] registered
May 10 00:43:48.990552 kernel: acpiphp: Slot [15] registered
May 10 00:43:48.990568 kernel: acpiphp: Slot [16] registered
May 10 00:43:48.990580 kernel: acpiphp: Slot [17] registered
May 10 00:43:48.990592 kernel: acpiphp: Slot [18] registered
May 10 00:43:48.990605 kernel: acpiphp: Slot [19] registered
May 10 00:43:48.990617 kernel: acpiphp: Slot [20] registered
May 10 00:43:48.990631 kernel: acpiphp: Slot [21] registered
May 10 00:43:48.990645 kernel: acpiphp: Slot [22] registered
May 10 00:43:48.990660 kernel: acpiphp: Slot [23] registered
May 10 00:43:48.990674 kernel: acpiphp: Slot [24] registered
May 10 00:43:48.990691 kernel: acpiphp: Slot [25] registered
May 10 00:43:48.990706 kernel: acpiphp: Slot [26] registered
May 10 00:43:48.990720 kernel: acpiphp: Slot [27] registered
May 10 00:43:48.990735 kernel: acpiphp: Slot [28] registered
May 10 00:43:48.990748 kernel: acpiphp: Slot [29] registered
May 10 00:43:48.990760 kernel: acpiphp: Slot [30] registered
May 10 00:43:48.990774 kernel: acpiphp: Slot [31] registered
May 10 00:43:48.990789 kernel: PCI host bridge to bus 0000:00
May 10 00:43:48.990921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 00:43:48.991054 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 00:43:48.991168 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 00:43:48.991280 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 10 00:43:48.991390 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 10 00:43:48.991500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:43:48.991675 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 10 00:43:48.991823 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 10 00:43:48.991960 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
May 10 00:43:48.992099 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 10 00:43:48.992221 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 10 00:43:48.992342 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 10 00:43:48.992464 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 10 00:43:48.992584 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 10 00:43:48.992710 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 10 00:43:48.992831 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 10 00:43:48.992961 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
May 10 00:43:48.993096 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
May 10 00:43:48.993218 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 10 00:43:48.993338 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
May 10 00:43:48.993459 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 10 00:43:48.993589 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 10 00:43:48.993712 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
May 10 00:43:48.993841 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 10 00:43:48.993964 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
May 10 00:43:48.993981 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 00:43:49.012064 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 00:43:49.012082 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 00:43:49.012104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 00:43:49.012119 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 10 00:43:49.012135 kernel: iommu: Default domain type: Translated
May 10 00:43:49.012150 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 00:43:49.012342 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 10 00:43:49.012477 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 10 00:43:49.012605 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 10 00:43:49.012625 kernel: vgaarb: loaded
May 10 00:43:49.012641 kernel: pps_core: LinuxPPS API ver. 1 registered
May 10 00:43:49.012660 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 10 00:43:49.012675 kernel: PTP clock support registered
May 10 00:43:49.012690 kernel: Registered efivars operations
May 10 00:43:49.012704 kernel: PCI: Using ACPI for IRQ routing
May 10 00:43:49.012719 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 00:43:49.012734 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
May 10 00:43:49.012748 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 10 00:43:49.012762 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 10 00:43:49.012776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 10 00:43:49.012792 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 10 00:43:49.012808 kernel: clocksource: Switched to clocksource kvm-clock
May 10 00:43:49.012820 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:43:49.012835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:43:49.012850 kernel: pnp: PnP ACPI init
May 10 00:43:49.012864 kernel: pnp: PnP ACPI: found 5 devices
May 10 00:43:49.012879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 00:43:49.012893 kernel: NET: Registered PF_INET protocol family
May 10 00:43:49.012908 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 00:43:49.012925 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 10 00:43:49.012939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:43:49.012954 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:43:49.012968 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 10 00:43:49.012983 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 10 00:43:49.013039 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 00:43:49.013054 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 00:43:49.013068 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:43:49.013085 kernel: NET: Registered PF_XDP protocol family
May 10 00:43:49.013214 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 00:43:49.013325 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 00:43:49.013440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 00:43:49.013544 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 10 00:43:49.013647 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 10 00:43:49.013772 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 10 00:43:49.013894 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 10 00:43:49.013915 kernel: PCI: CLS 0 bytes, default 64
May 10 00:43:49.013930 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 10 00:43:49.013944 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
May 10 00:43:49.013958 kernel: clocksource: Switched to clocksource tsc
May 10 00:43:49.013972 kernel: Initialise system trusted keyrings
May 10 00:43:49.013995 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 10 00:43:49.014009 kernel: Key type asymmetric registered
May 10 00:43:49.014023 kernel: Asymmetric key parser 'x509' registered
May 10 00:43:49.014037 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 10 00:43:49.014054 kernel: io scheduler mq-deadline registered
May 10 00:43:49.014068 kernel: io scheduler kyber registered
May 10 00:43:49.014081 kernel: io scheduler bfq registered
May 10 00:43:49.014095 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 10 00:43:49.014109 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 00:43:49.014123 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 10 00:43:49.014137 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 10 00:43:49.014151 kernel: i8042: Warning: Keylock active
May 10 00:43:49.014165 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 10 00:43:49.014181 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 10 00:43:49.014320 kernel: rtc_cmos 00:00: RTC can wake from S4
May 10 00:43:49.014446 kernel: rtc_cmos 00:00: registered as rtc0
May 10 00:43:49.014560 kernel: rtc_cmos 00:00: setting system clock to 2025-05-10T00:43:48 UTC (1746837828)
May 10 00:43:49.014695 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 10 00:43:49.014713 kernel: intel_pstate: CPU model not supported
May 10 00:43:49.014726 kernel: efifb: probing for efifb
May 10 00:43:49.014741 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 10 00:43:49.014760 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 10 00:43:49.014774 kernel: efifb: scrolling: redraw
May 10 00:43:49.014788 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 10 00:43:49.014803 kernel: Console: switching to colour frame buffer device 100x37
May 10 00:43:49.014819 kernel: fb0: EFI VGA frame buffer device
May 10 00:43:49.014834 kernel: pstore: Registered efi as persistent store backend
May 10 00:43:49.014873 kernel: NET: Registered PF_INET6 protocol family
May 10 00:43:49.014891 kernel: Segment Routing with IPv6
May 10 00:43:49.014906 kernel: In-situ OAM (IOAM) with IPv6
May 10 00:43:49.014925 kernel: NET: Registered PF_PACKET protocol family
May 10 00:43:49.014941 kernel: Key type dns_resolver registered
May 10 00:43:49.014956 kernel: IPI shorthand broadcast: enabled
May 10 00:43:49.014972 kernel: sched_clock: Marking stable (359415110, 132305096)->(554759411, -63039205)
May 10 00:43:49.015006 kernel: registered taskstats version 1
May 10 00:43:49.015022 kernel: Loading compiled-in X.509 certificates
May 10 00:43:49.015039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117'
May 10 00:43:49.015057 kernel: Key type .fscrypt registered
May 10 00:43:49.015072 kernel: Key type fscrypt-provisioning registered
May 10 00:43:49.015090 kernel: pstore: Using crash dump compression: deflate
May 10 00:43:49.015107 kernel: ima: No TPM chip found, activating TPM-bypass!
May 10 00:43:49.015122 kernel: ima: Allocated hash algorithm: sha1
May 10 00:43:49.015138 kernel: ima: No architecture policies found
May 10 00:43:49.015153 kernel: clk: Disabling unused clocks
May 10 00:43:49.015169 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 10 00:43:49.015185 kernel: Write protecting the kernel read-only data: 28672k
May 10 00:43:49.015201 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 10 00:43:49.015217 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 10 00:43:49.015235 kernel: Run /init as init process
May 10 00:43:49.015250 kernel: with arguments:
May 10 00:43:49.015266 kernel: /init
May 10 00:43:49.015281 kernel: with environment:
May 10 00:43:49.015297 kernel: HOME=/
May 10 00:43:49.015312 kernel: TERM=linux
May 10 00:43:49.015327 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 00:43:49.015346 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:43:49.015368 systemd[1]: Detected virtualization amazon.
May 10 00:43:49.015385 systemd[1]: Detected architecture x86-64.
May 10 00:43:49.015401 systemd[1]: Running in initrd.
May 10 00:43:49.015416 systemd[1]: No hostname configured, using default hostname.
May 10 00:43:49.015432 systemd[1]: Hostname set to .
May 10 00:43:49.015449 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:43:49.015465 systemd[1]: Queued start job for default target initrd.target.
May 10 00:43:49.015484 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:43:49.015503 systemd[1]: Reached target cryptsetup.target.
May 10 00:43:49.015519 systemd[1]: Reached target paths.target.
May 10 00:43:49.015536 systemd[1]: Reached target slices.target.
May 10 00:43:49.015552 systemd[1]: Reached target swap.target.
May 10 00:43:49.015568 systemd[1]: Reached target timers.target.
May 10 00:43:49.015587 systemd[1]: Listening on iscsid.socket.
May 10 00:43:49.015603 systemd[1]: Listening on iscsiuio.socket.
May 10 00:43:49.015620 systemd[1]: Listening on systemd-journald-audit.socket.
May 10 00:43:49.015636 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 10 00:43:49.015653 systemd[1]: Listening on systemd-journald.socket.
May 10 00:43:49.015669 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:43:49.015685 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:43:49.015701 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:43:49.015728 systemd[1]: Reached target sockets.target.
May 10 00:43:49.015745 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:43:49.015761 systemd[1]: Finished network-cleanup.service.
May 10 00:43:49.015777 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:43:49.015793 systemd[1]: Starting systemd-journald.service...
May 10 00:43:49.015809 systemd[1]: Starting systemd-modules-load.service...
May 10 00:43:49.015826 systemd[1]: Starting systemd-resolved.service...
May 10 00:43:49.015842 systemd[1]: Starting systemd-vconsole-setup.service...
May 10 00:43:49.015858 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:43:49.015877 kernel: audit: type=1130 audit(1746837828.994:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.015894 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:43:49.015910 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 10 00:43:49.015933 systemd-journald[185]: Journal started
May 10 00:43:49.016033 systemd-journald[185]: Runtime Journal (/run/log/journal/ec29aa5de02668fbc62f58677d4839f8) is 4.8M, max 38.3M, 33.5M free.
May 10 00:43:48.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.010417 systemd-modules-load[186]: Inserted module 'overlay'
May 10 00:43:49.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.027004 kernel: audit: type=1130 audit(1746837829.018:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.027048 systemd[1]: Started systemd-journald.service.
May 10 00:43:49.038894 systemd-resolved[187]: Positive Trust Anchors:
May 10 00:43:49.040639 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:43:49.043386 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:43:49.084219 kernel: audit: type=1130 audit(1746837829.042:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.084255 kernel: audit: type=1130 audit(1746837829.063:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.084275 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:43:49.084295 kernel: audit: type=1130 audit(1746837829.064:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.045128 systemd[1]: Finished systemd-vconsole-setup.service.
May 10 00:43:49.061273 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 10 00:43:49.093920 kernel: Bridge firewalling registered
May 10 00:43:49.065381 systemd[1]: Started systemd-resolved.service.
May 10 00:43:49.073744 systemd[1]: Reached target nss-lookup.target.
May 10 00:43:49.090429 systemd[1]: Starting dracut-cmdline-ask.service...
May 10 00:43:49.091753 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 10 00:43:49.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.098070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 10 00:43:49.122156 kernel: audit: type=1130 audit(1746837829.111:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.111935 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 10 00:43:49.126182 systemd[1]: Finished dracut-cmdline-ask.service.
May 10 00:43:49.138116 kernel: audit: type=1130 audit(1746837829.126:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.136621 systemd[1]: Starting dracut-cmdline.service...
May 10 00:43:49.144048 kernel: SCSI subsystem initialized
May 10 00:43:49.150162 dracut-cmdline[202]: dracut-dracut-053
May 10 00:43:49.154730 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:43:49.173300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:43:49.173335 kernel: device-mapper: uevent: version 1.0.3
May 10 00:43:49.173362 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 10 00:43:49.168180 systemd-modules-load[186]: Inserted module 'dm_multipath'
May 10 00:43:49.190557 kernel: audit: type=1130 audit(1746837829.172:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.169107 systemd[1]: Finished systemd-modules-load.service.
May 10 00:43:49.182596 systemd[1]: Starting systemd-sysctl.service...
May 10 00:43:49.197226 systemd[1]: Finished systemd-sysctl.service.
May 10 00:43:49.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.208020 kernel: audit: type=1130 audit(1746837829.198:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.246010 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:43:49.265014 kernel: iscsi: registered transport (tcp)
May 10 00:43:49.291353 kernel: iscsi: registered transport (qla4xxx)
May 10 00:43:49.291428 kernel: QLogic iSCSI HBA Driver
May 10 00:43:49.322788 systemd[1]: Finished dracut-cmdline.service.
May 10 00:43:49.324905 systemd[1]: Starting dracut-pre-udev.service...
May 10 00:43:49.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.377016 kernel: raid6: avx512x4 gen() 18398 MB/s
May 10 00:43:49.395013 kernel: raid6: avx512x4 xor() 8171 MB/s
May 10 00:43:49.413008 kernel: raid6: avx512x2 gen() 18235 MB/s
May 10 00:43:49.431011 kernel: raid6: avx512x2 xor() 24189 MB/s
May 10 00:43:49.449006 kernel: raid6: avx512x1 gen() 18215 MB/s
May 10 00:43:49.467012 kernel: raid6: avx512x1 xor() 21791 MB/s
May 10 00:43:49.485008 kernel: raid6: avx2x4 gen() 18133 MB/s
May 10 00:43:49.503015 kernel: raid6: avx2x4 xor() 7535 MB/s
May 10 00:43:49.521008 kernel: raid6: avx2x2 gen() 18134 MB/s
May 10 00:43:49.539011 kernel: raid6: avx2x2 xor() 18029 MB/s
May 10 00:43:49.557006 kernel: raid6: avx2x1 gen() 14015 MB/s
May 10 00:43:49.575012 kernel: raid6: avx2x1 xor() 15700 MB/s
May 10 00:43:49.593005 kernel: raid6: sse2x4 gen() 9527 MB/s
May 10 00:43:49.611012 kernel: raid6: sse2x4 xor() 6071 MB/s
May 10 00:43:49.629005 kernel: raid6: sse2x2 gen() 10548 MB/s
May 10 00:43:49.647013 kernel: raid6: sse2x2 xor() 6268 MB/s
May 10 00:43:49.665005 kernel: raid6: sse2x1 gen() 9423 MB/s
May 10 00:43:49.683219 kernel: raid6: sse2x1 xor() 4871 MB/s
May 10 00:43:49.683270 kernel: raid6: using algorithm avx512x4 gen() 18398 MB/s
May 10 00:43:49.683289 kernel: raid6: .... xor() 8171 MB/s, rmw enabled
May 10 00:43:49.684341 kernel: raid6: using avx512x2 recovery algorithm
May 10 00:43:49.699015 kernel: xor: automatically using best checksumming function avx
May 10 00:43:49.800014 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 10 00:43:49.808888 systemd[1]: Finished dracut-pre-udev.service.
May 10 00:43:49.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.807000 audit: BPF prog-id=7 op=LOAD
May 10 00:43:49.807000 audit: BPF prog-id=8 op=LOAD
May 10 00:43:49.810556 systemd[1]: Starting systemd-udevd.service...
May 10 00:43:49.823936 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 10 00:43:49.829315 systemd[1]: Started systemd-udevd.service.
May 10 00:43:49.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.834304 systemd[1]: Starting dracut-pre-trigger.service...
May 10 00:43:49.852288 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
May 10 00:43:49.884064 systemd[1]: Finished dracut-pre-trigger.service.
May 10 00:43:49.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.885650 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:43:49.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:49.927594 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:43:49.991117 kernel: cryptd: max_cpu_qlen set to 1000
May 10 00:43:50.009951 kernel: nvme nvme0: pci function 0000:00:04.0
May 10 00:43:50.010214 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 10 00:43:50.024007 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 10 00:43:50.040369 kernel: AVX2 version of gcm_enc/dec engaged.
May 10 00:43:50.040431 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:43:50.040453 kernel: GPT:9289727 != 16777215
May 10 00:43:50.040468 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:43:50.040479 kernel: GPT:9289727 != 16777215
May 10 00:43:50.040488 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:43:50.040498 kernel: AES CTR mode by8 optimization enabled
May 10 00:43:50.040509 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 10 00:43:50.045811 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 10 00:43:50.057796 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 10 00:43:50.057915 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 10 00:43:50.058071 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:8e:ce:03:54:57
May 10 00:43:50.059615 (udev-worker)[438]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:43:50.130012 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (430)
May 10 00:43:50.157882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 10 00:43:50.162756 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 10 00:43:50.168572 systemd[1]: Starting disk-uuid.service...
May 10 00:43:50.174162 disk-uuid[572]: Primary Header is updated.
May 10 00:43:50.174162 disk-uuid[572]: Secondary Entries is updated.
May 10 00:43:50.174162 disk-uuid[572]: Secondary Header is updated.
May 10 00:43:50.195472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 10 00:43:50.210116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:43:50.218881 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 10 00:43:51.191292 disk-uuid[574]: The operation has completed successfully.
May 10 00:43:51.192272 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 10 00:43:51.326136 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:43:51.326251 systemd[1]: Finished disk-uuid.service.
May 10 00:43:51.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.333486 systemd[1]: Starting verity-setup.service...
May 10 00:43:51.374022 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 10 00:43:51.480957 systemd[1]: Found device dev-mapper-usr.device.
May 10 00:43:51.484248 systemd[1]: Mounting sysusr-usr.mount...
May 10 00:43:51.489461 systemd[1]: Finished verity-setup.service.
May 10 00:43:51.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.576014 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 10 00:43:51.576501 systemd[1]: Mounted sysusr-usr.mount.
May 10 00:43:51.580533 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 10 00:43:51.581307 systemd[1]: Starting ignition-setup.service...
May 10 00:43:51.585400 systemd[1]: Starting parse-ip-for-networkd.service...
May 10 00:43:51.610529 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:43:51.610609 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 10 00:43:51.610630 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 10 00:43:51.630022 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 10 00:43:51.644137 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 10 00:43:51.655501 systemd[1]: Finished ignition-setup.service.
May 10 00:43:51.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.657599 systemd[1]: Starting ignition-fetch-offline.service...
May 10 00:43:51.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.670188 systemd[1]: Finished parse-ip-for-networkd.service.
May 10 00:43:51.670000 audit: BPF prog-id=9 op=LOAD
May 10 00:43:51.673516 systemd[1]: Starting systemd-networkd.service...
May 10 00:43:51.696904 systemd-networkd[1107]: lo: Link UP
May 10 00:43:51.696917 systemd-networkd[1107]: lo: Gained carrier
May 10 00:43:51.697891 systemd-networkd[1107]: Enumeration completed
May 10 00:43:51.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.698024 systemd[1]: Started systemd-networkd.service.
May 10 00:43:51.698555 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:43:51.700631 systemd[1]: Reached target network.target.
May 10 00:43:51.702164 systemd-networkd[1107]: eth0: Link UP
May 10 00:43:51.702169 systemd-networkd[1107]: eth0: Gained carrier
May 10 00:43:51.706622 systemd[1]: Starting iscsiuio.service...
May 10 00:43:51.713503 systemd[1]: Started iscsiuio.service.
May 10 00:43:51.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.715098 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.16.44/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 10 00:43:51.717958 systemd[1]: Starting iscsid.service...
May 10 00:43:51.719315 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:43:51.719315 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 10 00:43:51.719315 iscsid[1112]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 10 00:43:51.719315 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 10 00:43:51.719315 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored.
May 10 00:43:51.719315 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:43:51.719315 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 10 00:43:51.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.719922 systemd[1]: Started iscsid.service.
May 10 00:43:51.722058 systemd[1]: Starting dracut-initqueue.service...
May 10 00:43:51.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:51.732963 systemd[1]: Finished dracut-initqueue.service.
May 10 00:43:51.733765 systemd[1]: Reached target remote-fs-pre.target.
May 10 00:43:51.734451 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:43:51.736128 systemd[1]: Reached target remote-fs.target.
May 10 00:43:51.738706 systemd[1]: Starting dracut-pre-mount.service...
May 10 00:43:51.749969 systemd[1]: Finished dracut-pre-mount.service.
May 10 00:43:51.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.286178 ignition[1101]: Ignition 2.14.0
May 10 00:43:52.286191 ignition[1101]: Stage: fetch-offline
May 10 00:43:52.286300 ignition[1101]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.286331 ignition[1101]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.301140 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.301374 ignition[1101]: Ignition finished successfully
May 10 00:43:52.303508 systemd[1]: Finished ignition-fetch-offline.service.
May 10 00:43:52.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.305318 systemd[1]: Starting ignition-fetch.service...
May 10 00:43:52.314369 ignition[1131]: Ignition 2.14.0
May 10 00:43:52.314381 ignition[1131]: Stage: fetch
May 10 00:43:52.314580 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.314618 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.322454 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.323319 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.335338 ignition[1131]: INFO : PUT result: OK
May 10 00:43:52.338023 ignition[1131]: DEBUG : parsed url from cmdline: ""
May 10 00:43:52.338023 ignition[1131]: INFO : no config URL provided
May 10 00:43:52.338023 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign"
May 10 00:43:52.338023 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign"
May 10 00:43:52.338023 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.341945 ignition[1131]: INFO : PUT result: OK
May 10 00:43:52.342603 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 10 00:43:52.342603 ignition[1131]: INFO : GET result: OK
May 10 00:43:52.344441 ignition[1131]: DEBUG : parsing config with SHA512: e33d02755179d94232aada7aa4f15870a83a2b9b3bebebba5df5c6414cbdf146601f16788e9f989418896b417dd21c4a0faa8a94893847a6f416a47e9fdda837
May 10 00:43:52.349065 unknown[1131]: fetched base config from "system"
May 10 00:43:52.349081 unknown[1131]: fetched base config from "system"
May 10 00:43:52.349669 ignition[1131]: fetch: fetch complete
May 10 00:43:52.349087 unknown[1131]: fetched user config from "aws"
May 10 00:43:52.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.349674 ignition[1131]: fetch: fetch passed
May 10 00:43:52.351046 systemd[1]: Finished ignition-fetch.service.
May 10 00:43:52.349719 ignition[1131]: Ignition finished successfully
May 10 00:43:52.352634 systemd[1]: Starting ignition-kargs.service...
May 10 00:43:52.362299 ignition[1137]: Ignition 2.14.0
May 10 00:43:52.362312 ignition[1137]: Stage: kargs
May 10 00:43:52.362465 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.362487 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.368816 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.369635 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.370317 ignition[1137]: INFO : PUT result: OK
May 10 00:43:52.373267 ignition[1137]: kargs: kargs passed
May 10 00:43:52.373339 ignition[1137]: Ignition finished successfully
May 10 00:43:52.375568 systemd[1]: Finished ignition-kargs.service.
May 10 00:43:52.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.377520 systemd[1]: Starting ignition-disks.service...
May 10 00:43:52.386468 ignition[1143]: Ignition 2.14.0
May 10 00:43:52.386481 ignition[1143]: Stage: disks
May 10 00:43:52.386677 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.386714 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.394604 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.395654 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.396665 ignition[1143]: INFO : PUT result: OK
May 10 00:43:52.398568 ignition[1143]: disks: disks passed
May 10 00:43:52.399256 ignition[1143]: Ignition finished successfully
May 10 00:43:52.400742 systemd[1]: Finished ignition-disks.service.
May 10 00:43:52.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.401534 systemd[1]: Reached target initrd-root-device.target.
May 10 00:43:52.402605 systemd[1]: Reached target local-fs-pre.target.
May 10 00:43:52.403820 systemd[1]: Reached target local-fs.target.
May 10 00:43:52.404869 systemd[1]: Reached target sysinit.target.
May 10 00:43:52.405979 systemd[1]: Reached target basic.target.
May 10 00:43:52.408324 systemd[1]: Starting systemd-fsck-root.service...
May 10 00:43:52.444895 systemd-fsck[1151]: ROOT: clean, 623/553520 files, 56023/553472 blocks
May 10 00:43:52.448350 systemd[1]: Finished systemd-fsck-root.service.
May 10 00:43:52.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.450309 systemd[1]: Mounting sysroot.mount...
May 10 00:43:52.471178 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 10 00:43:52.470579 systemd[1]: Mounted sysroot.mount.
May 10 00:43:52.472051 systemd[1]: Reached target initrd-root-fs.target.
May 10 00:43:52.480751 systemd[1]: Mounting sysroot-usr.mount...
May 10 00:43:52.482692 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 10 00:43:52.483489 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:43:52.483517 systemd[1]: Reached target ignition-diskful.target.
May 10 00:43:52.485646 systemd[1]: Mounted sysroot-usr.mount.
May 10 00:43:52.489085 systemd[1]: Starting initrd-setup-root.service...
May 10 00:43:52.502183 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory
May 10 00:43:52.521895 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:43:52.530582 initrd-setup-root[1181]: cut: /sysroot/etc/group: No such file or directory
May 10 00:43:52.535279 initrd-setup-root[1189]: cut: /sysroot/etc/shadow: No such file or directory
May 10 00:43:52.541277 initrd-setup-root[1197]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 00:43:52.552094 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1179)
May 10 00:43:52.552139 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:43:52.552160 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 10 00:43:52.552178 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 10 00:43:52.573042 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 10 00:43:52.583831 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 00:43:52.711592 systemd[1]: Finished initrd-setup-root.service.
May 10 00:43:52.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.713332 systemd[1]: Starting ignition-mount.service...
May 10 00:43:52.716340 systemd[1]: Starting sysroot-boot.service...
May 10 00:43:52.721331 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 10 00:43:52.721425 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 10 00:43:52.735818 ignition[1233]: INFO : Ignition 2.14.0
May 10 00:43:52.736761 ignition[1233]: INFO : Stage: mount
May 10 00:43:52.739554 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.739554 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.746376 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.747292 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.749038 ignition[1233]: INFO : PUT result: OK
May 10 00:43:52.754565 ignition[1233]: INFO : mount: mount passed
May 10 00:43:52.754565 ignition[1233]: INFO : Ignition finished successfully
May 10 00:43:52.756806 systemd[1]: Finished ignition-mount.service.
May 10 00:43:52.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.758306 systemd[1]: Starting ignition-files.service...
May 10 00:43:52.762638 systemd[1]: Finished sysroot-boot.service.
May 10 00:43:52.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:52.767294 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:43:52.786014 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243)
May 10 00:43:52.790165 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:43:52.790239 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 10 00:43:52.790259 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 10 00:43:52.838021 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 10 00:43:52.841729 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 00:43:52.852832 ignition[1262]: INFO : Ignition 2.14.0
May 10 00:43:52.852832 ignition[1262]: INFO : Stage: files
May 10 00:43:52.855812 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:52.855812 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:52.862465 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:52.863461 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:52.864530 ignition[1262]: INFO : PUT result: OK
May 10 00:43:52.867466 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping
May 10 00:43:52.874380 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 00:43:52.875948 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 00:43:52.890232 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 00:43:52.891638 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 00:43:52.893190 unknown[1262]: wrote ssh authorized keys file for user: core
May 10 00:43:52.894085 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 00:43:52.901401 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:43:52.902971 ignition[1262]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 10 00:43:53.022520 ignition[1262]: INFO : GET result: OK
May 10 00:43:53.206153 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:43:53.206153 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:43:53.211278 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:43:53.211278 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
May 10 00:43:53.211278 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:43:53.221941 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem174810630"
May 10 00:43:53.221941 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem174810630": device or resource busy
May 10 00:43:53.221941 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem174810630", trying btrfs: device or resource busy
May 10 00:43:53.221941 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem174810630"
May 10 00:43:53.221941 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem174810630"
May 10 00:43:53.236199 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem174810630"
May 10 00:43:53.237515 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem174810630"
May 10 00:43:53.237515 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
May 10 00:43:53.237515 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:43:53.237515 ignition[1262]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 10 00:43:53.542139 systemd-networkd[1107]: eth0: Gained IPv6LL
May 10 00:43:53.568796 ignition[1262]: INFO : GET result: OK
May 10 00:43:53.687658 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:43:53.690296 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 10 00:43:53.690296 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:43:53.733252 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1757041165"
May 10 00:43:53.733252 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1757041165": device or resource busy
May 10 00:43:53.733252 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1757041165", trying btrfs: device or resource busy
May 10 00:43:53.733252 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1757041165"
May 10 00:43:53.733252 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1757041165"
May 10 00:43:53.733252 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem1757041165"
May 10 00:43:53.733252 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem1757041165"
May 10 00:43:53.733252 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 10 00:43:53.733252 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 10 00:43:53.733252 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:43:53.733252 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171230094"
May 10 00:43:53.733252 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171230094": device or resource busy
May 10 00:43:53.733252 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2171230094", trying btrfs: device or resource busy
May 10 00:43:53.733252 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171230094"
May 10 00:43:53.733252 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171230094"
May 10 00:43:53.733252 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem2171230094"
May 10 00:43:53.733252 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem2171230094"
May 10 00:43:53.733252 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 10 00:43:53.733252 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:43:53.733252 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 10 00:43:53.706586 systemd[1]: mnt-oem1757041165.mount: Deactivated successfully.
May 10 00:43:54.161088 ignition[1262]: INFO : GET result: OK
May 10 00:43:54.996274 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:43:54.996274 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 10 00:43:54.999622 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:43:55.003963 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722374727"
May 10 00:43:55.012091 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722374727": device or resource busy
May 10 00:43:55.012091 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2722374727", trying btrfs: device or resource busy
May 10 00:43:55.012091 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722374727"
May 10 00:43:55.012091 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722374727"
May 10 00:43:55.012091 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem2722374727"
May 10 00:43:55.012091 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem2722374727"
May 10 00:43:55.012091 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(13): [started] processing unit "nvidia.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(13): [finished] processing unit "nvidia.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
May 10 00:43:55.012091 ignition[1262]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 00:43:55.085832 kernel: kauditd_printk_skb: 26 callbacks suppressed
May 10 00:43:55.085867 kernel: audit: type=1130 audit(1746837835.023:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.085889 kernel: audit: type=1130 audit(1746837835.042:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.085909 kernel: audit: type=1131 audit(1746837835.042:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.085928 kernel: audit: type=1130 audit(1746837835.056:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.010149 systemd[1]: mnt-oem2722374727.mount: Deactivated successfully.
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
May 10 00:43:55.088070 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:43:55.088070 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:43:55.088070 ignition[1262]: INFO : files: files passed
May 10 00:43:55.088070 ignition[1262]: INFO : Ignition finished successfully
May 10 00:43:55.140891 kernel: audit: type=1130 audit(1746837835.089:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.140929 kernel: audit: type=1131 audit(1746837835.089:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.140949 kernel: audit: type=1130 audit(1746837835.127:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.024151 systemd[1]: Finished ignition-files.service.
May 10 00:43:55.027679 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 10 00:43:55.144658 initrd-setup-root-after-ignition[1288]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:43:55.036220 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 10 00:43:55.037081 systemd[1]: Starting ignition-quench.service...
May 10 00:43:55.163856 kernel: audit: type=1130 audit(1746837835.150:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.163895 kernel: audit: type=1131 audit(1746837835.150:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.041756 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 00:43:55.041862 systemd[1]: Finished ignition-quench.service.
May 10 00:43:55.052256 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 10 00:43:55.058626 systemd[1]: Reached target ignition-complete.target.
May 10 00:43:55.176171 kernel: audit: type=1131 audit(1746837835.168:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.066882 systemd[1]: Starting initrd-parse-etc.service...
May 10 00:43:55.089453 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 00:43:55.089573 systemd[1]: Finished initrd-parse-etc.service.
May 10 00:43:55.091847 systemd[1]: Reached target initrd-fs.target.
May 10 00:43:55.105320 systemd[1]: Reached target initrd.target.
May 10 00:43:55.108345 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 10 00:43:55.109613 systemd[1]: Starting dracut-pre-pivot.service...
May 10 00:43:55.126532 systemd[1]: Finished dracut-pre-pivot.service.
May 10 00:43:55.130608 systemd[1]: Starting initrd-cleanup.service...
May 10 00:43:55.150681 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 00:43:55.150809 systemd[1]: Finished initrd-cleanup.service.
May 10 00:43:55.153417 systemd[1]: Stopped target nss-lookup.target.
May 10 00:43:55.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.164843 systemd[1]: Stopped target remote-cryptsetup.target.
May 10 00:43:55.166630 systemd[1]: Stopped target timers.target.
May 10 00:43:55.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.168464 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 00:43:55.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.168545 systemd[1]: Stopped dracut-pre-pivot.service.
May 10 00:43:55.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.170236 systemd[1]: Stopped target initrd.target.
May 10 00:43:55.177081 systemd[1]: Stopped target basic.target.
May 10 00:43:55.178620 systemd[1]: Stopped target ignition-complete.target.
May 10 00:43:55.180284 systemd[1]: Stopped target ignition-diskful.target.
May 10 00:43:55.227290 ignition[1301]: INFO : Ignition 2.14.0
May 10 00:43:55.227290 ignition[1301]: INFO : Stage: umount
May 10 00:43:55.227290 ignition[1301]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:43:55.227290 ignition[1301]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 10 00:43:55.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.181786 systemd[1]: Stopped target initrd-root-device.target.
May 10 00:43:55.241831 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 10 00:43:55.241831 ignition[1301]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 10 00:43:55.241831 ignition[1301]: INFO : PUT result: OK
May 10 00:43:55.241831 ignition[1301]: INFO : umount: umount passed
May 10 00:43:55.241831 ignition[1301]: INFO : Ignition finished successfully
May 10 00:43:55.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.183293 systemd[1]: Stopped target remote-fs.target.
May 10 00:43:55.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.184844 systemd[1]: Stopped target remote-fs-pre.target.
May 10 00:43:55.187577 systemd[1]: Stopped target sysinit.target.
May 10 00:43:55.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.189097 systemd[1]: Stopped target local-fs.target.
May 10 00:43:55.190511 systemd[1]: Stopped target local-fs-pre.target.
May 10 00:43:55.192054 systemd[1]: Stopped target swap.target.
May 10 00:43:55.193442 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 00:43:55.193527 systemd[1]: Stopped dracut-pre-mount.service.
May 10 00:43:55.194897 systemd[1]: Stopped target cryptsetup.target.
May 10 00:43:55.196348 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 00:43:55.196427 systemd[1]: Stopped dracut-initqueue.service.
May 10 00:43:55.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.197846 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 00:43:55.197910 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 10 00:43:55.199256 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 00:43:55.199317 systemd[1]: Stopped ignition-files.service.
May 10 00:43:55.201765 systemd[1]: Stopping ignition-mount.service...
May 10 00:43:55.204027 systemd[1]: Stopping iscsiuio.service...
May 10 00:43:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.215163 systemd[1]: Stopping sysroot-boot.service...
May 10 00:43:55.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.226493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 00:43:55.278000 audit: BPF prog-id=6 op=UNLOAD
May 10 00:43:55.226605 systemd[1]: Stopped systemd-udev-trigger.service.
May 10 00:43:55.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.229545 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 00:43:55.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.229616 systemd[1]: Stopped dracut-pre-trigger.service.
May 10 00:43:55.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.231490 systemd[1]: iscsiuio.service: Deactivated successfully.
May 10 00:43:55.231613 systemd[1]: Stopped iscsiuio.service.
May 10 00:43:55.242843 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 00:43:55.243015 systemd[1]: Stopped ignition-mount.service.
May 10 00:43:55.247279 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 00:43:55.247842 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 00:43:55.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.247912 systemd[1]: Stopped ignition-disks.service.
May 10 00:43:55.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.249617 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 00:43:55.249678 systemd[1]: Stopped ignition-kargs.service.
May 10 00:43:55.251013 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 10 00:43:55.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.251071 systemd[1]: Stopped ignition-fetch.service.
May 10 00:43:55.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.252609 systemd[1]: Stopped target network.target.
May 10 00:43:55.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.253970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 00:43:55.254055 systemd[1]: Stopped ignition-fetch-offline.service.
May 10 00:43:55.255465 systemd[1]: Stopped target paths.target.
May 10 00:43:55.256838 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 00:43:55.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.258040 systemd[1]: Stopped systemd-ask-password-console.path.
May 10 00:43:55.258865 systemd[1]: Stopped target slices.target.
May 10 00:43:55.260295 systemd[1]: Stopped target sockets.target.
May 10 00:43:55.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:43:55.261613 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 00:43:55.261656 systemd[1]: Closed iscsid.socket.
May 10 00:43:55.262944 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 00:43:55.263010 systemd[1]: Closed iscsiuio.socket.
May 10 00:43:55.264390 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 00:43:55.264458 systemd[1]: Stopped ignition-setup.service.
May 10 00:43:55.266130 systemd[1]: Stopping systemd-networkd.service...
May 10 00:43:55.267416 systemd[1]: Stopping systemd-resolved.service...
May 10 00:43:55.270076 systemd-networkd[1107]: eth0: DHCPv6 lease lost
May 10 00:43:55.330000 audit: BPF prog-id=9 op=UNLOAD
May 10 00:43:55.271339 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 00:43:55.271468 systemd[1]: Stopped systemd-networkd.service.
May 10 00:43:55.275596 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 00:43:55.275795 systemd[1]: Stopped systemd-resolved.service.
May 10 00:43:55.277335 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 00:43:55.277384 systemd[1]: Closed systemd-networkd.socket.
May 10 00:43:55.280480 systemd[1]: Stopping network-cleanup.service...
May 10 00:43:55.281226 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 00:43:55.281303 systemd[1]: Stopped parse-ip-for-networkd.service.
May 10 00:43:55.283464 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:43:55.283511 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:43:55.284517 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 00:43:55.284576 systemd[1]: Stopped systemd-modules-load.service.
May 10 00:43:55.285963 systemd[1]: Stopping systemd-udevd.service...
May 10 00:43:55.293896 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 00:43:55.299484 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 00:43:55.299668 systemd[1]: Stopped systemd-udevd.service.
May 10 00:43:55.301957 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 00:43:55.302097 systemd[1]: Stopped network-cleanup.service.
May 10 00:43:55.303319 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 00:43:55.303372 systemd[1]: Closed systemd-udevd-control.socket.
May 10 00:43:55.304804 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 00:43:55.304855 systemd[1]: Closed systemd-udevd-kernel.socket.
May 10 00:43:55.306161 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:43:55.306223 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:43:55.307612 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:43:55.307672 systemd[1]: Stopped dracut-cmdline.service. May 10 00:43:55.309139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:43:55.309198 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:43:55.311571 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:43:55.317076 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:43:55.317177 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 10 00:43:55.319670 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:43:55.319755 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:43:55.321729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:43:55.321778 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:43:55.323566 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 10 00:43:55.325284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:43:55.325407 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:43:55.363568 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:43:55.363782 systemd[1]: Stopped sysroot-boot.service. May 10 00:43:55.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:55.365352 systemd[1]: Reached target initrd-switch-root.target. May 10 00:43:55.366570 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 10 00:43:55.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:55.366652 systemd[1]: Stopped initrd-setup-root.service. May 10 00:43:55.369179 systemd[1]: Starting initrd-switch-root.service... May 10 00:43:55.382477 systemd[1]: Switching root. May 10 00:43:55.401554 iscsid[1112]: iscsid shutting down. May 10 00:43:55.402944 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). May 10 00:43:55.403035 systemd-journald[185]: Journal stopped May 10 00:44:00.986627 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:44:00.986731 kernel: SELinux: Class anon_inode not defined in policy. May 10 00:44:00.986755 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:44:00.986778 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:44:00.986806 kernel: SELinux: policy capability open_perms=1 May 10 00:44:00.986831 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:44:00.986854 kernel: SELinux: policy capability always_check_network=0 May 10 00:44:00.986876 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:44:00.986897 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:44:00.986918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:44:00.986942 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:44:00.986966 systemd[1]: Successfully loaded SELinux policy in 107.513ms. May 10 00:44:00.990950 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.043ms. 
May 10 00:44:00.991022 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:44:00.991045 systemd[1]: Detected virtualization amazon. May 10 00:44:00.991064 systemd[1]: Detected architecture x86-64. May 10 00:44:00.991084 systemd[1]: Detected first boot. May 10 00:44:00.991109 systemd[1]: Initializing machine ID from VM UUID. May 10 00:44:00.991128 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:44:00.991151 systemd[1]: Populated /etc with preset unit settings. May 10 00:44:00.991173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:00.991203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:00.991227 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 10 00:44:00.991250 kernel: kauditd_printk_skb: 48 callbacks suppressed May 10 00:44:00.991270 kernel: audit: type=1334 audit(1746837840.617:88): prog-id=12 op=LOAD May 10 00:44:00.991289 kernel: audit: type=1334 audit(1746837840.617:89): prog-id=3 op=UNLOAD May 10 00:44:00.991307 kernel: audit: type=1334 audit(1746837840.619:90): prog-id=13 op=LOAD May 10 00:44:00.991329 kernel: audit: type=1334 audit(1746837840.624:91): prog-id=14 op=LOAD May 10 00:44:00.991349 kernel: audit: type=1334 audit(1746837840.624:92): prog-id=4 op=UNLOAD May 10 00:44:00.991368 kernel: audit: type=1334 audit(1746837840.624:93): prog-id=5 op=UNLOAD May 10 00:44:00.991388 kernel: audit: type=1334 audit(1746837840.626:94): prog-id=15 op=LOAD May 10 00:44:00.991407 kernel: audit: type=1334 audit(1746837840.626:95): prog-id=12 op=UNLOAD May 10 00:44:00.991426 kernel: audit: type=1334 audit(1746837840.630:96): prog-id=16 op=LOAD May 10 00:44:00.991446 kernel: audit: type=1334 audit(1746837840.632:97): prog-id=17 op=LOAD May 10 00:44:00.991466 systemd[1]: iscsid.service: Deactivated successfully. May 10 00:44:00.991487 systemd[1]: Stopped iscsid.service. May 10 00:44:00.991511 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:44:00.991532 systemd[1]: Stopped initrd-switch-root.service. May 10 00:44:00.991552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:44:00.991573 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:44:00.991595 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:44:00.991616 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 10 00:44:00.991636 systemd[1]: Created slice system-getty.slice. May 10 00:44:00.991672 systemd[1]: Created slice system-modprobe.slice. May 10 00:44:00.991693 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:44:00.991715 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
May 10 00:44:00.991736 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:44:00.991757 systemd[1]: Created slice user.slice. May 10 00:44:00.991786 systemd[1]: Started systemd-ask-password-console.path. May 10 00:44:00.991808 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:44:00.991830 systemd[1]: Set up automount boot.automount. May 10 00:44:00.991863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 00:44:00.991886 systemd[1]: Stopped target initrd-switch-root.target. May 10 00:44:00.991908 systemd[1]: Stopped target initrd-fs.target. May 10 00:44:00.991928 systemd[1]: Stopped target initrd-root-fs.target. May 10 00:44:00.991949 systemd[1]: Reached target integritysetup.target. May 10 00:44:00.991970 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:44:00.992064 systemd[1]: Reached target remote-fs.target. May 10 00:44:00.992085 systemd[1]: Reached target slices.target. May 10 00:44:00.992106 systemd[1]: Reached target swap.target. May 10 00:44:00.992125 systemd[1]: Reached target torcx.target. May 10 00:44:00.992146 systemd[1]: Reached target veritysetup.target. May 10 00:44:00.992168 systemd[1]: Listening on systemd-coredump.socket. May 10 00:44:00.992188 systemd[1]: Listening on systemd-initctl.socket. May 10 00:44:00.992216 systemd[1]: Listening on systemd-networkd.socket. May 10 00:44:00.992237 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:44:00.992257 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:44:00.992281 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:44:00.992303 systemd[1]: Mounting dev-hugepages.mount... May 10 00:44:00.992325 systemd[1]: Mounting dev-mqueue.mount... May 10 00:44:00.992346 systemd[1]: Mounting media.mount... May 10 00:44:00.992367 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:00.992389 systemd[1]: Mounting sys-kernel-debug.mount... 
May 10 00:44:00.992410 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:44:00.992431 systemd[1]: Mounting tmp.mount... May 10 00:44:00.992452 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:44:00.992476 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:00.992498 systemd[1]: Starting kmod-static-nodes.service... May 10 00:44:00.992518 systemd[1]: Starting modprobe@configfs.service... May 10 00:44:00.992538 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:00.992560 systemd[1]: Starting modprobe@drm.service... May 10 00:44:00.992580 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:00.992601 systemd[1]: Starting modprobe@fuse.service... May 10 00:44:00.992622 systemd[1]: Starting modprobe@loop.service... May 10 00:44:00.992643 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:44:00.992667 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:44:00.992688 systemd[1]: Stopped systemd-fsck-root.service. May 10 00:44:00.992708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:44:00.992729 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:44:00.992750 systemd[1]: Stopped systemd-journald.service. May 10 00:44:00.992771 systemd[1]: Starting systemd-journald.service... May 10 00:44:00.992791 systemd[1]: Starting systemd-modules-load.service... May 10 00:44:00.992811 systemd[1]: Starting systemd-network-generator.service... May 10 00:44:00.992832 kernel: fuse: init (API version 7.34) May 10 00:44:00.992856 systemd[1]: Starting systemd-remount-fs.service... May 10 00:44:00.992879 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:44:00.992899 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:44:00.992919 systemd[1]: Stopped verity-setup.service. 
May 10 00:44:00.992941 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:00.992962 systemd[1]: Mounted dev-hugepages.mount. May 10 00:44:00.993039 systemd[1]: Mounted dev-mqueue.mount. May 10 00:44:00.993059 systemd[1]: Mounted media.mount. May 10 00:44:00.993077 systemd[1]: Mounted sys-kernel-debug.mount. May 10 00:44:00.993098 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 00:44:00.993115 kernel: loop: module loaded May 10 00:44:00.993132 systemd[1]: Mounted tmp.mount. May 10 00:44:00.993151 systemd[1]: Finished kmod-static-nodes.service. May 10 00:44:00.993169 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:44:00.993187 systemd[1]: Finished modprobe@configfs.service. May 10 00:44:00.993205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:44:00.993277 systemd-journald[1418]: Journal started May 10 00:44:00.993364 systemd-journald[1418]: Runtime Journal (/run/log/journal/ec29aa5de02668fbc62f58677d4839f8) is 4.8M, max 38.3M, 33.5M free. 
May 10 00:43:56.325000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:43:56.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:43:56.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:43:56.529000 audit: BPF prog-id=10 op=LOAD May 10 00:43:56.529000 audit: BPF prog-id=10 op=UNLOAD May 10 00:43:56.529000 audit: BPF prog-id=11 op=LOAD May 10 00:43:56.529000 audit: BPF prog-id=11 op=UNLOAD May 10 00:43:56.684000 audit[1335]: AVC avc: denied { associate } for pid=1335 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 10 00:43:56.684000 audit[1335]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000024302 a1=c00002a3d8 a2=c000028840 a3=32 items=0 ppid=1318 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:56.684000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:43:56.688000 audit[1335]: AVC avc: denied { associate } for pid=1335 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 10 00:43:56.688000 audit[1335]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000243d9 a2=1ed a3=0 items=2 ppid=1318 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:56.688000 audit: CWD cwd="/" May 10 00:43:56.688000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:56.688000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:56.688000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:44:00.617000 audit: BPF prog-id=12 op=LOAD May 10 00:44:00.617000 audit: BPF prog-id=3 op=UNLOAD May 10 00:44:00.619000 audit: BPF prog-id=13 op=LOAD May 10 00:44:00.624000 audit: BPF prog-id=14 op=LOAD May 10 00:44:00.624000 audit: BPF prog-id=4 op=UNLOAD May 10 00:44:00.624000 audit: BPF prog-id=5 op=UNLOAD May 10 00:44:00.626000 audit: BPF prog-id=15 op=LOAD May 10 00:44:00.626000 audit: BPF prog-id=12 op=UNLOAD May 10 00:44:00.630000 audit: BPF prog-id=16 op=LOAD May 10 00:44:00.632000 audit: BPF prog-id=17 op=LOAD May 10 00:44:00.632000 audit: BPF prog-id=13 op=UNLOAD May 10 00:44:00.632000 audit: BPF prog-id=14 op=UNLOAD May 10 00:44:00.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:00.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.648000 audit: BPF prog-id=15 op=UNLOAD May 10 00:44:00.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:00.876000 audit: BPF prog-id=18 op=LOAD May 10 00:44:00.876000 audit: BPF prog-id=19 op=LOAD May 10 00:44:00.876000 audit: BPF prog-id=20 op=LOAD May 10 00:44:00.876000 audit: BPF prog-id=16 op=UNLOAD May 10 00:44:00.876000 audit: BPF prog-id=17 op=UNLOAD May 10 00:44:00.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.982000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:44:00.982000 audit[1418]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffec8851f10 a2=4000 a3=7ffec8851fac items=0 ppid=1 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:00.982000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 00:44:00.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.616773 systemd[1]: Queued start job for default target multi-user.target. May 10 00:44:01.002666 systemd[1]: Finished modprobe@dm_mod.service. 
May 10 00:44:01.002705 systemd[1]: Started systemd-journald.service. May 10 00:44:00.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:00.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:56.674447 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:00.616788 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. May 10 00:43:56.675013 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:44:00.635204 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 10 00:43:56.675038 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:43:56.675073 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 10 00:43:56.675085 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="skipped missing lower profile" missing profile=oem May 10 00:43:56.675119 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 10 00:43:56.675133 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 10 00:43:56.675324 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 10 00:44:01.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.005944 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 10 00:43:56.675367 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:44:01.006159 systemd[1]: Finished modprobe@drm.service. May 10 00:43:56.675380 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:44:01.008081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:56.676780 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 10 00:44:01.008254 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:56.676851 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 10 00:43:56.676883 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 10 00:43:56.676908 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 10 00:43:56.676941 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 10 00:43:56.676964 /usr/lib/systemd/system-generators/torcx-generator[1335]: 
time="2025-05-10T00:43:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 10 00:43:59.950133 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:43:59.950398 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:01.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:59.950509 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:01.012468 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:43:59.950699 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:01.012659 systemd[1]: Finished modprobe@fuse.service. May 10 00:43:59.950746 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 10 00:44:01.014427 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:59.950810 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2025-05-10T00:43:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 10 00:44:01.014613 systemd[1]: Finished modprobe@loop.service. May 10 00:44:01.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:01.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.017945 systemd[1]: Finished systemd-modules-load.service. May 10 00:44:01.019559 systemd[1]: Finished systemd-network-generator.service. May 10 00:44:01.021604 systemd[1]: Finished flatcar-tmpfiles.service. May 10 00:44:01.023412 systemd[1]: Finished systemd-remount-fs.service. May 10 00:44:01.025442 systemd[1]: Reached target network-pre.target. May 10 00:44:01.029669 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:44:01.036591 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:44:01.038451 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:44:01.044895 systemd[1]: Starting systemd-hwdb-update.service... 
May 10 00:44:01.050721 systemd[1]: Starting systemd-journal-flush.service... May 10 00:44:01.052654 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:44:01.054676 systemd[1]: Starting systemd-random-seed.service... May 10 00:44:01.056157 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:44:01.060211 systemd[1]: Starting systemd-sysctl.service... May 10 00:44:01.063940 systemd[1]: Starting systemd-sysusers.service... May 10 00:44:01.067705 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:44:01.073752 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:44:01.082341 systemd-journald[1418]: Time spent on flushing to /var/log/journal/ec29aa5de02668fbc62f58677d4839f8 is 45.567ms for 1231 entries. May 10 00:44:01.082341 systemd-journald[1418]: System Journal (/var/log/journal/ec29aa5de02668fbc62f58677d4839f8) is 8.0M, max 195.6M, 187.6M free. May 10 00:44:01.136740 systemd-journald[1418]: Received client request to flush runtime journal. May 10 00:44:01.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.086146 systemd[1]: Finished systemd-random-seed.service. May 10 00:44:01.138169 udevadm[1452]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:44:01.087232 systemd[1]: Reached target first-boot-complete.target. May 10 00:44:01.100096 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:44:01.102753 systemd[1]: Starting systemd-udev-settle.service... May 10 00:44:01.132236 systemd[1]: Finished systemd-sysctl.service. May 10 00:44:01.137977 systemd[1]: Finished systemd-journal-flush.service. May 10 00:44:01.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.176318 systemd[1]: Finished systemd-sysusers.service. May 10 00:44:01.183531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:44:01.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:01.316642 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:44:02.201277 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:44:02.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:02.200000 audit: BPF prog-id=21 op=LOAD May 10 00:44:02.200000 audit: BPF prog-id=22 op=LOAD May 10 00:44:02.200000 audit: BPF prog-id=7 op=UNLOAD May 10 00:44:02.200000 audit: BPF prog-id=8 op=UNLOAD May 10 00:44:02.203432 systemd[1]: Starting systemd-udevd.service... May 10 00:44:02.223182 systemd-udevd[1457]: Using default interface naming scheme 'v252'. May 10 00:44:02.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.289000 audit: BPF prog-id=23 op=LOAD May 10 00:44:02.287356 systemd[1]: Started systemd-udevd.service. May 10 00:44:02.292129 systemd[1]: Starting systemd-networkd.service... May 10 00:44:02.316000 audit: BPF prog-id=24 op=LOAD May 10 00:44:02.316000 audit: BPF prog-id=25 op=LOAD May 10 00:44:02.316000 audit: BPF prog-id=26 op=LOAD May 10 00:44:02.319840 systemd[1]: Starting systemd-userdbd.service... May 10 00:44:02.340497 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 00:44:02.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.389292 systemd[1]: Started systemd-userdbd.service. May 10 00:44:02.392102 (udev-worker)[1464]: Network interface NamePolicy= disabled on kernel command line. 
May 10 00:44:02.438016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 10 00:44:02.450510 kernel: ACPI: button: Power Button [PWRF] May 10 00:44:02.455013 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 10 00:44:02.447000 audit[1471]: AVC avc: denied { confidentiality } for pid=1471 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:44:02.469033 kernel: ACPI: button: Sleep Button [SLPF] May 10 00:44:02.447000 audit[1471]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55717c8404e0 a1=338ac a2=7f7fc0dc4bc5 a3=5 items=110 ppid=1457 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:02.447000 audit: CWD cwd="/" May 10 00:44:02.447000 audit: PATH item=0 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=1 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=2 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=3 name=(null) inode=14112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=4 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=5 name=(null) inode=14113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=6 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=7 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=8 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=9 name=(null) inode=14115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=10 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=11 name=(null) inode=14116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=12 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=13 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=14 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=15 name=(null) inode=14118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=16 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=17 name=(null) inode=14119 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=18 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=19 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=20 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=21 name=(null) inode=14121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=22 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: 
PATH item=23 name=(null) inode=14122 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=24 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=25 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=26 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=27 name=(null) inode=14124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=28 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=29 name=(null) inode=14125 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=30 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=31 name=(null) inode=14126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=32 name=(null) inode=14126 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=33 name=(null) inode=14127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=34 name=(null) inode=14126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=35 name=(null) inode=14128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=36 name=(null) inode=14126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=37 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=38 name=(null) inode=14126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=39 name=(null) inode=14130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=40 name=(null) inode=14126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=41 name=(null) inode=14131 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=42 name=(null) inode=14111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=43 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=44 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=45 name=(null) inode=14133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=46 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=47 name=(null) inode=14134 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=48 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=49 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=50 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=51 name=(null) inode=14136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=52 name=(null) inode=14132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=53 name=(null) inode=14137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=54 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=55 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=56 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=57 name=(null) inode=14139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=58 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=59 name=(null) inode=14140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=60 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=61 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=62 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=63 name=(null) inode=14142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=64 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=65 name=(null) inode=14143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=66 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=67 name=(null) inode=14144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=68 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=69 name=(null) inode=14145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=70 name=(null) inode=14141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=71 name=(null) inode=14146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=72 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=73 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=74 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=75 name=(null) inode=14148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=76 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=77 name=(null) inode=14149 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:44:02.447000 audit: PATH item=78 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=79 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=80 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=81 name=(null) inode=14151 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=82 name=(null) inode=14147 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=83 name=(null) inode=14152 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=84 name=(null) inode=14138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=85 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=86 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=87 
name=(null) inode=14154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=88 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=89 name=(null) inode=14155 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=90 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=91 name=(null) inode=14156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=92 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=93 name=(null) inode=14157 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=94 name=(null) inode=14153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=95 name=(null) inode=14158 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=96 name=(null) inode=14138 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=97 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=98 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=99 name=(null) inode=14160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=100 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=101 name=(null) inode=14161 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=102 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=103 name=(null) inode=14162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=104 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=105 name=(null) inode=14163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=106 name=(null) inode=14159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=107 name=(null) inode=14164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PATH item=109 name=(null) inode=14165 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:02.447000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:44:02.506081 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 10 00:44:02.524032 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 10 00:44:02.562307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:44:02.562343 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:44:02.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.533361 systemd-networkd[1465]: lo: Link UP May 10 00:44:02.533369 systemd-networkd[1465]: lo: Gained carrier May 10 00:44:02.534103 systemd-networkd[1465]: Enumeration completed May 10 00:44:02.534859 systemd[1]: Started systemd-networkd.service. 
May 10 00:44:02.537510 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:44:02.539680 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:44:02.550063 systemd-networkd[1465]: eth0: Link UP May 10 00:44:02.550285 systemd-networkd[1465]: eth0: Gained carrier May 10 00:44:02.557201 systemd-networkd[1465]: eth0: DHCPv4 address 172.31.16.44/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 10 00:44:02.657784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:44:02.668755 systemd[1]: Finished systemd-udev-settle.service. May 10 00:44:02.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.671293 systemd[1]: Starting lvm2-activation-early.service... May 10 00:44:02.748338 lvm[1571]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:44:02.774222 systemd[1]: Finished lvm2-activation-early.service. May 10 00:44:02.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.775043 systemd[1]: Reached target cryptsetup.target. May 10 00:44:02.777214 systemd[1]: Starting lvm2-activation.service... May 10 00:44:02.782679 lvm[1572]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:44:02.804462 systemd[1]: Finished lvm2-activation.service. May 10 00:44:02.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.805278 systemd[1]: Reached target local-fs-pre.target. 
May 10 00:44:02.806297 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:44:02.806339 systemd[1]: Reached target local-fs.target. May 10 00:44:02.807051 systemd[1]: Reached target machines.target. May 10 00:44:02.809489 systemd[1]: Starting ldconfig.service... May 10 00:44:02.813921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:02.814021 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:02.817477 systemd[1]: Starting systemd-boot-update.service... May 10 00:44:02.819732 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:44:02.822453 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:44:02.826727 systemd[1]: Starting systemd-sysext.service... May 10 00:44:02.834378 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1574 (bootctl) May 10 00:44:02.836174 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:44:02.853435 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:44:02.860025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:44:02.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:02.862617 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:44:02.862862 systemd[1]: Unmounted usr-share-oem.mount. 
May 10 00:44:02.884272 kernel: loop0: detected capacity change from 0 to 205544
May 10 00:44:03.006888 systemd-fsck[1583]: fsck.fat 4.2 (2021-01-31)
May 10 00:44:03.006888 systemd-fsck[1583]: /dev/nvme0n1p1: 790 files, 120688/258078 clusters
May 10 00:44:03.009623 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 10 00:44:03.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.012074 systemd[1]: Mounting boot.mount...
May 10 00:44:03.037982 systemd[1]: Mounted boot.mount.
May 10 00:44:03.071170 systemd[1]: Finished systemd-boot-update.service.
May 10 00:44:03.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.100015 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 00:44:03.130023 kernel: loop1: detected capacity change from 0 to 205544
May 10 00:44:03.152503 (sd-sysext)[1598]: Using extensions 'kubernetes'.
May 10 00:44:03.153329 (sd-sysext)[1598]: Merged extensions into '/usr'.
May 10 00:44:03.164540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 00:44:03.167406 systemd[1]: Finished systemd-machine-id-commit.service.
May 10 00:44:03.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.170626 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:44:03.172442 systemd[1]: Mounting usr-share-oem.mount...
May 10 00:44:03.173849 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:44:03.175854 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:44:03.178567 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:44:03.182672 systemd[1]: Starting modprobe@loop.service...
May 10 00:44:03.183744 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:44:03.184136 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.184333 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:44:03.185737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:44:03.186149 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:44:03.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.190062 systemd[1]: Mounted usr-share-oem.mount.
May 10 00:44:03.191007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:44:03.191175 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:44:03.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.192291 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:44:03.192453 systemd[1]: Finished modprobe@loop.service.
May 10 00:44:03.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.193581 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:44:03.193742 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:44:03.195028 systemd[1]: Finished systemd-sysext.service.
May 10 00:44:03.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.197127 systemd[1]: Starting ensure-sysext.service...
May 10 00:44:03.199440 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 10 00:44:03.209956 systemd[1]: Reloading.
May 10 00:44:03.220587 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 10 00:44:03.223428 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 10 00:44:03.228787 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 10 00:44:03.285012 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2025-05-10T00:44:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:44:03.291950 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2025-05-10T00:44:03Z" level=info msg="torcx already run"
May 10 00:44:03.453536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:44:03.453822 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:44:03.481291 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:44:03.553000 audit: BPF prog-id=27 op=LOAD
May 10 00:44:03.553000 audit: BPF prog-id=23 op=UNLOAD
May 10 00:44:03.556000 audit: BPF prog-id=28 op=LOAD
May 10 00:44:03.556000 audit: BPF prog-id=24 op=UNLOAD
May 10 00:44:03.556000 audit: BPF prog-id=29 op=LOAD
May 10 00:44:03.556000 audit: BPF prog-id=30 op=LOAD
May 10 00:44:03.556000 audit: BPF prog-id=25 op=UNLOAD
May 10 00:44:03.556000 audit: BPF prog-id=26 op=UNLOAD
May 10 00:44:03.558000 audit: BPF prog-id=31 op=LOAD
May 10 00:44:03.558000 audit: BPF prog-id=18 op=UNLOAD
May 10 00:44:03.558000 audit: BPF prog-id=32 op=LOAD
May 10 00:44:03.558000 audit: BPF prog-id=33 op=LOAD
May 10 00:44:03.558000 audit: BPF prog-id=19 op=UNLOAD
May 10 00:44:03.558000 audit: BPF prog-id=20 op=UNLOAD
May 10 00:44:03.559000 audit: BPF prog-id=34 op=LOAD
May 10 00:44:03.559000 audit: BPF prog-id=35 op=LOAD
May 10 00:44:03.559000 audit: BPF prog-id=21 op=UNLOAD
May 10 00:44:03.559000 audit: BPF prog-id=22 op=UNLOAD
May 10 00:44:03.565637 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 10 00:44:03.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.571837 systemd[1]: Starting audit-rules.service...
May 10 00:44:03.574233 systemd[1]: Starting clean-ca-certificates.service...
May 10 00:44:03.576910 systemd[1]: Starting systemd-journal-catalog-update.service...
May 10 00:44:03.580000 audit: BPF prog-id=36 op=LOAD
May 10 00:44:03.583000 audit: BPF prog-id=37 op=LOAD
May 10 00:44:03.583261 systemd[1]: Starting systemd-resolved.service...
May 10 00:44:03.586609 systemd[1]: Starting systemd-timesyncd.service...
May 10 00:44:03.591707 systemd[1]: Starting systemd-update-utmp.service...
May 10 00:44:03.593380 systemd[1]: Finished clean-ca-certificates.service.
May 10 00:44:03.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.600389 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:44:03.605685 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:44:03.607682 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:44:03.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.612421 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:44:03.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.615359 systemd[1]: Starting modprobe@loop.service...
May 10 00:44:03.616324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:44:03.616534 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.616738 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:44:03.618665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:44:03.618878 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:44:03.624304 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:44:03.624471 systemd[1]: Finished modprobe@loop.service.
May 10 00:44:03.629713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:44:03.633717 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:44:03.636586 systemd[1]: Starting modprobe@loop.service...
May 10 00:44:03.639534 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:44:03.639750 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.639906 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:44:03.640967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:44:03.641208 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:44:03.642641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:44:03.642838 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:44:03.644619 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:44:03.644785 systemd[1]: Finished modprobe@loop.service.
May 10 00:44:03.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.655000 audit[1687]: SYSTEM_BOOT pid=1687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.648913 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:44:03.649099 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:44:03.653453 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:44:03.658243 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:44:03.660967 systemd[1]: Starting modprobe@drm.service...
May 10 00:44:03.666157 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:44:03.668720 systemd[1]: Starting modprobe@loop.service...
May 10 00:44:03.669640 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:44:03.669863 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.670112 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:44:03.675439 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:44:03.675654 systemd[1]: Finished modprobe@drm.service.
May 10 00:44:03.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.682700 systemd[1]: Finished ensure-sysext.service.
May 10 00:44:03.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.684599 systemd[1]: Finished systemd-update-utmp.service.
May 10 00:44:03.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.688520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:44:03.688697 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:44:03.689520 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:44:03.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.689874 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:44:03.690095 systemd[1]: Finished modprobe@loop.service.
May 10 00:44:03.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.696690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:44:03.696867 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:44:03.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.697942 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:44:03.727467 systemd[1]: Finished systemd-journal-catalog-update.service.
May 10 00:44:03.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:03.777582 systemd-resolved[1684]: Positive Trust Anchors:
May 10 00:44:03.777610 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:44:03.777649 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:44:03.780000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 10 00:44:03.780000 audit[1710]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde1354a60 a2=420 a3=0 items=0 ppid=1681 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:44:03.780000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 10 00:44:03.783134 augenrules[1710]: No rules
May 10 00:44:03.783689 systemd[1]: Finished audit-rules.service.
May 10 00:44:03.793940 systemd[1]: Started systemd-timesyncd.service.
May 10 00:44:03.794720 systemd[1]: Reached target time-set.target.
May 10 00:44:03.820459 systemd-resolved[1684]: Defaulting to hostname 'linux'.
May 10 00:44:03.822406 systemd[1]: Started systemd-resolved.service.
May 10 00:44:03.822959 systemd[1]: Reached target network.target.
May 10 00:44:03.823415 systemd[1]: Reached target nss-lookup.target.
May 10 00:44:03.862916 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:44:03.862940 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:44:03.903426 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 00:44:03.913148 systemd[1]: Finished ldconfig.service.
May 10 00:44:03.914861 systemd[1]: Starting systemd-update-done.service...
May 10 00:44:03.922521 systemd[1]: Finished systemd-update-done.service.
May 10 00:44:03.923073 systemd[1]: Reached target sysinit.target.
May 10 00:44:03.923541 systemd[1]: Started motdgen.path.
May 10 00:44:03.924165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 10 00:44:03.924689 systemd[1]: Started logrotate.timer.
May 10 00:44:03.925161 systemd[1]: Started mdadm.timer.
May 10 00:44:03.925514 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 10 00:44:03.925868 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 10 00:44:03.925905 systemd[1]: Reached target paths.target.
May 10 00:44:03.926265 systemd[1]: Reached target timers.target.
May 10 00:44:03.926910 systemd[1]: Listening on dbus.socket.
May 10 00:44:03.928545 systemd[1]: Starting docker.socket...
May 10 00:44:03.932695 systemd[1]: Listening on sshd.socket.
May 10 00:44:03.933277 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.933820 systemd[1]: Listening on docker.socket.
May 10 00:44:03.934298 systemd[1]: Reached target sockets.target.
May 10 00:44:03.934660 systemd[1]: Reached target basic.target.
May 10 00:44:03.935053 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 10 00:44:03.935080 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 10 00:44:03.936469 systemd[1]: Starting containerd.service...
May 10 00:44:03.938547 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 10 00:44:03.941207 systemd[1]: Starting dbus.service...
May 10 00:44:03.943687 systemd[1]: Starting enable-oem-cloudinit.service...
May 10 00:44:03.946122 systemd[1]: Starting extend-filesystems.service...
May 10 00:44:03.948405 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 10 00:44:03.951535 systemd[1]: Starting motdgen.service...
May 10 00:44:03.958453 systemd[1]: Starting prepare-helm.service...
May 10 00:44:03.971979 jq[1722]: false
May 10 00:44:03.963826 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 10 00:44:03.967683 systemd[1]: Starting sshd-keygen.service...
May 10 00:44:03.973706 systemd[1]: Starting systemd-logind.service...
May 10 00:44:03.975022 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:44:03.975113 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 10 00:44:03.975773 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 00:44:03.977411 systemd[1]: Starting update-engine.service...
May 10 00:44:03.982899 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 10 00:44:03.987848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 00:44:03.990868 jq[1732]: true
May 10 00:44:03.988141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 10 00:44:04.750161 systemd-resolved[1684]: Clock change detected. Flushing caches.
May 10 00:44:04.750309 systemd-timesyncd[1685]: Contacted time server 23.186.168.129:123 (0.flatcar.pool.ntp.org).
May 10 00:44:04.750374 systemd-timesyncd[1685]: Initial clock synchronization to Sat 2025-05-10 00:44:04.750103 UTC.
May 10 00:44:04.751340 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 00:44:04.751547 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 10 00:44:04.762362 tar[1734]: linux-amd64/helm
May 10 00:44:04.781872 jq[1736]: true
May 10 00:44:04.807721 dbus-daemon[1721]: [system] SELinux support is enabled
May 10 00:44:04.809023 systemd[1]: Started dbus.service.
May 10 00:44:04.815917 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 00:44:04.815951 systemd[1]: Reached target system-config.target.
May 10 00:44:04.819056 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 00:44:04.819086 systemd[1]: Reached target user-config.target.
May 10 00:44:04.824634 dbus-daemon[1721]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1465 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 10 00:44:04.831102 systemd[1]: Starting systemd-hostnamed.service...
May 10 00:44:04.836086 systemd[1]: motdgen.service: Deactivated successfully.
May 10 00:44:04.836308 systemd[1]: Finished motdgen.service.
May 10 00:44:04.846206 systemd-networkd[1465]: eth0: Gained IPv6LL
May 10 00:44:04.848636 extend-filesystems[1723]: Found loop1
May 10 00:44:04.848716 systemd[1]: Finished systemd-networkd-wait-online.service.
May 10 00:44:04.849628 systemd[1]: Reached target network-online.target.
May 10 00:44:04.850563 extend-filesystems[1723]: Found nvme0n1
May 10 00:44:04.851596 extend-filesystems[1723]: Found nvme0n1p1
May 10 00:44:04.851596 extend-filesystems[1723]: Found nvme0n1p2
May 10 00:44:04.851596 extend-filesystems[1723]: Found nvme0n1p3
May 10 00:44:04.851596 extend-filesystems[1723]: Found usr
May 10 00:44:04.860215 extend-filesystems[1723]: Found nvme0n1p4
May 10 00:44:04.860215 extend-filesystems[1723]: Found nvme0n1p6
May 10 00:44:04.860215 extend-filesystems[1723]: Found nvme0n1p7
May 10 00:44:04.860215 extend-filesystems[1723]: Found nvme0n1p9
May 10 00:44:04.860215 extend-filesystems[1723]: Checking size of /dev/nvme0n1p9
May 10 00:44:04.852516 systemd[1]: Started amazon-ssm-agent.service.
May 10 00:44:05.015310 bash[1781]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:44:05.015414 env[1739]: time="2025-05-10T00:44:04.993941917Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 10 00:44:04.857449 systemd[1]: Starting kubelet.service...
May 10 00:44:05.035099 update_engine[1731]: I0510 00:44:05.021772 1731 main.cc:92] Flatcar Update Engine starting
May 10 00:44:04.862491 systemd[1]: Started nvidia.service.
May 10 00:44:05.010891 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 10 00:44:05.037869 systemd[1]: Started update-engine.service.
May 10 00:44:05.041200 systemd[1]: Started locksmithd.service.
May 10 00:44:05.045910 update_engine[1731]: I0510 00:44:05.042660 1731 update_check_scheduler.cc:74] Next update check in 3m40s
May 10 00:44:05.059532 extend-filesystems[1723]: Resized partition /dev/nvme0n1p9
May 10 00:44:05.085398 extend-filesystems[1789]: resize2fs 1.46.5 (30-Dec-2021)
May 10 00:44:05.102065 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 10 00:44:05.153707 env[1739]: time="2025-05-10T00:44:05.153641849Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:44:05.153863 env[1739]: time="2025-05-10T00:44:05.153844322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.172679 env[1739]: time="2025-05-10T00:44:05.172398964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:44:05.172679 env[1739]: time="2025-05-10T00:44:05.172451020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.172846 env[1739]: time="2025-05-10T00:44:05.172751434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:44:05.172846 env[1739]: time="2025-05-10T00:44:05.172775067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.172846 env[1739]: time="2025-05-10T00:44:05.172793626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 10 00:44:05.172846 env[1739]: time="2025-05-10T00:44:05.172806809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.173015 env[1739]: time="2025-05-10T00:44:05.172909636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.174547 amazon-ssm-agent[1767]: 2025/05/10 00:44:05 Failed to load instance info from vault. RegistrationKey does not exist.
May 10 00:44:05.175900 amazon-ssm-agent[1767]: Initializing new seelog logger
May 10 00:44:05.176180 amazon-ssm-agent[1767]: New Seelog Logger Creation Complete
May 10 00:44:05.176180 amazon-ssm-agent[1767]: 2025/05/10 00:44:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 10 00:44:05.176180 amazon-ssm-agent[1767]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 10 00:44:05.176382 amazon-ssm-agent[1767]: 2025/05/10 00:44:05 processing appconfig overrides
May 10 00:44:05.178860 env[1739]: time="2025-05-10T00:44:05.178818858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:44:05.179139 env[1739]: time="2025-05-10T00:44:05.179107052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:44:05.179202 env[1739]: time="2025-05-10T00:44:05.179142325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:44:05.179266 env[1739]: time="2025-05-10T00:44:05.179243413Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 10 00:44:05.179323 env[1739]: time="2025-05-10T00:44:05.179263379Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:44:05.226087 systemd-logind[1730]: Watching system buttons on /dev/input/event1 (Power Button)
May 10 00:44:05.226122 systemd-logind[1730]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 10 00:44:05.226146 systemd-logind[1730]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 00:44:05.226404 systemd-logind[1730]: New seat seat0.
May 10 00:44:05.228647 systemd[1]: Started systemd-logind.service.
May 10 00:44:05.237381 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 10 00:44:05.257901 extend-filesystems[1789]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 10 00:44:05.257901 extend-filesystems[1789]: old_desc_blocks = 1, new_desc_blocks = 1 May 10 00:44:05.257901 extend-filesystems[1789]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 10 00:44:05.275028 extend-filesystems[1723]: Resized filesystem in /dev/nvme0n1p9 May 10 00:44:05.263122 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260643605Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260711797Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260736641Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260799060Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260820331Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260866954Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260887162Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.260991960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261012216Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261032329Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261073169Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261100196Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261299856Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:44:05.278236 env[1739]: time="2025-05-10T00:44:05.261456029Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:44:05.263329 systemd[1]: Finished extend-filesystems.service. May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.261926941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.261961230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.261992337Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262080056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262099851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262116523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262404381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262427626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262459988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262482991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262504001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.262541091Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.265775705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.265806270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:44:05.285204 env[1739]: time="2025-05-10T00:44:05.265824882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:44:05.271399 systemd[1]: Started containerd.service. 
May 10 00:44:05.285851 env[1739]: time="2025-05-10T00:44:05.265855593Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:44:05.285851 env[1739]: time="2025-05-10T00:44:05.265878232Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:44:05.285851 env[1739]: time="2025-05-10T00:44:05.265895942Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:44:05.285851 env[1739]: time="2025-05-10T00:44:05.265921821Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:44:05.285851 env[1739]: time="2025-05-10T00:44:05.265966347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.268617566Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d 
NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.268706390Z" level=info msg="Connect containerd service" May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.268760101Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.270534588Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.271208754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.271266424Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 10 00:44:05.290098 env[1739]: time="2025-05-10T00:44:05.272161256Z" level=info msg="containerd successfully booted in 0.279658s" May 10 00:44:05.308063 env[1739]: time="2025-05-10T00:44:05.306413419Z" level=info msg="Start subscribing containerd event" May 10 00:44:05.308843 dbus-daemon[1721]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:44:05.309021 systemd[1]: Started systemd-hostnamed.service. May 10 00:44:05.311269 dbus-daemon[1721]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1756 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:44:05.315067 systemd[1]: Starting polkit.service... May 10 00:44:05.333913 env[1739]: time="2025-05-10T00:44:05.333876489Z" level=info msg="Start recovering state" May 10 00:44:05.354660 polkitd[1832]: Started polkitd version 121 May 10 00:44:05.363647 env[1739]: time="2025-05-10T00:44:05.363581703Z" level=info msg="Start event monitor" May 10 00:44:05.376275 env[1739]: time="2025-05-10T00:44:05.376216985Z" level=info msg="Start snapshots syncer" May 10 00:44:05.383742 polkitd[1832]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:44:05.388377 env[1739]: time="2025-05-10T00:44:05.378016567Z" level=info msg="Start cni network conf syncer for default" May 10 00:44:05.388637 env[1739]: time="2025-05-10T00:44:05.388597122Z" level=info msg="Start streaming server" May 10 00:44:05.389115 polkitd[1832]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:44:05.396956 polkitd[1832]: Finished loading, compiling and executing 2 rules May 10 00:44:05.397726 dbus-daemon[1721]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:44:05.397910 systemd[1]: Started polkit.service. 
May 10 00:44:05.400308 polkitd[1832]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:44:05.410892 systemd[1]: nvidia.service: Deactivated successfully. May 10 00:44:05.424905 systemd-hostnamed[1756]: Hostname set to (transient) May 10 00:44:05.425022 systemd-resolved[1684]: System hostname changed to 'ip-172-31-16-44'. May 10 00:44:05.633096 coreos-metadata[1720]: May 10 00:44:05.632 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 10 00:44:05.637210 coreos-metadata[1720]: May 10 00:44:05.637 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 May 10 00:44:05.637767 coreos-metadata[1720]: May 10 00:44:05.637 INFO Fetch successful May 10 00:44:05.637870 coreos-metadata[1720]: May 10 00:44:05.637 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 00:44:05.638392 coreos-metadata[1720]: May 10 00:44:05.638 INFO Fetch successful May 10 00:44:05.641790 unknown[1720]: wrote ssh authorized keys file for user: core May 10 00:44:05.666246 update-ssh-keys[1889]: Updated "/home/core/.ssh/authorized_keys" May 10 00:44:05.666771 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
May 10 00:44:05.859787 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Create new startup processor May 10 00:44:05.882540 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [LongRunningPluginsManager] registered plugins: {} May 10 00:44:05.886205 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing bookkeeping folders May 10 00:44:05.886363 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO removing the completed state files May 10 00:44:05.886445 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing bookkeeping folders for long running plugins May 10 00:44:05.886533 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing replies folder for MDS reply requests that couldn't reach the service May 10 00:44:05.887890 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing healthcheck folders for long running plugins May 10 00:44:05.888048 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing locations for inventory plugin May 10 00:44:05.888143 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing default location for custom inventory May 10 00:44:05.888219 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing default location for file inventory May 10 00:44:05.888298 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Initializing default location for role inventory May 10 00:44:05.888381 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Init the cloudwatchlogs publisher May 10 00:44:05.888460 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:runPowerShellScript May 10 00:44:05.888530 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:configurePackage May 10 00:44:05.888612 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:downloadContent May 10 00:44:05.888707 
amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:runDocument May 10 00:44:05.890554 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:softwareInventory May 10 00:44:05.890678 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:configureDocker May 10 00:44:05.890763 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:runDockerAction May 10 00:44:05.890836 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:refreshAssociation May 10 00:44:05.890916 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform independent plugin aws:updateSsmAgent May 10 00:44:05.894087 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Successfully loaded platform dependent plugin aws:runShellScript May 10 00:44:05.894211 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 May 10 00:44:05.894287 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO OS: linux, Arch: amd64 May 10 00:44:05.896586 amazon-ssm-agent[1767]: datastore file /var/lib/amazon/ssm/i-095c038852ef5a445/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute May 10 00:44:05.963710 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] Starting session document processing engine... 
May 10 00:44:06.058669 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] [EngineProcessor] Starting May 10 00:44:06.115674 tar[1734]: linux-amd64/LICENSE May 10 00:44:06.116247 tar[1734]: linux-amd64/README.md May 10 00:44:06.123085 systemd[1]: Finished prepare-helm.service. May 10 00:44:06.153086 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. May 10 00:44:06.248790 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-095c038852ef5a445, requestId: 6385d7b8-83f9-41ab-afbf-f19a997f0d87 May 10 00:44:06.291669 locksmithd[1788]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:44:06.343548 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] Starting document processing engine... May 10 00:44:06.438521 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [EngineProcessor] Starting May 10 00:44:06.533577 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing May 10 00:44:06.630181 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] Starting message polling May 10 00:44:06.725724 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] Starting send replies to MDS May 10 00:44:06.822026 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [instanceID=i-095c038852ef5a445] Starting association polling May 10 00:44:06.848517 sshd_keygen[1747]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:44:06.874105 systemd[1]: Finished sshd-keygen.service. May 10 00:44:06.876613 systemd[1]: Starting issuegen.service... May 10 00:44:06.883491 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:44:06.883717 systemd[1]: Finished issuegen.service. 
May 10 00:44:06.886753 systemd[1]: Starting systemd-user-sessions.service... May 10 00:44:06.896534 systemd[1]: Finished systemd-user-sessions.service. May 10 00:44:06.898998 systemd[1]: Started getty@tty1.service. May 10 00:44:06.901545 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:44:06.902615 systemd[1]: Reached target getty.target. May 10 00:44:06.917898 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting May 10 00:44:07.014020 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [Association] Launching response handler May 10 00:44:07.110246 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing May 10 00:44:07.206740 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service May 10 00:44:07.303471 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized May 10 00:44:07.395623 systemd[1]: Started kubelet.service. May 10 00:44:07.397203 systemd[1]: Reached target multi-user.target. May 10 00:44:07.399300 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:44:07.402685 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] listening reply. May 10 00:44:07.409558 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:44:07.409718 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:44:07.410570 systemd[1]: Startup finished in 595ms (kernel) + 7.438s (initrd) + 10.469s (userspace) = 18.504s. May 10 00:44:07.499841 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [HealthCheck] HealthCheck reporting agent health. May 10 00:44:07.597177 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [OfflineService] Starting document processing engine... 
May 10 00:44:07.694573 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [OfflineService] [EngineProcessor] Starting May 10 00:44:07.792225 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [OfflineService] [EngineProcessor] Initial processing May 10 00:44:07.890146 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [OfflineService] Starting message polling May 10 00:44:07.988181 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [OfflineService] Starting send replies to MDS May 10 00:44:08.086437 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [LongRunningPluginsManager] starting long running plugin manager May 10 00:44:08.184909 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute May 10 00:44:08.283449 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck May 10 00:44:08.382305 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [StartupProcessor] Executing startup processor tasks May 10 00:44:08.481361 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running May 10 00:44:08.574903 kubelet[1932]: E0510 00:44:08.574849 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:08.576471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:08.576614 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:08.576847 systemd[1]: kubelet.service: Consumed 1.084s CPU time. 
May 10 00:44:08.580617 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk May 10 00:44:08.680175 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 May 10 00:44:08.779954 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-095c038852ef5a445?role=subscribe&stream=input May 10 00:44:08.879971 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-095c038852ef5a445?role=subscribe&stream=input May 10 00:44:08.980163 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] Starting receiving message from control channel May 10 00:44:09.080441 amazon-ssm-agent[1767]: 2025-05-10 00:44:05 INFO [MessageGatewayService] [EngineProcessor] Initial processing May 10 00:44:14.496258 systemd[1]: Created slice system-sshd.slice. May 10 00:44:14.497471 systemd[1]: Started sshd@0-172.31.16.44:22-139.178.89.65:54048.service. May 10 00:44:14.675369 sshd[1939]: Accepted publickey for core from 139.178.89.65 port 54048 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:44:14.678589 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:14.692796 systemd[1]: Created slice user-500.slice. May 10 00:44:14.694277 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:44:14.698724 systemd-logind[1730]: New session 1 of user core. May 10 00:44:14.706908 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:44:14.708928 systemd[1]: Starting user@500.service... 
May 10 00:44:14.713817 (systemd)[1942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:14.822404 systemd[1942]: Queued start job for default target default.target. May 10 00:44:14.822956 systemd[1942]: Reached target paths.target. May 10 00:44:14.822982 systemd[1942]: Reached target sockets.target. May 10 00:44:14.822996 systemd[1942]: Reached target timers.target. May 10 00:44:14.823008 systemd[1942]: Reached target basic.target. May 10 00:44:14.823123 systemd[1]: Started user@500.service. May 10 00:44:14.824145 systemd[1]: Started session-1.scope. May 10 00:44:14.824646 systemd[1942]: Reached target default.target. May 10 00:44:14.824930 systemd[1942]: Startup finished in 103ms. May 10 00:44:14.969146 systemd[1]: Started sshd@1-172.31.16.44:22-139.178.89.65:54062.service. May 10 00:44:15.131627 sshd[1951]: Accepted publickey for core from 139.178.89.65 port 54062 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:44:15.133081 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:15.137788 systemd-logind[1730]: New session 2 of user core. May 10 00:44:15.138237 systemd[1]: Started session-2.scope. May 10 00:44:15.267928 sshd[1951]: pam_unix(sshd:session): session closed for user core May 10 00:44:15.270538 systemd[1]: sshd@1-172.31.16.44:22-139.178.89.65:54062.service: Deactivated successfully. May 10 00:44:15.271243 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:44:15.271905 systemd-logind[1730]: Session 2 logged out. Waiting for processes to exit. May 10 00:44:15.272822 systemd-logind[1730]: Removed session 2. May 10 00:44:15.292399 systemd[1]: Started sshd@2-172.31.16.44:22-139.178.89.65:54078.service. 
May 10 00:44:15.447477 sshd[1957]: Accepted publickey for core from 139.178.89.65 port 54078 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:44:15.448649 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:15.453437 systemd-logind[1730]: New session 3 of user core. May 10 00:44:15.453995 systemd[1]: Started session-3.scope. May 10 00:44:15.574894 sshd[1957]: pam_unix(sshd:session): session closed for user core May 10 00:44:15.578114 systemd[1]: sshd@2-172.31.16.44:22-139.178.89.65:54078.service: Deactivated successfully. May 10 00:44:15.578971 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:44:15.579637 systemd-logind[1730]: Session 3 logged out. Waiting for processes to exit. May 10 00:44:15.580665 systemd-logind[1730]: Removed session 3. May 10 00:44:15.599752 systemd[1]: Started sshd@3-172.31.16.44:22-139.178.89.65:54084.service. May 10 00:44:15.754169 sshd[1963]: Accepted publickey for core from 139.178.89.65 port 54084 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:44:15.755988 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:15.760528 systemd-logind[1730]: New session 4 of user core. May 10 00:44:15.760997 systemd[1]: Started session-4.scope. May 10 00:44:15.886460 sshd[1963]: pam_unix(sshd:session): session closed for user core May 10 00:44:15.889159 systemd[1]: sshd@3-172.31.16.44:22-139.178.89.65:54084.service: Deactivated successfully. May 10 00:44:15.889810 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:44:15.890304 systemd-logind[1730]: Session 4 logged out. Waiting for processes to exit. May 10 00:44:15.891001 systemd-logind[1730]: Removed session 4. May 10 00:44:15.912577 systemd[1]: Started sshd@4-172.31.16.44:22-139.178.89.65:54092.service. 
May 10 00:44:16.073464 sshd[1969]: Accepted publickey for core from 139.178.89.65 port 54092 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ May 10 00:44:16.074356 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:16.079114 systemd-logind[1730]: New session 5 of user core. May 10 00:44:16.079320 systemd[1]: Started session-5.scope. May 10 00:44:16.228253 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:44:16.228607 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:44:16.256722 systemd[1]: Starting docker.service... May 10 00:44:16.297832 env[1982]: time="2025-05-10T00:44:16.297773267Z" level=info msg="Starting up" May 10 00:44:16.299187 env[1982]: time="2025-05-10T00:44:16.299159212Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:44:16.299297 env[1982]: time="2025-05-10T00:44:16.299284483Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:44:16.299375 env[1982]: time="2025-05-10T00:44:16.299362151Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:44:16.299418 env[1982]: time="2025-05-10T00:44:16.299410226Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:44:16.301059 env[1982]: time="2025-05-10T00:44:16.300980748Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:44:16.301157 env[1982]: time="2025-05-10T00:44:16.301142969Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:44:16.301231 env[1982]: time="2025-05-10T00:44:16.301217902Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:44:16.301290 env[1982]: time="2025-05-10T00:44:16.301281143Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:44:16.333406 env[1982]: time="2025-05-10T00:44:16.332819076Z" level=info msg="Loading containers: start." May 10 00:44:16.489067 kernel: Initializing XFRM netlink socket May 10 00:44:16.526988 env[1982]: time="2025-05-10T00:44:16.526943872Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:44:16.527923 (udev-worker)[1992]: Network interface NamePolicy= disabled on kernel command line. May 10 00:44:16.592342 systemd-networkd[1465]: docker0: Link UP May 10 00:44:16.609624 env[1982]: time="2025-05-10T00:44:16.609573187Z" level=info msg="Loading containers: done." May 10 00:44:16.619488 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1086511088-merged.mount: Deactivated successfully. May 10 00:44:16.626180 env[1982]: time="2025-05-10T00:44:16.626126473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:44:16.626374 env[1982]: time="2025-05-10T00:44:16.626312862Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:44:16.626425 env[1982]: time="2025-05-10T00:44:16.626405676Z" level=info msg="Daemon has completed initialization" May 10 00:44:16.644432 systemd[1]: Started docker.service. May 10 00:44:16.652225 env[1982]: time="2025-05-10T00:44:16.652152990Z" level=info msg="API listen on /run/docker.sock" May 10 00:44:17.948562 env[1739]: time="2025-05-10T00:44:17.948517360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 00:44:18.578456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778645854.mount: Deactivated successfully. May 10 00:44:18.579438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 10 00:44:18.579579 systemd[1]: Stopped kubelet.service. May 10 00:44:18.579622 systemd[1]: kubelet.service: Consumed 1.084s CPU time. May 10 00:44:18.581076 systemd[1]: Starting kubelet.service... May 10 00:44:18.779558 systemd[1]: Started kubelet.service. May 10 00:44:18.857084 kubelet[2108]: E0510 00:44:18.856870 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:18.861199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:18.861373 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:20.846278 env[1739]: time="2025-05-10T00:44:20.846221301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:20.849307 env[1739]: time="2025-05-10T00:44:20.849256032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:20.853271 env[1739]: time="2025-05-10T00:44:20.853227009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:20.856310 env[1739]: time="2025-05-10T00:44:20.856262169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:20.857086 env[1739]: time="2025-05-10T00:44:20.857053630Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 10 00:44:20.859234 env[1739]: time="2025-05-10T00:44:20.859209866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 00:44:20.872884 amazon-ssm-agent[1767]: 2025-05-10 00:44:20 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. May 10 00:44:22.967445 env[1739]: time="2025-05-10T00:44:22.967396486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:22.969511 env[1739]: time="2025-05-10T00:44:22.969468017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:22.971502 env[1739]: time="2025-05-10T00:44:22.971460718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:22.973336 env[1739]: time="2025-05-10T00:44:22.973302651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:22.974082 env[1739]: time="2025-05-10T00:44:22.974027929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 10 00:44:22.974904 env[1739]: time="2025-05-10T00:44:22.974877753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 
10 00:44:24.949985 env[1739]: time="2025-05-10T00:44:24.949871390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.952252 env[1739]: time="2025-05-10T00:44:24.952213924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.953934 env[1739]: time="2025-05-10T00:44:24.953903004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.955850 env[1739]: time="2025-05-10T00:44:24.955815718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.956644 env[1739]: time="2025-05-10T00:44:24.956609116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 10 00:44:24.957132 env[1739]: time="2025-05-10T00:44:24.957108174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 00:44:26.007689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537697022.mount: Deactivated successfully. 
May 10 00:44:26.783796 env[1739]: time="2025-05-10T00:44:26.783732247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:26.784475 env[1739]: time="2025-05-10T00:44:26.784451099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:26.786718 env[1739]: time="2025-05-10T00:44:26.786671572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:26.790816 env[1739]: time="2025-05-10T00:44:26.790769530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:26.794501 env[1739]: time="2025-05-10T00:44:26.794419633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 10 00:44:26.795232 env[1739]: time="2025-05-10T00:44:26.795199416Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:44:27.291058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778788827.mount: Deactivated successfully. 
May 10 00:44:28.203398 env[1739]: time="2025-05-10T00:44:28.203338017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.205358 env[1739]: time="2025-05-10T00:44:28.205320792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.207986 env[1739]: time="2025-05-10T00:44:28.207951604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.210058 env[1739]: time="2025-05-10T00:44:28.210006247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.210912 env[1739]: time="2025-05-10T00:44:28.210862065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 00:44:28.212119 env[1739]: time="2025-05-10T00:44:28.212078052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:44:28.710143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318387412.mount: Deactivated successfully. 
May 10 00:44:28.716138 env[1739]: time="2025-05-10T00:44:28.716075603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.718499 env[1739]: time="2025-05-10T00:44:28.718458638Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.720129 env[1739]: time="2025-05-10T00:44:28.720091338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.722081 env[1739]: time="2025-05-10T00:44:28.722007251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.722669 env[1739]: time="2025-05-10T00:44:28.722634743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 10 00:44:28.723380 env[1739]: time="2025-05-10T00:44:28.723354869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 10 00:44:29.112319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:44:29.112510 systemd[1]: Stopped kubelet.service. May 10 00:44:29.113944 systemd[1]: Starting kubelet.service... May 10 00:44:29.260863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876961667.mount: Deactivated successfully. May 10 00:44:29.326169 systemd[1]: Started kubelet.service. 
May 10 00:44:29.412167 kubelet[2117]: E0510 00:44:29.411655 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:29.413539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:29.413663 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:32.177936 env[1739]: time="2025-05-10T00:44:32.177888125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:32.180298 env[1739]: time="2025-05-10T00:44:32.180245692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:32.182127 env[1739]: time="2025-05-10T00:44:32.182086705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:32.184165 env[1739]: time="2025-05-10T00:44:32.184134691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:32.185332 env[1739]: time="2025-05-10T00:44:32.185290196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 10 00:44:35.300350 systemd[1]: Stopped kubelet.service. May 10 00:44:35.303094 systemd[1]: Starting kubelet.service... 
May 10 00:44:35.335354 systemd[1]: Reloading. May 10 00:44:35.446175 /usr/lib/systemd/system-generators/torcx-generator[2165]: time="2025-05-10T00:44:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:35.448299 /usr/lib/systemd/system-generators/torcx-generator[2165]: time="2025-05-10T00:44:35Z" level=info msg="torcx already run" May 10 00:44:35.563318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:35.563777 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:35.589913 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:35.701447 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 10 00:44:35.714488 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:44:35.714585 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:44:35.714842 systemd[1]: Stopped kubelet.service. May 10 00:44:35.716868 systemd[1]: Starting kubelet.service... May 10 00:44:35.968673 systemd[1]: Started kubelet.service. May 10 00:44:36.026575 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:44:36.026575 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:44:36.026575 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:36.028308 kubelet[2228]: I0510 00:44:36.028245 2228 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:44:36.531168 kubelet[2228]: I0510 00:44:36.531128 2228 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:44:36.531168 kubelet[2228]: I0510 00:44:36.531156 2228 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:44:36.531507 kubelet[2228]: I0510 00:44:36.531483 2228 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:44:36.584659 kubelet[2228]: I0510 00:44:36.584619 2228 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:44:36.586334 kubelet[2228]: E0510 00:44:36.586305 2228 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:36.596241 kubelet[2228]: E0510 00:44:36.596174 2228 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:44:36.596241 kubelet[2228]: I0510 00:44:36.596237 2228 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:44:36.600662 kubelet[2228]: I0510 00:44:36.600626 2228 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:44:36.603240 kubelet[2228]: I0510 00:44:36.603197 2228 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:44:36.603510 kubelet[2228]: I0510 00:44:36.603468 2228 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:44:36.603888 kubelet[2228]: I0510 00:44:36.603508 2228 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved"
:{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:44:36.604030 kubelet[2228]: I0510 00:44:36.603898 2228 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:44:36.604030 kubelet[2228]: I0510 00:44:36.603914 2228 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:44:36.604146 kubelet[2228]: I0510 00:44:36.604073 2228 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:36.607745 kubelet[2228]: I0510 00:44:36.607697 2228 kubelet.go:408] "Attempting to sync node with API server" May 10 00:44:36.607745 kubelet[2228]: I0510 00:44:36.607736 2228 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:44:36.607973 kubelet[2228]: I0510 00:44:36.607772 2228 kubelet.go:314] "Adding apiserver pod source" May 10 00:44:36.607973 kubelet[2228]: I0510 00:44:36.607786 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:44:36.647422 kubelet[2228]: W0510 00:44:36.647222 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:36.647422 kubelet[2228]: E0510 00:44:36.647282 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" 
logger="UnhandledError" May 10 00:44:36.647422 kubelet[2228]: W0510 00:44:36.647345 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:36.647422 kubelet[2228]: E0510 00:44:36.647369 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:36.647972 kubelet[2228]: I0510 00:44:36.647955 2228 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:44:36.657686 kubelet[2228]: I0510 00:44:36.657654 2228 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:44:36.659433 kubelet[2228]: W0510 00:44:36.659405 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 00:44:36.660215 kubelet[2228]: I0510 00:44:36.660194 2228 server.go:1269] "Started kubelet" May 10 00:44:36.668111 kubelet[2228]: I0510 00:44:36.668075 2228 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:44:36.669232 kubelet[2228]: I0510 00:44:36.669155 2228 server.go:460] "Adding debug handlers to kubelet server" May 10 00:44:36.670571 kubelet[2228]: I0510 00:44:36.670523 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:44:36.671071 kubelet[2228]: I0510 00:44:36.671028 2228 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:44:36.673396 kubelet[2228]: E0510 00:44:36.671535 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.44:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-44.183e03cb2e86eb5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-44,UID:ip-172-31-16-44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-44,},FirstTimestamp:2025-05-10 00:44:36.660169564 +0000 UTC m=+0.688057157,LastTimestamp:2025-05-10 00:44:36.660169564 +0000 UTC m=+0.688057157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-44,}" May 10 00:44:36.674218 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 10 00:44:36.674336 kubelet[2228]: I0510 00:44:36.674321 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:44:36.675830 kubelet[2228]: I0510 00:44:36.675804 2228 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:44:36.680758 kubelet[2228]: E0510 00:44:36.680738 2228 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:44:36.681084 kubelet[2228]: E0510 00:44:36.681071 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:36.681170 kubelet[2228]: I0510 00:44:36.681163 2228 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:44:36.681431 kubelet[2228]: I0510 00:44:36.681419 2228 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:44:36.681536 kubelet[2228]: I0510 00:44:36.681529 2228 reconciler.go:26] "Reconciler: start to sync state" May 10 00:44:36.682215 kubelet[2228]: I0510 00:44:36.682201 2228 factory.go:221] Registration of the systemd container factory successfully May 10 00:44:36.682371 kubelet[2228]: I0510 00:44:36.682357 2228 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:44:36.684204 kubelet[2228]: W0510 00:44:36.684165 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:36.684321 kubelet[2228]: E0510 00:44:36.684307 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:36.685421 kubelet[2228]: I0510 00:44:36.685407 2228 factory.go:221] Registration of the containerd container factory successfully May 10 00:44:36.686932 kubelet[2228]: E0510 00:44:36.686906 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": dial tcp 172.31.16.44:6443: connect: connection refused" interval="200ms" May 10 00:44:36.703388 kubelet[2228]: I0510 00:44:36.703335 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:44:36.709819 kubelet[2228]: I0510 00:44:36.709797 2228 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:44:36.709992 kubelet[2228]: I0510 00:44:36.709980 2228 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:44:36.710257 kubelet[2228]: I0510 00:44:36.710236 2228 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:36.713712 kubelet[2228]: I0510 00:44:36.710213 2228 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:44:36.713838 kubelet[2228]: I0510 00:44:36.713829 2228 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:44:36.713899 kubelet[2228]: I0510 00:44:36.713893 2228 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:44:36.713990 kubelet[2228]: E0510 00:44:36.713973 2228 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:44:36.714656 kubelet[2228]: I0510 00:44:36.714640 2228 policy_none.go:49] "None policy: Start" May 10 00:44:36.716267 kubelet[2228]: W0510 00:44:36.715997 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:36.716483 kubelet[2228]: E0510 00:44:36.716444 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:36.717340 kubelet[2228]: I0510 00:44:36.717214 2228 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:44:36.717619 kubelet[2228]: I0510 00:44:36.717608 2228 state_mem.go:35] "Initializing new in-memory state store" May 10 00:44:36.725065 systemd[1]: Created slice kubepods.slice. May 10 00:44:36.730192 systemd[1]: Created slice kubepods-burstable.slice. May 10 00:44:36.733643 systemd[1]: Created slice kubepods-besteffort.slice. 
May 10 00:44:36.741205 kubelet[2228]: I0510 00:44:36.741176 2228 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:44:36.741342 kubelet[2228]: I0510 00:44:36.741329 2228 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:44:36.741402 kubelet[2228]: I0510 00:44:36.741344 2228 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:44:36.741732 kubelet[2228]: I0510 00:44:36.741717 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:44:36.747457 kubelet[2228]: E0510 00:44:36.747426 2228 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-44\" not found" May 10 00:44:36.823456 systemd[1]: Created slice kubepods-burstable-poda0539a046fc2ba150f2a4b68eefb2c3d.slice. May 10 00:44:36.839842 systemd[1]: Created slice kubepods-burstable-pod7b0ea17c00c69a70b502b0733897b0fc.slice. May 10 00:44:36.843743 kubelet[2228]: I0510 00:44:36.843544 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:36.844280 kubelet[2228]: E0510 00:44:36.844250 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.44:6443/api/v1/nodes\": dial tcp 172.31.16.44:6443: connect: connection refused" node="ip-172-31-16-44" May 10 00:44:36.848136 systemd[1]: Created slice kubepods-burstable-pod94eda1404c152403afc6f1a8804dad0f.slice. 
May 10 00:44:36.887878 kubelet[2228]: E0510 00:44:36.887835 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": dial tcp 172.31.16.44:6443: connect: connection refused" interval="400ms" May 10 00:44:36.983280 kubelet[2228]: I0510 00:44:36.983211 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:36.983280 kubelet[2228]: I0510 00:44:36.983259 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:36.983280 kubelet[2228]: I0510 00:44:36.983278 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:36.983498 kubelet[2228]: I0510 00:44:36.983297 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:36.983498 kubelet[2228]: I0510 
00:44:36.983314 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:36.983498 kubelet[2228]: I0510 00:44:36.983329 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:36.983498 kubelet[2228]: I0510 00:44:36.983343 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:36.983498 kubelet[2228]: I0510 00:44:36.983358 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:36.983815 kubelet[2228]: I0510 00:44:36.983374 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94eda1404c152403afc6f1a8804dad0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-44\" (UID: \"94eda1404c152403afc6f1a8804dad0f\") " 
pod="kube-system/kube-scheduler-ip-172-31-16-44" May 10 00:44:37.045997 kubelet[2228]: I0510 00:44:37.045957 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:37.046455 kubelet[2228]: E0510 00:44:37.046353 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.44:6443/api/v1/nodes\": dial tcp 172.31.16.44:6443: connect: connection refused" node="ip-172-31-16-44" May 10 00:44:37.138961 env[1739]: time="2025-05-10T00:44:37.138853927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-44,Uid:a0539a046fc2ba150f2a4b68eefb2c3d,Namespace:kube-system,Attempt:0,}" May 10 00:44:37.146671 env[1739]: time="2025-05-10T00:44:37.146624563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-44,Uid:7b0ea17c00c69a70b502b0733897b0fc,Namespace:kube-system,Attempt:0,}" May 10 00:44:37.151432 env[1739]: time="2025-05-10T00:44:37.151385704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-44,Uid:94eda1404c152403afc6f1a8804dad0f,Namespace:kube-system,Attempt:0,}" May 10 00:44:37.288923 kubelet[2228]: E0510 00:44:37.288882 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": dial tcp 172.31.16.44:6443: connect: connection refused" interval="800ms" May 10 00:44:37.450312 kubelet[2228]: I0510 00:44:37.447987 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:37.451157 kubelet[2228]: E0510 00:44:37.451048 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.44:6443/api/v1/nodes\": dial tcp 172.31.16.44:6443: connect: connection refused" node="ip-172-31-16-44" May 10 00:44:37.478518 kubelet[2228]: W0510 00:44:37.478457 2228 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:37.478647 kubelet[2228]: E0510 00:44:37.478526 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:37.591409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320864324.mount: Deactivated successfully. May 10 00:44:37.599309 env[1739]: time="2025-05-10T00:44:37.599265389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.600406 env[1739]: time="2025-05-10T00:44:37.600372236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.601440 kubelet[2228]: W0510 00:44:37.601384 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:37.601550 kubelet[2228]: E0510 00:44:37.601448 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 
00:44:37.603691 env[1739]: time="2025-05-10T00:44:37.603521455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.604888 env[1739]: time="2025-05-10T00:44:37.604860752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.606150 env[1739]: time="2025-05-10T00:44:37.606118928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.608530 env[1739]: time="2025-05-10T00:44:37.608483145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.610064 env[1739]: time="2025-05-10T00:44:37.610017621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.610975 env[1739]: time="2025-05-10T00:44:37.610938542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.616126 env[1739]: time="2025-05-10T00:44:37.616079741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.617759 env[1739]: time="2025-05-10T00:44:37.617712268Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.618729 env[1739]: time="2025-05-10T00:44:37.618690639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.624061 env[1739]: time="2025-05-10T00:44:37.623992737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:37.659765 env[1739]: time="2025-05-10T00:44:37.659688663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:37.659970 env[1739]: time="2025-05-10T00:44:37.659742310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:37.659970 env[1739]: time="2025-05-10T00:44:37.659760056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:37.659970 env[1739]: time="2025-05-10T00:44:37.659912678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f pid=2272 runtime=io.containerd.runc.v2 May 10 00:44:37.678853 systemd[1]: Started cri-containerd-a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f.scope. May 10 00:44:37.693685 env[1739]: time="2025-05-10T00:44:37.693586819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:37.693849 env[1739]: time="2025-05-10T00:44:37.693692641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:37.693849 env[1739]: time="2025-05-10T00:44:37.693723456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:37.693955 env[1739]: time="2025-05-10T00:44:37.693882172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d13385a6a0d85fc9bcf66db6deef3816421dcf7a99e160d0db366f6cab821c24 pid=2286 runtime=io.containerd.runc.v2 May 10 00:44:37.708251 env[1739]: time="2025-05-10T00:44:37.708022171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:37.708251 env[1739]: time="2025-05-10T00:44:37.708190389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:37.709448 env[1739]: time="2025-05-10T00:44:37.708222480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:37.709984 env[1739]: time="2025-05-10T00:44:37.709938780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40 pid=2314 runtime=io.containerd.runc.v2 May 10 00:44:37.724912 systemd[1]: Started cri-containerd-d13385a6a0d85fc9bcf66db6deef3816421dcf7a99e160d0db366f6cab821c24.scope. May 10 00:44:37.736613 systemd[1]: Started cri-containerd-a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40.scope. 
May 10 00:44:37.806285 env[1739]: time="2025-05-10T00:44:37.806238254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-44,Uid:94eda1404c152403afc6f1a8804dad0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f\"" May 10 00:44:37.810486 kubelet[2228]: W0510 00:44:37.810407 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:37.810486 kubelet[2228]: E0510 00:44:37.810456 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:37.816061 env[1739]: time="2025-05-10T00:44:37.814758364Z" level=info msg="CreateContainer within sandbox \"a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:44:37.818492 env[1739]: time="2025-05-10T00:44:37.818451582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-44,Uid:a0539a046fc2ba150f2a4b68eefb2c3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d13385a6a0d85fc9bcf66db6deef3816421dcf7a99e160d0db366f6cab821c24\"" May 10 00:44:37.821706 env[1739]: time="2025-05-10T00:44:37.821663332Z" level=info msg="CreateContainer within sandbox \"d13385a6a0d85fc9bcf66db6deef3816421dcf7a99e160d0db366f6cab821c24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:44:37.843692 env[1739]: time="2025-05-10T00:44:37.843641184Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-44,Uid:7b0ea17c00c69a70b502b0733897b0fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40\"" May 10 00:44:37.847402 kubelet[2228]: W0510 00:44:37.847359 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:37.847644 kubelet[2228]: E0510 00:44:37.847454 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:37.847907 env[1739]: time="2025-05-10T00:44:37.847880713Z" level=info msg="CreateContainer within sandbox \"a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:44:37.865930 env[1739]: time="2025-05-10T00:44:37.865865174Z" level=info msg="CreateContainer within sandbox \"a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb\"" May 10 00:44:37.866687 env[1739]: time="2025-05-10T00:44:37.866656843Z" level=info msg="StartContainer for \"77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb\"" May 10 00:44:37.871108 env[1739]: time="2025-05-10T00:44:37.871065215Z" level=info msg="CreateContainer within sandbox \"a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f\"" May 10 00:44:37.871866 env[1739]: time="2025-05-10T00:44:37.871840285Z" level=info msg="StartContainer for \"e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f\"" May 10 00:44:37.873219 env[1739]: time="2025-05-10T00:44:37.873193645Z" level=info msg="CreateContainer within sandbox \"d13385a6a0d85fc9bcf66db6deef3816421dcf7a99e160d0db366f6cab821c24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e67918ac15e08ddef09733cfbd7d1b2cbed7210023dace26fd585dbbfa4eb232\"" May 10 00:44:37.873734 env[1739]: time="2025-05-10T00:44:37.873707587Z" level=info msg="StartContainer for \"e67918ac15e08ddef09733cfbd7d1b2cbed7210023dace26fd585dbbfa4eb232\"" May 10 00:44:37.887190 systemd[1]: Started cri-containerd-77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb.scope. May 10 00:44:37.912943 systemd[1]: Started cri-containerd-e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f.scope. May 10 00:44:37.934819 systemd[1]: Started cri-containerd-e67918ac15e08ddef09733cfbd7d1b2cbed7210023dace26fd585dbbfa4eb232.scope. 
May 10 00:44:38.011302 env[1739]: time="2025-05-10T00:44:38.011205118Z" level=info msg="StartContainer for \"77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb\" returns successfully" May 10 00:44:38.020337 env[1739]: time="2025-05-10T00:44:38.020285450Z" level=info msg="StartContainer for \"e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f\" returns successfully" May 10 00:44:38.055894 env[1739]: time="2025-05-10T00:44:38.055845230Z" level=info msg="StartContainer for \"e67918ac15e08ddef09733cfbd7d1b2cbed7210023dace26fd585dbbfa4eb232\" returns successfully" May 10 00:44:38.090618 kubelet[2228]: E0510 00:44:38.090551 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": dial tcp 172.31.16.44:6443: connect: connection refused" interval="1.6s" May 10 00:44:38.253064 kubelet[2228]: I0510 00:44:38.253017 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:38.253400 kubelet[2228]: E0510 00:44:38.253371 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.44:6443/api/v1/nodes\": dial tcp 172.31.16.44:6443: connect: connection refused" node="ip-172-31-16-44" May 10 00:44:38.736167 kubelet[2228]: E0510 00:44:38.736123 2228 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:39.691200 kubelet[2228]: E0510 00:44:39.691144 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": dial tcp 172.31.16.44:6443: connect: connection refused" interval="3.2s" May 10 00:44:39.857245 kubelet[2228]: I0510 00:44:39.857205 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:39.857610 kubelet[2228]: E0510 00:44:39.857576 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.44:6443/api/v1/nodes\": dial tcp 172.31.16.44:6443: connect: connection refused" node="ip-172-31-16-44" May 10 00:44:40.012522 kubelet[2228]: W0510 00:44:40.012379 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:40.012522 kubelet[2228]: E0510 00:44:40.012458 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:40.251231 kubelet[2228]: W0510 00:44:40.251145 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:40.251231 kubelet[2228]: E0510 00:44:40.251223 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-44&limit=500&resourceVersion=0\": dial tcp 
172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:40.344615 kubelet[2228]: W0510 00:44:40.344491 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:40.344615 kubelet[2228]: E0510 00:44:40.344561 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:40.534513 kubelet[2228]: W0510 00:44:40.534451 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.44:6443: connect: connection refused May 10 00:44:40.534669 kubelet[2228]: E0510 00:44:40.534518 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.44:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:42.227499 kubelet[2228]: E0510 00:44:42.227396 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.44:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-44.183e03cb2e86eb5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-44,UID:ip-172-31-16-44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-44,},FirstTimestamp:2025-05-10 00:44:36.660169564 +0000 UTC m=+0.688057157,LastTimestamp:2025-05-10 00:44:36.660169564 +0000 UTC m=+0.688057157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-44,}" May 10 00:44:43.060201 kubelet[2228]: I0510 00:44:43.060176 2228 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:44.548602 kubelet[2228]: E0510 00:44:44.548566 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-44\" not found" node="ip-172-31-16-44" May 10 00:44:44.672571 kubelet[2228]: I0510 00:44:44.672541 2228 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-44" May 10 00:44:44.672780 kubelet[2228]: E0510 00:44:44.672767 2228 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-44\": node \"ip-172-31-16-44\" not found" May 10 00:44:44.688839 kubelet[2228]: E0510 00:44:44.688798 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:44.789049 kubelet[2228]: E0510 00:44:44.788973 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:44.890058 kubelet[2228]: E0510 00:44:44.889919 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:44.991159 kubelet[2228]: E0510 00:44:44.991118 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.091980 
kubelet[2228]: E0510 00:44:45.091944 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.193147 kubelet[2228]: E0510 00:44:45.193015 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.293856 kubelet[2228]: E0510 00:44:45.293803 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.394456 kubelet[2228]: E0510 00:44:45.394408 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.495261 kubelet[2228]: E0510 00:44:45.495163 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.596293 kubelet[2228]: E0510 00:44:45.596250 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.696934 kubelet[2228]: E0510 00:44:45.696893 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.797031 kubelet[2228]: E0510 00:44:45.796994 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.897708 kubelet[2228]: E0510 00:44:45.897659 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:45.998618 kubelet[2228]: E0510 00:44:45.998571 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:46.100089 kubelet[2228]: E0510 00:44:46.099477 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:46.199824 kubelet[2228]: E0510 
00:44:46.199782 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-44\" not found" May 10 00:44:46.446231 systemd[1]: Reloading. May 10 00:44:46.558005 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-05-10T00:44:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:46.558540 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-05-10T00:44:46Z" level=info msg="torcx already run" May 10 00:44:46.639859 kubelet[2228]: I0510 00:44:46.639794 2228 apiserver.go:52] "Watching apiserver" May 10 00:44:46.670730 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:46.670755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:46.682361 kubelet[2228]: I0510 00:44:46.682311 2228 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:44:46.690260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:46.825234 systemd[1]: Stopping kubelet.service... May 10 00:44:46.842859 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:44:46.843125 systemd[1]: Stopped kubelet.service. May 10 00:44:46.843193 systemd[1]: kubelet.service: Consumed 1.013s CPU time. May 10 00:44:46.845527 systemd[1]: Starting kubelet.service... May 10 00:44:48.142642 systemd[1]: Started kubelet.service. 
May 10 00:44:48.247866 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:48.247866 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:44:48.247866 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:48.249400 kubelet[2579]: I0510 00:44:48.247922 2579 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:44:48.264926 kubelet[2579]: I0510 00:44:48.264888 2579 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:44:48.264926 kubelet[2579]: I0510 00:44:48.264926 2579 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:44:48.265527 kubelet[2579]: I0510 00:44:48.265507 2579 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:44:48.269855 kubelet[2579]: I0510 00:44:48.269824 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 10 00:44:48.275791 kubelet[2579]: I0510 00:44:48.275757 2579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:44:48.282408 kubelet[2579]: E0510 00:44:48.282374 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:44:48.282580 kubelet[2579]: I0510 00:44:48.282566 2579 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:44:48.286195 sudo[2594]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:44:48.286510 sudo[2594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:44:48.292588 kubelet[2579]: I0510 00:44:48.292563 2579 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:44:48.292963 kubelet[2579]: I0510 00:44:48.292951 2579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:44:48.293227 kubelet[2579]: I0510 00:44:48.293197 2579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:44:48.293473 kubelet[2579]: I0510 00:44:48.293301 2579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} May 10 00:44:48.293756 kubelet[2579]: I0510 00:44:48.293742 2579 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:44:48.293837 kubelet[2579]: I0510 00:44:48.293829 2579 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:44:48.293953 kubelet[2579]: I0510 00:44:48.293944 2579 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:48.294154 kubelet[2579]: I0510 00:44:48.294143 2579 kubelet.go:408] "Attempting to sync node with API server" May 10 00:44:48.294249 kubelet[2579]: I0510 00:44:48.294239 2579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:44:48.294354 kubelet[2579]: I0510 00:44:48.294346 2579 kubelet.go:314] "Adding apiserver pod source" May 10 00:44:48.294508 kubelet[2579]: I0510 00:44:48.294421 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:44:48.303064 kubelet[2579]: I0510 00:44:48.303009 2579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:44:48.303666 kubelet[2579]: I0510 00:44:48.303615 2579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:44:48.304207 kubelet[2579]: I0510 00:44:48.304191 2579 server.go:1269] "Started kubelet" May 10 00:44:48.312836 kubelet[2579]: I0510 00:44:48.311982 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:44:48.321299 kubelet[2579]: I0510 00:44:48.321256 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:44:48.331189 kubelet[2579]: I0510 00:44:48.328745 2579 server.go:460] "Adding debug handlers to kubelet server" May 10 00:44:48.339314 kubelet[2579]: I0510 00:44:48.339232 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:44:48.339830 kubelet[2579]: I0510 00:44:48.339810 2579 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:44:48.340249 kubelet[2579]: I0510 00:44:48.340226 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:44:48.342793 kubelet[2579]: I0510 00:44:48.342776 2579 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:44:48.348774 kubelet[2579]: E0510 00:44:48.344902 2579 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:44:48.348774 kubelet[2579]: I0510 00:44:48.345839 2579 factory.go:221] Registration of the systemd container factory successfully May 10 00:44:48.348774 kubelet[2579]: I0510 00:44:48.346122 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:44:48.352907 kubelet[2579]: I0510 00:44:48.350703 2579 factory.go:221] Registration of the containerd container factory successfully May 10 00:44:48.354300 kubelet[2579]: I0510 00:44:48.354278 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:44:48.354617 kubelet[2579]: I0510 00:44:48.354603 2579 reconciler.go:26] "Reconciler: start to sync state" May 10 00:44:48.362866 kubelet[2579]: I0510 00:44:48.362819 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:44:48.364447 kubelet[2579]: I0510 00:44:48.364415 2579 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:44:48.364583 kubelet[2579]: I0510 00:44:48.364455 2579 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:44:48.364583 kubelet[2579]: I0510 00:44:48.364480 2579 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:44:48.364583 kubelet[2579]: E0510 00:44:48.364531 2579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:44:48.447192 kubelet[2579]: I0510 00:44:48.447085 2579 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:44:48.447192 kubelet[2579]: I0510 00:44:48.447105 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:44:48.447192 kubelet[2579]: I0510 00:44:48.447125 2579 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:48.447424 kubelet[2579]: I0510 00:44:48.447369 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:44:48.447424 kubelet[2579]: I0510 00:44:48.447392 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:44:48.447424 kubelet[2579]: I0510 00:44:48.447419 2579 policy_none.go:49] "None policy: Start" May 10 00:44:48.448914 kubelet[2579]: I0510 00:44:48.448482 2579 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:44:48.448914 kubelet[2579]: I0510 00:44:48.448507 2579 state_mem.go:35] "Initializing new in-memory state store" May 10 00:44:48.448914 kubelet[2579]: I0510 00:44:48.448814 2579 state_mem.go:75] "Updated machine memory state" May 10 00:44:48.453797 kubelet[2579]: I0510 00:44:48.453750 2579 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:44:48.453977 kubelet[2579]: I0510 00:44:48.453963 2579 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:44:48.454195 kubelet[2579]: I0510 00:44:48.453982 2579 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:44:48.454600 kubelet[2579]: I0510 00:44:48.454584 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:44:48.562873 kubelet[2579]: I0510 00:44:48.562846 2579 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-44" May 10 00:44:48.575460 kubelet[2579]: I0510 00:44:48.575428 2579 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-44" May 10 00:44:48.575621 kubelet[2579]: I0510 00:44:48.575508 2579 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-44" May 10 00:44:48.656137 kubelet[2579]: I0510 00:44:48.656062 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:48.656137 kubelet[2579]: I0510 00:44:48.656116 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:48.656393 kubelet[2579]: I0510 00:44:48.656161 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:48.656393 kubelet[2579]: I0510 00:44:48.656184 2579 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:48.656393 kubelet[2579]: I0510 00:44:48.656209 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:48.656393 kubelet[2579]: I0510 00:44:48.656232 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:48.656393 kubelet[2579]: I0510 00:44:48.656252 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b0ea17c00c69a70b502b0733897b0fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-44\" (UID: \"7b0ea17c00c69a70b502b0733897b0fc\") " pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:48.656546 kubelet[2579]: I0510 00:44:48.656275 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94eda1404c152403afc6f1a8804dad0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-44\" (UID: \"94eda1404c152403afc6f1a8804dad0f\") " pod="kube-system/kube-scheduler-ip-172-31-16-44" May 10 
00:44:48.656546 kubelet[2579]: I0510 00:44:48.656298 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0539a046fc2ba150f2a4b68eefb2c3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-44\" (UID: \"a0539a046fc2ba150f2a4b68eefb2c3d\") " pod="kube-system/kube-apiserver-ip-172-31-16-44" May 10 00:44:49.295825 kubelet[2579]: I0510 00:44:49.295781 2579 apiserver.go:52] "Watching apiserver" May 10 00:44:49.354593 kubelet[2579]: I0510 00:44:49.354550 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:44:49.444196 kubelet[2579]: E0510 00:44:49.444155 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-44\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-44" May 10 00:44:49.464462 kubelet[2579]: I0510 00:44:49.464248 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-44" podStartSLOduration=1.464229418 podStartE2EDuration="1.464229418s" podCreationTimestamp="2025-05-10 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:49.455028637 +0000 UTC m=+1.277845216" watchObservedRunningTime="2025-05-10 00:44:49.464229418 +0000 UTC m=+1.287045971" May 10 00:44:49.473563 kubelet[2579]: I0510 00:44:49.473513 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-44" podStartSLOduration=1.473496957 podStartE2EDuration="1.473496957s" podCreationTimestamp="2025-05-10 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:49.464733961 +0000 UTC m=+1.287550496" watchObservedRunningTime="2025-05-10 00:44:49.473496957 
+0000 UTC m=+1.296313495" May 10 00:44:49.482432 kubelet[2579]: I0510 00:44:49.482363 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-44" podStartSLOduration=1.4823457580000001 podStartE2EDuration="1.482345758s" podCreationTimestamp="2025-05-10 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:49.474194685 +0000 UTC m=+1.297011254" watchObservedRunningTime="2025-05-10 00:44:49.482345758 +0000 UTC m=+1.305162312" May 10 00:44:49.907716 update_engine[1731]: I0510 00:44:49.907099 1731 update_attempter.cc:509] Updating boot flags... May 10 00:44:50.216814 sudo[2594]: pam_unix(sudo:session): session closed for user root May 10 00:44:50.902989 amazon-ssm-agent[1767]: 2025-05-10 00:44:50 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated May 10 00:44:52.581908 sudo[1972]: pam_unix(sudo:session): session closed for user root May 10 00:44:52.605400 sshd[1969]: pam_unix(sshd:session): session closed for user core May 10 00:44:52.608985 systemd[1]: sshd@4-172.31.16.44:22-139.178.89.65:54092.service: Deactivated successfully. May 10 00:44:52.609994 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:44:52.610211 systemd[1]: session-5.scope: Consumed 5.026s CPU time. May 10 00:44:52.610949 systemd-logind[1730]: Session 5 logged out. Waiting for processes to exit. May 10 00:44:52.612222 systemd-logind[1730]: Removed session 5. May 10 00:44:53.048003 kubelet[2579]: I0510 00:44:53.047721 2579 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:44:53.048503 env[1739]: time="2025-05-10T00:44:53.048272958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 10 00:44:53.048915 kubelet[2579]: I0510 00:44:53.048653 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:44:58.250800 systemd[1]: Created slice kubepods-besteffort-pod8e5e8ab3_5313_4bcb_b848_e5db439dfee9.slice. May 10 00:44:58.253641 systemd[1]: Created slice kubepods-burstable-pod0efd06ce_a961_4d40_84e3_2dfa6b234ac5.slice. May 10 00:44:58.276291 systemd[1]: Created slice kubepods-besteffort-pod11e70d08_0a53_4b02_9fb4_ba63e73411e2.slice. May 10 00:44:58.322531 kubelet[2579]: I0510 00:44:58.322461 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-run\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.322531 kubelet[2579]: I0510 00:44:58.322503 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-bpf-maps\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.322531 kubelet[2579]: I0510 00:44:58.322519 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hostproc\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.322531 kubelet[2579]: I0510 00:44:58.322535 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbtj\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-kube-api-access-8fbtj\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 
00:44:58.323106 kubelet[2579]: I0510 00:44:58.322553 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7g7\" (UniqueName: \"kubernetes.io/projected/8e5e8ab3-5313-4bcb-b848-e5db439dfee9-kube-api-access-cq7g7\") pod \"kube-proxy-zmmgw\" (UID: \"8e5e8ab3-5313-4bcb-b848-e5db439dfee9\") " pod="kube-system/kube-proxy-zmmgw" May 10 00:44:58.323106 kubelet[2579]: I0510 00:44:58.322569 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cni-path\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323106 kubelet[2579]: I0510 00:44:58.322583 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-xtables-lock\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323106 kubelet[2579]: I0510 00:44:58.322597 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-kernel\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323106 kubelet[2579]: I0510 00:44:58.322614 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llljd\" (UniqueName: \"kubernetes.io/projected/11e70d08-0a53-4b02-9fb4-ba63e73411e2-kube-api-access-llljd\") pod \"cilium-operator-5d85765b45-lf797\" (UID: \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\") " pod="kube-system/cilium-operator-5d85765b45-lf797" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322631 
2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e5e8ab3-5313-4bcb-b848-e5db439dfee9-xtables-lock\") pod \"kube-proxy-zmmgw\" (UID: \"8e5e8ab3-5313-4bcb-b848-e5db439dfee9\") " pod="kube-system/kube-proxy-zmmgw" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322657 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-etc-cni-netd\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322672 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-clustermesh-secrets\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322687 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e5e8ab3-5313-4bcb-b848-e5db439dfee9-lib-modules\") pod \"kube-proxy-zmmgw\" (UID: \"8e5e8ab3-5313-4bcb-b848-e5db439dfee9\") " pod="kube-system/kube-proxy-zmmgw" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322702 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-cgroup\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323246 kubelet[2579]: I0510 00:44:58.322717 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-config-path\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323402 kubelet[2579]: I0510 00:44:58.322730 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-net\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323402 kubelet[2579]: I0510 00:44:58.322748 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e5e8ab3-5313-4bcb-b848-e5db439dfee9-kube-proxy\") pod \"kube-proxy-zmmgw\" (UID: \"8e5e8ab3-5313-4bcb-b848-e5db439dfee9\") " pod="kube-system/kube-proxy-zmmgw" May 10 00:44:58.323402 kubelet[2579]: I0510 00:44:58.322766 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-lib-modules\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323402 kubelet[2579]: I0510 00:44:58.322783 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hubble-tls\") pod \"cilium-m5w5p\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") " pod="kube-system/cilium-m5w5p" May 10 00:44:58.323402 kubelet[2579]: I0510 00:44:58.322798 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e70d08-0a53-4b02-9fb4-ba63e73411e2-cilium-config-path\") pod 
\"cilium-operator-5d85765b45-lf797\" (UID: \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\") " pod="kube-system/cilium-operator-5d85765b45-lf797" May 10 00:44:58.428776 kubelet[2579]: I0510 00:44:58.428737 2579 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:44:58.559344 env[1739]: time="2025-05-10T00:44:58.558948442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmmgw,Uid:8e5e8ab3-5313-4bcb-b848-e5db439dfee9,Namespace:kube-system,Attempt:0,}" May 10 00:44:58.560112 env[1739]: time="2025-05-10T00:44:58.560079918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5w5p,Uid:0efd06ce-a961-4d40-84e3-2dfa6b234ac5,Namespace:kube-system,Attempt:0,}" May 10 00:44:58.580291 env[1739]: time="2025-05-10T00:44:58.580251815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lf797,Uid:11e70d08-0a53-4b02-9fb4-ba63e73411e2,Namespace:kube-system,Attempt:0,}" May 10 00:44:58.594572 env[1739]: time="2025-05-10T00:44:58.594489424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:58.594714 env[1739]: time="2025-05-10T00:44:58.594580349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:58.594714 env[1739]: time="2025-05-10T00:44:58.594603477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:58.594843 env[1739]: time="2025-05-10T00:44:58.594805903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f647d6c7ef166df9265c23bee25d28c93aac4ab4a66a25aebd8ddbdf0774c0e pid=2762 runtime=io.containerd.runc.v2 May 10 00:44:58.609416 systemd[1]: Started cri-containerd-7f647d6c7ef166df9265c23bee25d28c93aac4ab4a66a25aebd8ddbdf0774c0e.scope. May 10 00:44:58.623366 env[1739]: time="2025-05-10T00:44:58.622478538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:58.623366 env[1739]: time="2025-05-10T00:44:58.622571081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:58.623366 env[1739]: time="2025-05-10T00:44:58.622602959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:58.623366 env[1739]: time="2025-05-10T00:44:58.622782062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f pid=2790 runtime=io.containerd.runc.v2 May 10 00:44:58.657517 systemd[1]: Started cri-containerd-e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f.scope. May 10 00:44:58.671521 env[1739]: time="2025-05-10T00:44:58.671434300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:58.671819 env[1739]: time="2025-05-10T00:44:58.671760849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:58.672133 env[1739]: time="2025-05-10T00:44:58.672074699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:58.674828 env[1739]: time="2025-05-10T00:44:58.674730912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02 pid=2820 runtime=io.containerd.runc.v2 May 10 00:44:58.684352 env[1739]: time="2025-05-10T00:44:58.684309291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmmgw,Uid:8e5e8ab3-5313-4bcb-b848-e5db439dfee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f647d6c7ef166df9265c23bee25d28c93aac4ab4a66a25aebd8ddbdf0774c0e\"" May 10 00:44:58.690728 env[1739]: time="2025-05-10T00:44:58.690675955Z" level=info msg="CreateContainer within sandbox \"7f647d6c7ef166df9265c23bee25d28c93aac4ab4a66a25aebd8ddbdf0774c0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:44:58.711799 systemd[1]: Started cri-containerd-fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02.scope. 
May 10 00:44:58.726290 env[1739]: time="2025-05-10T00:44:58.726247709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5w5p,Uid:0efd06ce-a961-4d40-84e3-2dfa6b234ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\"" May 10 00:44:58.729250 env[1739]: time="2025-05-10T00:44:58.729213035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:44:58.747613 env[1739]: time="2025-05-10T00:44:58.747508086Z" level=info msg="CreateContainer within sandbox \"7f647d6c7ef166df9265c23bee25d28c93aac4ab4a66a25aebd8ddbdf0774c0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"073b4ceba44f41eb2489a5689c12a9793254f5eca800c246ea421ddaafc1f289\"" May 10 00:44:58.748699 env[1739]: time="2025-05-10T00:44:58.748667364Z" level=info msg="StartContainer for \"073b4ceba44f41eb2489a5689c12a9793254f5eca800c246ea421ddaafc1f289\"" May 10 00:44:58.770031 systemd[1]: Started cri-containerd-073b4ceba44f41eb2489a5689c12a9793254f5eca800c246ea421ddaafc1f289.scope. May 10 00:44:58.798304 env[1739]: time="2025-05-10T00:44:58.798271586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lf797,Uid:11e70d08-0a53-4b02-9fb4-ba63e73411e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\"" May 10 00:44:58.826083 env[1739]: time="2025-05-10T00:44:58.825936409Z" level=info msg="StartContainer for \"073b4ceba44f41eb2489a5689c12a9793254f5eca800c246ea421ddaafc1f289\" returns successfully" May 10 00:45:06.573825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659809790.mount: Deactivated successfully. 
May 10 00:45:09.621438 env[1739]: time="2025-05-10T00:45:09.621383194Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.624168 env[1739]: time="2025-05-10T00:45:09.624121677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.625995 env[1739]: time="2025-05-10T00:45:09.625963377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.626519 env[1739]: time="2025-05-10T00:45:09.626487228Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:45:09.628684 env[1739]: time="2025-05-10T00:45:09.628311013Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:45:09.628979 env[1739]: time="2025-05-10T00:45:09.628955825Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:45:09.648815 env[1739]: time="2025-05-10T00:45:09.648773620Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\"" May 10 
00:45:09.649200 env[1739]: time="2025-05-10T00:45:09.649169823Z" level=info msg="StartContainer for \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\"" May 10 00:45:09.686559 systemd[1]: Started cri-containerd-e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec.scope. May 10 00:45:09.715409 env[1739]: time="2025-05-10T00:45:09.715370580Z" level=info msg="StartContainer for \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\" returns successfully" May 10 00:45:09.724658 systemd[1]: cri-containerd-e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec.scope: Deactivated successfully. May 10 00:45:09.834648 env[1739]: time="2025-05-10T00:45:09.834600253Z" level=info msg="shim disconnected" id=e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec May 10 00:45:09.834648 env[1739]: time="2025-05-10T00:45:09.834647044Z" level=warning msg="cleaning up after shim disconnected" id=e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec namespace=k8s.io May 10 00:45:09.834648 env[1739]: time="2025-05-10T00:45:09.834656925Z" level=info msg="cleaning up dead shim" May 10 00:45:09.843115 env[1739]: time="2025-05-10T00:45:09.843069869Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3096 runtime=io.containerd.runc.v2\n" May 10 00:45:10.545049 env[1739]: time="2025-05-10T00:45:10.544976902Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:45:10.563307 env[1739]: time="2025-05-10T00:45:10.563253324Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\"" May 10 
00:45:10.564409 env[1739]: time="2025-05-10T00:45:10.564376961Z" level=info msg="StartContainer for \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\"" May 10 00:45:10.577815 kubelet[2579]: I0510 00:45:10.577749 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zmmgw" podStartSLOduration=17.577728928 podStartE2EDuration="17.577728928s" podCreationTimestamp="2025-05-10 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:59.464546694 +0000 UTC m=+11.287363262" watchObservedRunningTime="2025-05-10 00:45:10.577728928 +0000 UTC m=+22.400545486" May 10 00:45:10.587882 systemd[1]: Started cri-containerd-6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7.scope. May 10 00:45:10.631561 env[1739]: time="2025-05-10T00:45:10.631506112Z" level=info msg="StartContainer for \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\" returns successfully" May 10 00:45:10.641460 systemd[1]: run-containerd-runc-k8s.io-e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec-runc.p74Dl4.mount: Deactivated successfully. May 10 00:45:10.646483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec-rootfs.mount: Deactivated successfully. May 10 00:45:10.660133 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:45:10.660890 systemd[1]: Stopped systemd-sysctl.service. May 10 00:45:10.661335 systemd[1]: Stopping systemd-sysctl.service... May 10 00:45:10.663902 systemd[1]: Starting systemd-sysctl.service... May 10 00:45:10.677882 systemd[1]: cri-containerd-6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7.scope: Deactivated successfully. May 10 00:45:10.696449 systemd[1]: Finished systemd-sysctl.service. 
May 10 00:45:10.715352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7-rootfs.mount: Deactivated successfully.
May 10 00:45:10.728841 env[1739]: time="2025-05-10T00:45:10.728760320Z" level=info msg="shim disconnected" id=6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7
May 10 00:45:10.728841 env[1739]: time="2025-05-10T00:45:10.728831623Z" level=warning msg="cleaning up after shim disconnected" id=6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7 namespace=k8s.io
May 10 00:45:10.729213 env[1739]: time="2025-05-10T00:45:10.728847331Z" level=info msg="cleaning up dead shim"
May 10 00:45:10.738878 env[1739]: time="2025-05-10T00:45:10.738836588Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3163 runtime=io.containerd.runc.v2\n"
May 10 00:45:11.564226 env[1739]: time="2025-05-10T00:45:11.564168275Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:45:11.602593 env[1739]: time="2025-05-10T00:45:11.602538756Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\""
May 10 00:45:11.603695 env[1739]: time="2025-05-10T00:45:11.603655525Z" level=info msg="StartContainer for \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\""
May 10 00:45:11.629158 systemd[1]: Started cri-containerd-8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5.scope.
May 10 00:45:11.643138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194529895.mount: Deactivated successfully.
May 10 00:45:11.702751 env[1739]: time="2025-05-10T00:45:11.702674659Z" level=info msg="StartContainer for \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\" returns successfully"
May 10 00:45:11.741077 systemd[1]: cri-containerd-8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5.scope: Deactivated successfully.
May 10 00:45:11.767768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5-rootfs.mount: Deactivated successfully.
May 10 00:45:11.812723 env[1739]: time="2025-05-10T00:45:11.812669655Z" level=info msg="shim disconnected" id=8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5
May 10 00:45:11.812723 env[1739]: time="2025-05-10T00:45:11.812717634Z" level=warning msg="cleaning up after shim disconnected" id=8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5 namespace=k8s.io
May 10 00:45:11.813070 env[1739]: time="2025-05-10T00:45:11.812731089Z" level=info msg="cleaning up dead shim"
May 10 00:45:11.822138 env[1739]: time="2025-05-10T00:45:11.821563189Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n"
May 10 00:45:12.563887 env[1739]: time="2025-05-10T00:45:12.563843285Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:45:12.584171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644142924.mount: Deactivated successfully.
May 10 00:45:12.600651 env[1739]: time="2025-05-10T00:45:12.600603484Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\""
May 10 00:45:12.601408 env[1739]: time="2025-05-10T00:45:12.601374240Z" level=info msg="StartContainer for \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\""
May 10 00:45:12.637713 systemd[1]: Started cri-containerd-ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec.scope.
May 10 00:45:12.642093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460478837.mount: Deactivated successfully.
May 10 00:45:12.653753 env[1739]: time="2025-05-10T00:45:12.653715125Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:45:12.657366 env[1739]: time="2025-05-10T00:45:12.657326606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:45:12.662736 env[1739]: time="2025-05-10T00:45:12.662693585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:45:12.666691 env[1739]: time="2025-05-10T00:45:12.662898541Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 10 00:45:12.669881 env[1739]: time="2025-05-10T00:45:12.669758479Z" level=info msg="CreateContainer within sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 10 00:45:12.687351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617497592.mount: Deactivated successfully.
May 10 00:45:12.698598 systemd[1]: cri-containerd-ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec.scope: Deactivated successfully.
May 10 00:45:12.700178 env[1739]: time="2025-05-10T00:45:12.700123451Z" level=info msg="CreateContainer within sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\""
May 10 00:45:12.702444 env[1739]: time="2025-05-10T00:45:12.701048563Z" level=info msg="StartContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\""
May 10 00:45:12.703150 env[1739]: time="2025-05-10T00:45:12.703120858Z" level=info msg="StartContainer for \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\" returns successfully"
May 10 00:45:12.734862 systemd[1]: Started cri-containerd-d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb.scope.
May 10 00:45:12.799760 env[1739]: time="2025-05-10T00:45:12.799483553Z" level=info msg="StartContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" returns successfully"
May 10 00:45:12.836640 env[1739]: time="2025-05-10T00:45:12.836523335Z" level=info msg="shim disconnected" id=ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec
May 10 00:45:12.836640 env[1739]: time="2025-05-10T00:45:12.836576967Z" level=warning msg="cleaning up after shim disconnected" id=ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec namespace=k8s.io
May 10 00:45:12.836640 env[1739]: time="2025-05-10T00:45:12.836586516Z" level=info msg="cleaning up dead shim"
May 10 00:45:12.845448 env[1739]: time="2025-05-10T00:45:12.845399455Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3316 runtime=io.containerd.runc.v2\n"
May 10 00:45:13.573386 env[1739]: time="2025-05-10T00:45:13.573163867Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:45:13.588135 env[1739]: time="2025-05-10T00:45:13.588078829Z" level=info msg="CreateContainer within sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\""
May 10 00:45:13.588968 env[1739]: time="2025-05-10T00:45:13.588933964Z" level=info msg="StartContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\""
May 10 00:45:13.617074 systemd[1]: Started cri-containerd-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7.scope.
May 10 00:45:13.642239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec-rootfs.mount: Deactivated successfully.
May 10 00:45:13.714425 env[1739]: time="2025-05-10T00:45:13.714370263Z" level=info msg="StartContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" returns successfully"
May 10 00:45:13.744227 systemd[1]: run-containerd-runc-k8s.io-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7-runc.j174Sk.mount: Deactivated successfully.
May 10 00:45:13.791326 kubelet[2579]: I0510 00:45:13.791264 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lf797" podStartSLOduration=6.923354427 podStartE2EDuration="20.791233348s" podCreationTimestamp="2025-05-10 00:44:53 +0000 UTC" firstStartedPulling="2025-05-10 00:44:58.799479363 +0000 UTC m=+10.622295903" lastFinishedPulling="2025-05-10 00:45:12.667358271 +0000 UTC m=+24.490174824" observedRunningTime="2025-05-10 00:45:13.70874172 +0000 UTC m=+25.531558278" watchObservedRunningTime="2025-05-10 00:45:13.791233348 +0000 UTC m=+25.614049897"
May 10 00:45:14.200595 kubelet[2579]: I0510 00:45:14.200265 2579 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 10 00:45:14.238710 systemd[1]: Created slice kubepods-burstable-pod2d91fc56_72d9_45b9_b1ba_41e69927d346.slice.
May 10 00:45:14.246312 systemd[1]: Created slice kubepods-burstable-pod16550f9a_b6e8_4784_b38d_91ae00c30ae3.slice.
May 10 00:45:14.372415 kubelet[2579]: I0510 00:45:14.372380 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16550f9a-b6e8-4784-b38d-91ae00c30ae3-config-volume\") pod \"coredns-6f6b679f8f-kzxvk\" (UID: \"16550f9a-b6e8-4784-b38d-91ae00c30ae3\") " pod="kube-system/coredns-6f6b679f8f-kzxvk"
May 10 00:45:14.372705 kubelet[2579]: I0510 00:45:14.372684 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8h7\" (UniqueName: \"kubernetes.io/projected/16550f9a-b6e8-4784-b38d-91ae00c30ae3-kube-api-access-zk8h7\") pod \"coredns-6f6b679f8f-kzxvk\" (UID: \"16550f9a-b6e8-4784-b38d-91ae00c30ae3\") " pod="kube-system/coredns-6f6b679f8f-kzxvk"
May 10 00:45:14.372857 kubelet[2579]: I0510 00:45:14.372843 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xzh\" (UniqueName: \"kubernetes.io/projected/2d91fc56-72d9-45b9-b1ba-41e69927d346-kube-api-access-c6xzh\") pod \"coredns-6f6b679f8f-wzkd7\" (UID: \"2d91fc56-72d9-45b9-b1ba-41e69927d346\") " pod="kube-system/coredns-6f6b679f8f-wzkd7"
May 10 00:45:14.372966 kubelet[2579]: I0510 00:45:14.372953 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d91fc56-72d9-45b9-b1ba-41e69927d346-config-volume\") pod \"coredns-6f6b679f8f-wzkd7\" (UID: \"2d91fc56-72d9-45b9-b1ba-41e69927d346\") " pod="kube-system/coredns-6f6b679f8f-wzkd7"
May 10 00:45:14.545970 env[1739]: time="2025-05-10T00:45:14.545911564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzkd7,Uid:2d91fc56-72d9-45b9-b1ba-41e69927d346,Namespace:kube-system,Attempt:0,}"
May 10 00:45:14.551604 env[1739]: time="2025-05-10T00:45:14.551478973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kzxvk,Uid:16550f9a-b6e8-4784-b38d-91ae00c30ae3,Namespace:kube-system,Attempt:0,}"
May 10 00:45:20.973942 systemd-networkd[1465]: cilium_host: Link UP
May 10 00:45:20.976996 (udev-worker)[3448]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:45:20.981181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 10 00:45:20.979188 systemd-networkd[1465]: cilium_net: Link UP
May 10 00:45:20.979193 systemd-networkd[1465]: cilium_net: Gained carrier
May 10 00:45:20.979452 systemd-networkd[1465]: cilium_host: Gained carrier
May 10 00:45:20.979809 systemd-networkd[1465]: cilium_host: Gained IPv6LL
May 10 00:45:20.982698 (udev-worker)[3484]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:45:21.292296 systemd-networkd[1465]: cilium_vxlan: Link UP
May 10 00:45:21.292306 systemd-networkd[1465]: cilium_vxlan: Gained carrier
May 10 00:45:21.774323 systemd-networkd[1465]: cilium_net: Gained IPv6LL
May 10 00:45:22.798246 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL
May 10 00:45:25.210538 kernel: NET: Registered PF_ALG protocol family
May 10 00:45:26.388062 (udev-worker)[3496]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:45:26.389822 systemd-networkd[1465]: lxc_health: Link UP
May 10 00:45:26.412155 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:45:26.412520 systemd-networkd[1465]: lxc_health: Gained carrier
May 10 00:45:26.599780 systemd[1]: Started sshd@5-172.31.16.44:22-139.178.89.65:54824.service.
May 10 00:45:26.610584 kubelet[2579]: I0510 00:45:26.610518 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m5w5p" podStartSLOduration=22.710707827 podStartE2EDuration="33.610453576s" podCreationTimestamp="2025-05-10 00:44:53 +0000 UTC" firstStartedPulling="2025-05-10 00:44:58.727847543 +0000 UTC m=+10.550664090" lastFinishedPulling="2025-05-10 00:45:09.627593303 +0000 UTC m=+21.450409839" observedRunningTime="2025-05-10 00:45:14.600497102 +0000 UTC m=+26.423313677" watchObservedRunningTime="2025-05-10 00:45:26.610453576 +0000 UTC m=+38.433270131"
May 10 00:45:26.702540 (udev-worker)[3812]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:45:26.704624 systemd-networkd[1465]: lxc73fa1607f539: Link UP
May 10 00:45:26.721131 kernel: eth0: renamed from tmpb9605
May 10 00:45:26.725641 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc73fa1607f539: link becomes ready
May 10 00:45:26.724784 systemd-networkd[1465]: lxc73fa1607f539: Gained carrier
May 10 00:45:26.734879 systemd-networkd[1465]: lxc67cfeb4e3465: Link UP
May 10 00:45:26.739291 (udev-worker)[3495]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:45:26.741484 kernel: eth0: renamed from tmp3ead4
May 10 00:45:26.748149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc67cfeb4e3465: link becomes ready
May 10 00:45:26.748085 systemd-networkd[1465]: lxc67cfeb4e3465: Gained carrier
May 10 00:45:26.806082 sshd[3823]: Accepted publickey for core from 139.178.89.65 port 54824 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:26.807320 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:26.817100 systemd-logind[1730]: New session 6 of user core.
May 10 00:45:26.818005 systemd[1]: Started session-6.scope.
May 10 00:45:27.236573 sshd[3823]: pam_unix(sshd:session): session closed for user core
May 10 00:45:27.240370 systemd-logind[1730]: Session 6 logged out. Waiting for processes to exit.
May 10 00:45:27.240861 systemd[1]: sshd@5-172.31.16.44:22-139.178.89.65:54824.service: Deactivated successfully.
May 10 00:45:27.241851 systemd[1]: session-6.scope: Deactivated successfully.
May 10 00:45:27.244282 systemd-logind[1730]: Removed session 6.
May 10 00:45:28.430234 systemd-networkd[1465]: lxc_health: Gained IPv6LL
May 10 00:45:28.494210 systemd-networkd[1465]: lxc67cfeb4e3465: Gained IPv6LL
May 10 00:45:28.560136 systemd-networkd[1465]: lxc73fa1607f539: Gained IPv6LL
May 10 00:45:31.436100 env[1739]: time="2025-05-10T00:45:31.434066593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:45:31.436100 env[1739]: time="2025-05-10T00:45:31.434126356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:45:31.436100 env[1739]: time="2025-05-10T00:45:31.434143754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:45:31.436100 env[1739]: time="2025-05-10T00:45:31.434463109Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9605cda9a47d66ac4d98d652b1e2c5dc47c8e2168fbb9ff2fb759592ad9ce77 pid=3877 runtime=io.containerd.runc.v2
May 10 00:45:31.456581 env[1739]: time="2025-05-10T00:45:31.456461033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:45:31.456885 env[1739]: time="2025-05-10T00:45:31.456843393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:45:31.457013 env[1739]: time="2025-05-10T00:45:31.456989776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:45:31.460108 env[1739]: time="2025-05-10T00:45:31.459278518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ead42dca18c203cbb9c8a3f05449620b58e944ca414a9397c0953ccd119568f pid=3888 runtime=io.containerd.runc.v2
May 10 00:45:31.489089 systemd[1]: Started cri-containerd-b9605cda9a47d66ac4d98d652b1e2c5dc47c8e2168fbb9ff2fb759592ad9ce77.scope.
May 10 00:45:31.519804 systemd[1]: Started cri-containerd-3ead42dca18c203cbb9c8a3f05449620b58e944ca414a9397c0953ccd119568f.scope.
May 10 00:45:31.588662 env[1739]: time="2025-05-10T00:45:31.588611516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzkd7,Uid:2d91fc56-72d9-45b9-b1ba-41e69927d346,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9605cda9a47d66ac4d98d652b1e2c5dc47c8e2168fbb9ff2fb759592ad9ce77\""
May 10 00:45:31.596880 env[1739]: time="2025-05-10T00:45:31.596824824Z" level=info msg="CreateContainer within sandbox \"b9605cda9a47d66ac4d98d652b1e2c5dc47c8e2168fbb9ff2fb759592ad9ce77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:45:31.636746 env[1739]: time="2025-05-10T00:45:31.636683396Z" level=info msg="CreateContainer within sandbox \"b9605cda9a47d66ac4d98d652b1e2c5dc47c8e2168fbb9ff2fb759592ad9ce77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce79a87d30bc8f5b731bc59d4b142293d6c8845a1ee4a537911794168d21764c\""
May 10 00:45:31.637577 env[1739]: time="2025-05-10T00:45:31.637543478Z" level=info msg="StartContainer for \"ce79a87d30bc8f5b731bc59d4b142293d6c8845a1ee4a537911794168d21764c\""
May 10 00:45:31.646846 env[1739]: time="2025-05-10T00:45:31.646796183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kzxvk,Uid:16550f9a-b6e8-4784-b38d-91ae00c30ae3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ead42dca18c203cbb9c8a3f05449620b58e944ca414a9397c0953ccd119568f\""
May 10 00:45:31.651863 env[1739]: time="2025-05-10T00:45:31.651821186Z" level=info msg="CreateContainer within sandbox \"3ead42dca18c203cbb9c8a3f05449620b58e944ca414a9397c0953ccd119568f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:45:31.674916 env[1739]: time="2025-05-10T00:45:31.674867719Z" level=info msg="CreateContainer within sandbox \"3ead42dca18c203cbb9c8a3f05449620b58e944ca414a9397c0953ccd119568f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5446f748bd0e80f2e599bd75c7268b01f2ef589e6ca69d068dcb6af6988a113\""
May 10 00:45:31.676782 env[1739]: time="2025-05-10T00:45:31.676741206Z" level=info msg="StartContainer for \"f5446f748bd0e80f2e599bd75c7268b01f2ef589e6ca69d068dcb6af6988a113\""
May 10 00:45:31.683513 systemd[1]: Started cri-containerd-ce79a87d30bc8f5b731bc59d4b142293d6c8845a1ee4a537911794168d21764c.scope.
May 10 00:45:31.716758 systemd[1]: Started cri-containerd-f5446f748bd0e80f2e599bd75c7268b01f2ef589e6ca69d068dcb6af6988a113.scope.
May 10 00:45:31.785602 env[1739]: time="2025-05-10T00:45:31.785412579Z" level=info msg="StartContainer for \"ce79a87d30bc8f5b731bc59d4b142293d6c8845a1ee4a537911794168d21764c\" returns successfully"
May 10 00:45:31.791794 env[1739]: time="2025-05-10T00:45:31.791749433Z" level=info msg="StartContainer for \"f5446f748bd0e80f2e599bd75c7268b01f2ef589e6ca69d068dcb6af6988a113\" returns successfully"
May 10 00:45:32.261567 systemd[1]: Started sshd@6-172.31.16.44:22-139.178.89.65:54832.service.
May 10 00:45:32.445060 sshd[4016]: Accepted publickey for core from 139.178.89.65 port 54832 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:32.445916 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:32.446591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049484913.mount: Deactivated successfully.
May 10 00:45:32.453107 systemd-logind[1730]: New session 7 of user core.
May 10 00:45:32.455160 systemd[1]: Started session-7.scope.
May 10 00:45:32.706313 kubelet[2579]: I0510 00:45:32.706010 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wzkd7" podStartSLOduration=39.705990824 podStartE2EDuration="39.705990824s" podCreationTimestamp="2025-05-10 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:45:32.663016763 +0000 UTC m=+44.485833349" watchObservedRunningTime="2025-05-10 00:45:32.705990824 +0000 UTC m=+44.528807381"
May 10 00:45:32.758099 sshd[4016]: pam_unix(sshd:session): session closed for user core
May 10 00:45:32.762118 systemd[1]: sshd@6-172.31.16.44:22-139.178.89.65:54832.service: Deactivated successfully.
May 10 00:45:32.762852 systemd[1]: session-7.scope: Deactivated successfully.
May 10 00:45:32.763792 systemd-logind[1730]: Session 7 logged out. Waiting for processes to exit.
May 10 00:45:32.764903 systemd-logind[1730]: Removed session 7.
May 10 00:45:33.647129 kubelet[2579]: I0510 00:45:33.647074 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kzxvk" podStartSLOduration=39.647056774 podStartE2EDuration="39.647056774s" podCreationTimestamp="2025-05-10 00:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:45:32.70686917 +0000 UTC m=+44.529685727" watchObservedRunningTime="2025-05-10 00:45:33.647056774 +0000 UTC m=+45.469873325"
May 10 00:45:37.785659 systemd[1]: Started sshd@7-172.31.16.44:22-139.178.89.65:44436.service.
May 10 00:45:37.972197 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 44436 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:37.974264 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:37.980871 systemd[1]: Started session-8.scope.
May 10 00:45:37.981681 systemd-logind[1730]: New session 8 of user core.
May 10 00:45:38.254022 sshd[4047]: pam_unix(sshd:session): session closed for user core
May 10 00:45:38.258372 systemd[1]: sshd@7-172.31.16.44:22-139.178.89.65:44436.service: Deactivated successfully.
May 10 00:45:38.259139 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:45:38.259914 systemd-logind[1730]: Session 8 logged out. Waiting for processes to exit.
May 10 00:45:38.260862 systemd-logind[1730]: Removed session 8.
May 10 00:45:43.281594 systemd[1]: Started sshd@8-172.31.16.44:22-139.178.89.65:44446.service.
May 10 00:45:43.440728 sshd[4061]: Accepted publickey for core from 139.178.89.65 port 44446 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:43.442187 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:43.448221 systemd[1]: Started session-9.scope.
May 10 00:45:43.448710 systemd-logind[1730]: New session 9 of user core.
May 10 00:45:43.655274 sshd[4061]: pam_unix(sshd:session): session closed for user core
May 10 00:45:43.659114 systemd[1]: sshd@8-172.31.16.44:22-139.178.89.65:44446.service: Deactivated successfully.
May 10 00:45:43.659126 systemd-logind[1730]: Session 9 logged out. Waiting for processes to exit.
May 10 00:45:43.660277 systemd[1]: session-9.scope: Deactivated successfully.
May 10 00:45:43.661663 systemd-logind[1730]: Removed session 9.
May 10 00:45:48.683028 systemd[1]: Started sshd@9-172.31.16.44:22-139.178.89.65:43320.service.
May 10 00:45:48.845400 sshd[4075]: Accepted publickey for core from 139.178.89.65 port 43320 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:48.846883 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:48.854014 systemd[1]: Started session-10.scope.
May 10 00:45:48.854742 systemd-logind[1730]: New session 10 of user core.
May 10 00:45:49.051480 sshd[4075]: pam_unix(sshd:session): session closed for user core
May 10 00:45:49.055519 systemd[1]: sshd@9-172.31.16.44:22-139.178.89.65:43320.service: Deactivated successfully.
May 10 00:45:49.056479 systemd[1]: session-10.scope: Deactivated successfully.
May 10 00:45:49.057332 systemd-logind[1730]: Session 10 logged out. Waiting for processes to exit.
May 10 00:45:49.058359 systemd-logind[1730]: Removed session 10.
May 10 00:45:49.077578 systemd[1]: Started sshd@10-172.31.16.44:22-139.178.89.65:43324.service.
May 10 00:45:49.236049 sshd[4087]: Accepted publickey for core from 139.178.89.65 port 43324 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:49.237739 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:49.243120 systemd-logind[1730]: New session 11 of user core.
May 10 00:45:49.243656 systemd[1]: Started session-11.scope.
May 10 00:45:49.516745 sshd[4087]: pam_unix(sshd:session): session closed for user core
May 10 00:45:49.521746 systemd-logind[1730]: Session 11 logged out. Waiting for processes to exit.
May 10 00:45:49.523108 systemd[1]: sshd@10-172.31.16.44:22-139.178.89.65:43324.service: Deactivated successfully.
May 10 00:45:49.524121 systemd[1]: session-11.scope: Deactivated successfully.
May 10 00:45:49.524712 systemd-logind[1730]: Removed session 11.
May 10 00:45:49.547516 systemd[1]: Started sshd@11-172.31.16.44:22-139.178.89.65:43326.service.
May 10 00:45:49.707165 sshd[4097]: Accepted publickey for core from 139.178.89.65 port 43326 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:49.709152 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:49.713927 systemd-logind[1730]: New session 12 of user core.
May 10 00:45:49.714763 systemd[1]: Started session-12.scope.
May 10 00:45:49.914690 sshd[4097]: pam_unix(sshd:session): session closed for user core
May 10 00:45:49.918697 systemd-logind[1730]: Session 12 logged out. Waiting for processes to exit.
May 10 00:45:49.918924 systemd[1]: sshd@11-172.31.16.44:22-139.178.89.65:43326.service: Deactivated successfully.
May 10 00:45:49.920107 systemd[1]: session-12.scope: Deactivated successfully.
May 10 00:45:49.921102 systemd-logind[1730]: Removed session 12.
May 10 00:45:54.940647 systemd[1]: Started sshd@12-172.31.16.44:22-139.178.89.65:43330.service.
May 10 00:45:55.097629 sshd[4109]: Accepted publickey for core from 139.178.89.65 port 43330 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:45:55.099228 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:45:55.104867 systemd[1]: Started session-13.scope.
May 10 00:45:55.105576 systemd-logind[1730]: New session 13 of user core.
May 10 00:45:55.293932 sshd[4109]: pam_unix(sshd:session): session closed for user core
May 10 00:45:55.297650 systemd-logind[1730]: Session 13 logged out. Waiting for processes to exit.
May 10 00:45:55.297875 systemd[1]: sshd@12-172.31.16.44:22-139.178.89.65:43330.service: Deactivated successfully.
May 10 00:45:55.298874 systemd[1]: session-13.scope: Deactivated successfully.
May 10 00:45:55.300028 systemd-logind[1730]: Removed session 13.
May 10 00:46:00.322107 systemd[1]: Started sshd@13-172.31.16.44:22-139.178.89.65:55372.service.
May 10 00:46:00.489853 sshd[4121]: Accepted publickey for core from 139.178.89.65 port 55372 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:00.491747 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:00.496904 systemd-logind[1730]: New session 14 of user core.
May 10 00:46:00.497612 systemd[1]: Started session-14.scope.
May 10 00:46:00.693902 sshd[4121]: pam_unix(sshd:session): session closed for user core
May 10 00:46:00.696976 systemd[1]: sshd@13-172.31.16.44:22-139.178.89.65:55372.service: Deactivated successfully.
May 10 00:46:00.697706 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:46:00.698261 systemd-logind[1730]: Session 14 logged out. Waiting for processes to exit.
May 10 00:46:00.698961 systemd-logind[1730]: Removed session 14.
May 10 00:46:00.719888 systemd[1]: Started sshd@14-172.31.16.44:22-139.178.89.65:55374.service.
May 10 00:46:00.878721 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 55374 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:00.880512 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:00.886285 systemd-logind[1730]: New session 15 of user core.
May 10 00:46:00.886699 systemd[1]: Started session-15.scope.
May 10 00:46:02.201109 sshd[4132]: pam_unix(sshd:session): session closed for user core
May 10 00:46:02.216602 systemd[1]: sshd@14-172.31.16.44:22-139.178.89.65:55374.service: Deactivated successfully.
May 10 00:46:02.217857 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:46:02.233958 systemd-logind[1730]: Session 15 logged out. Waiting for processes to exit.
May 10 00:46:02.240829 systemd[1]: Started sshd@15-172.31.16.44:22-139.178.89.65:55388.service.
May 10 00:46:02.246194 systemd-logind[1730]: Removed session 15.
May 10 00:46:02.454185 sshd[4141]: Accepted publickey for core from 139.178.89.65 port 55388 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:02.456548 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:02.463811 systemd[1]: Started session-16.scope.
May 10 00:46:02.464209 systemd-logind[1730]: New session 16 of user core.
May 10 00:46:04.483906 systemd[1]: Started sshd@16-172.31.16.44:22-139.178.89.65:55398.service.
May 10 00:46:04.525404 sshd[4141]: pam_unix(sshd:session): session closed for user core
May 10 00:46:04.529340 systemd[1]: sshd@15-172.31.16.44:22-139.178.89.65:55388.service: Deactivated successfully.
May 10 00:46:04.530448 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:46:04.530924 systemd-logind[1730]: Session 16 logged out. Waiting for processes to exit.
May 10 00:46:04.531946 systemd-logind[1730]: Removed session 16.
May 10 00:46:04.654875 sshd[4180]: Accepted publickey for core from 139.178.89.65 port 55398 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:04.656618 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:04.662726 systemd[1]: Started session-17.scope.
May 10 00:46:04.663757 systemd-logind[1730]: New session 17 of user core.
May 10 00:46:05.113961 sshd[4180]: pam_unix(sshd:session): session closed for user core
May 10 00:46:05.117488 systemd-logind[1730]: Session 17 logged out. Waiting for processes to exit.
May 10 00:46:05.117603 systemd[1]: sshd@16-172.31.16.44:22-139.178.89.65:55398.service: Deactivated successfully.
May 10 00:46:05.118271 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:46:05.119887 systemd-logind[1730]: Removed session 17.
May 10 00:46:05.140792 systemd[1]: Started sshd@17-172.31.16.44:22-139.178.89.65:55414.service.
May 10 00:46:05.303647 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 55414 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:05.305396 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:05.310393 systemd[1]: Started session-18.scope.
May 10 00:46:05.310736 systemd-logind[1730]: New session 18 of user core.
May 10 00:46:05.522672 sshd[4190]: pam_unix(sshd:session): session closed for user core
May 10 00:46:05.525835 systemd[1]: sshd@17-172.31.16.44:22-139.178.89.65:55414.service: Deactivated successfully.
May 10 00:46:05.526568 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:46:05.527313 systemd-logind[1730]: Session 18 logged out. Waiting for processes to exit.
May 10 00:46:05.528318 systemd-logind[1730]: Removed session 18.
May 10 00:46:10.559170 systemd[1]: Started sshd@18-172.31.16.44:22-139.178.89.65:54628.service.
May 10 00:46:10.710267 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 54628 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:10.711925 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:10.717676 systemd[1]: Started session-19.scope.
May 10 00:46:10.718430 systemd-logind[1730]: New session 19 of user core.
May 10 00:46:10.916730 sshd[4204]: pam_unix(sshd:session): session closed for user core
May 10 00:46:10.920046 systemd[1]: sshd@18-172.31.16.44:22-139.178.89.65:54628.service: Deactivated successfully.
May 10 00:46:10.920764 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:46:10.921423 systemd-logind[1730]: Session 19 logged out. Waiting for processes to exit.
May 10 00:46:10.922229 systemd-logind[1730]: Removed session 19.
May 10 00:46:15.942731 systemd[1]: Started sshd@19-172.31.16.44:22-139.178.89.65:54638.service.
May 10 00:46:16.104212 sshd[4216]: Accepted publickey for core from 139.178.89.65 port 54638 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:16.106072 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:16.111489 systemd[1]: Started session-20.scope.
May 10 00:46:16.112017 systemd-logind[1730]: New session 20 of user core.
May 10 00:46:16.305281 sshd[4216]: pam_unix(sshd:session): session closed for user core
May 10 00:46:16.308915 systemd[1]: sshd@19-172.31.16.44:22-139.178.89.65:54638.service: Deactivated successfully.
May 10 00:46:16.309922 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:46:16.310673 systemd-logind[1730]: Session 20 logged out. Waiting for processes to exit.
May 10 00:46:16.311834 systemd-logind[1730]: Removed session 20.
May 10 00:46:21.333157 systemd[1]: Started sshd@20-172.31.16.44:22-139.178.89.65:41756.service.
May 10 00:46:21.491442 sshd[4228]: Accepted publickey for core from 139.178.89.65 port 41756 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:21.493523 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:21.500397 systemd[1]: Started session-21.scope.
May 10 00:46:21.501077 systemd-logind[1730]: New session 21 of user core.
May 10 00:46:21.690441 sshd[4228]: pam_unix(sshd:session): session closed for user core
May 10 00:46:21.693742 systemd[1]: sshd@20-172.31.16.44:22-139.178.89.65:41756.service: Deactivated successfully.
May 10 00:46:21.694506 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:46:21.695699 systemd-logind[1730]: Session 21 logged out. Waiting for processes to exit.
May 10 00:46:21.696570 systemd-logind[1730]: Removed session 21.
May 10 00:46:26.718183 systemd[1]: Started sshd@21-172.31.16.44:22-139.178.89.65:34852.service.
May 10 00:46:26.882088 sshd[4240]: Accepted publickey for core from 139.178.89.65 port 34852 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:26.883657 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:26.888232 systemd-logind[1730]: New session 22 of user core.
May 10 00:46:26.888707 systemd[1]: Started session-22.scope.
May 10 00:46:27.092106 sshd[4240]: pam_unix(sshd:session): session closed for user core
May 10 00:46:27.095021 systemd[1]: sshd@21-172.31.16.44:22-139.178.89.65:34852.service: Deactivated successfully.
May 10 00:46:27.095726 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:46:27.096444 systemd-logind[1730]: Session 22 logged out. Waiting for processes to exit.
May 10 00:46:27.097195 systemd-logind[1730]: Removed session 22.
May 10 00:46:27.117899 systemd[1]: Started sshd@22-172.31.16.44:22-139.178.89.65:34860.service.
May 10 00:46:27.278236 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 34860 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:27.280056 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:27.286153 systemd-logind[1730]: New session 23 of user core.
May 10 00:46:27.286841 systemd[1]: Started session-23.scope.
May 10 00:46:36.968645 env[1739]: time="2025-05-10T00:46:36.968406041Z" level=info msg="StopContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" with timeout 30 (s)"
May 10 00:46:36.969973 env[1739]: time="2025-05-10T00:46:36.969362480Z" level=info msg="Stop container \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" with signal terminated"
May 10 00:46:37.019655 systemd[1]: run-containerd-runc-k8s.io-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7-runc.cviZAt.mount: Deactivated successfully.
May 10 00:46:37.083633 systemd[1]: cri-containerd-d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb.scope: Deactivated successfully.
May 10 00:46:37.113817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb-rootfs.mount: Deactivated successfully.
May 10 00:46:37.117027 env[1739]: time="2025-05-10T00:46:37.116965546Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:46:37.128399 env[1739]: time="2025-05-10T00:46:37.128349637Z" level=info msg="StopContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" with timeout 2 (s)"
May 10 00:46:37.128762 env[1739]: time="2025-05-10T00:46:37.128710903Z" level=info msg="Stop container \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" with signal terminated"
May 10 00:46:37.131906 env[1739]: time="2025-05-10T00:46:37.131844988Z" level=info msg="shim disconnected" id=d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb
May 10 00:46:37.131906 env[1739]: time="2025-05-10T00:46:37.131889508Z" level=warning msg="cleaning up after shim disconnected" id=d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb namespace=k8s.io
May 10 00:46:37.131906 env[1739]: time="2025-05-10T00:46:37.131902903Z" level=info msg="cleaning up dead shim"
May 10 00:46:37.138803 systemd-networkd[1465]: lxc_health: Link DOWN
May 10 00:46:37.138813 systemd-networkd[1465]: lxc_health: Lost carrier
May 10 00:46:37.154516 env[1739]: time="2025-05-10T00:46:37.154413635Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4302 runtime=io.containerd.runc.v2\n"
May 10 00:46:37.159227 env[1739]: time="2025-05-10T00:46:37.159011503Z" level=info msg="StopContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" returns successfully"
May 10 00:46:37.159828 env[1739]: time="2025-05-10T00:46:37.159794856Z" level=info msg="StopPodSandbox for \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\""
May 10 00:46:37.161828 env[1739]: time="2025-05-10T00:46:37.159856776Z" level=info msg="Container to stop \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.161820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02-shm.mount: Deactivated successfully.
May 10 00:46:37.173567 systemd[1]: cri-containerd-fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02.scope: Deactivated successfully.
May 10 00:46:37.192650 systemd[1]: cri-containerd-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7.scope: Deactivated successfully.
May 10 00:46:37.192882 systemd[1]: cri-containerd-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7.scope: Consumed 8.319s CPU time.
May 10 00:46:37.205758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02-rootfs.mount: Deactivated successfully.
May 10 00:46:37.225833 env[1739]: time="2025-05-10T00:46:37.225201634Z" level=info msg="shim disconnected" id=fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02
May 10 00:46:37.226116 env[1739]: time="2025-05-10T00:46:37.226084798Z" level=warning msg="cleaning up after shim disconnected" id=fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02 namespace=k8s.io
May 10 00:46:37.226116 env[1739]: time="2025-05-10T00:46:37.226109954Z" level=info msg="cleaning up dead shim"
May 10 00:46:37.226837 env[1739]: time="2025-05-10T00:46:37.226802664Z" level=info msg="shim disconnected" id=b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7
May 10 00:46:37.226972 env[1739]: time="2025-05-10T00:46:37.226839163Z" level=warning msg="cleaning up after shim disconnected" id=b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7 namespace=k8s.io
May 10 00:46:37.226972 env[1739]: time="2025-05-10T00:46:37.226847426Z" level=info msg="cleaning up dead shim"
May 10 00:46:37.235294 env[1739]: time="2025-05-10T00:46:37.235253947Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4354 runtime=io.containerd.runc.v2\n"
May 10 00:46:37.237727 env[1739]: time="2025-05-10T00:46:37.237686396Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4353 runtime=io.containerd.runc.v2\n"
May 10 00:46:37.238455 env[1739]: time="2025-05-10T00:46:37.238422387Z" level=info msg="TearDown network for sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" successfully"
May 10 00:46:37.238543 env[1739]: time="2025-05-10T00:46:37.238455119Z" level=info msg="StopPodSandbox for \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" returns successfully"
May 10 00:46:37.239842 env[1739]: time="2025-05-10T00:46:37.239433607Z" level=info msg="StopContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" returns successfully"
May 10 00:46:37.240312 env[1739]: time="2025-05-10T00:46:37.240284730Z" level=info msg="StopPodSandbox for \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\""
May 10 00:46:37.240420 env[1739]: time="2025-05-10T00:46:37.240347052Z" level=info msg="Container to stop \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.240420 env[1739]: time="2025-05-10T00:46:37.240368761Z" level=info msg="Container to stop \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.240420 env[1739]: time="2025-05-10T00:46:37.240387803Z" level=info msg="Container to stop \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.241329 env[1739]: time="2025-05-10T00:46:37.240403534Z" level=info msg="Container to stop \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.241329 env[1739]: time="2025-05-10T00:46:37.241196744Z" level=info msg="Container to stop \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:37.251164 systemd[1]: cri-containerd-e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f.scope: Deactivated successfully.
May 10 00:46:37.263072 kubelet[2579]: I0510 00:46:37.262528 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e70d08-0a53-4b02-9fb4-ba63e73411e2-cilium-config-path\") pod \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\" (UID: \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\") "
May 10 00:46:37.263072 kubelet[2579]: I0510 00:46:37.262637 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llljd\" (UniqueName: \"kubernetes.io/projected/11e70d08-0a53-4b02-9fb4-ba63e73411e2-kube-api-access-llljd\") pod \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\" (UID: \"11e70d08-0a53-4b02-9fb4-ba63e73411e2\") "
May 10 00:46:37.273799 kubelet[2579]: I0510 00:46:37.271823 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e70d08-0a53-4b02-9fb4-ba63e73411e2-kube-api-access-llljd" (OuterVolumeSpecName: "kube-api-access-llljd") pod "11e70d08-0a53-4b02-9fb4-ba63e73411e2" (UID: "11e70d08-0a53-4b02-9fb4-ba63e73411e2"). InnerVolumeSpecName "kube-api-access-llljd". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:46:37.276220 kubelet[2579]: I0510 00:46:37.268693 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e70d08-0a53-4b02-9fb4-ba63e73411e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11e70d08-0a53-4b02-9fb4-ba63e73411e2" (UID: "11e70d08-0a53-4b02-9fb4-ba63e73411e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:46:37.298952 env[1739]: time="2025-05-10T00:46:37.298907557Z" level=info msg="shim disconnected" id=e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f
May 10 00:46:37.298952 env[1739]: time="2025-05-10T00:46:37.298952003Z" level=warning msg="cleaning up after shim disconnected" id=e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f namespace=k8s.io
May 10 00:46:37.298952 env[1739]: time="2025-05-10T00:46:37.298960850Z" level=info msg="cleaning up dead shim"
May 10 00:46:37.307720 env[1739]: time="2025-05-10T00:46:37.307675380Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4399 runtime=io.containerd.runc.v2\n"
May 10 00:46:37.308024 env[1739]: time="2025-05-10T00:46:37.307997126Z" level=info msg="TearDown network for sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" successfully"
May 10 00:46:37.308123 env[1739]: time="2025-05-10T00:46:37.308024362Z" level=info msg="StopPodSandbox for \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" returns successfully"
May 10 00:46:37.362917 kubelet[2579]: I0510 00:46:37.362875 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-net\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.362926 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hubble-tls\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.362947 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-config-path\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.362964 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cni-path\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.362979 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-xtables-lock\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.362992 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-etc-cni-netd\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363117 kubelet[2579]: I0510 00:46:37.363006 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hostproc\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363020 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-cgroup\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363056 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-run\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363072 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-bpf-maps\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363090 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-kernel\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363107 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fbtj\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-kube-api-access-8fbtj\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363392 kubelet[2579]: I0510 00:46:37.363123 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-clustermesh-secrets\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363553 kubelet[2579]: I0510 00:46:37.363138 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-lib-modules\") pod \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\" (UID: \"0efd06ce-a961-4d40-84e3-2dfa6b234ac5\") "
May 10 00:46:37.363553 kubelet[2579]: I0510 00:46:37.363176 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-llljd\" (UniqueName: \"kubernetes.io/projected/11e70d08-0a53-4b02-9fb4-ba63e73411e2-kube-api-access-llljd\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.363553 kubelet[2579]: I0510 00:46:37.363186 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e70d08-0a53-4b02-9fb4-ba63e73411e2-cilium-config-path\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.363553 kubelet[2579]: I0510 00:46:37.363229 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.363553 kubelet[2579]: I0510 00:46:37.363262 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.363684 kubelet[2579]: I0510 00:46:37.363573 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365695 kubelet[2579]: I0510 00:46:37.365665 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:46:37.365837 kubelet[2579]: I0510 00:46:37.365749 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cni-path" (OuterVolumeSpecName: "cni-path") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365837 kubelet[2579]: I0510 00:46:37.365767 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365837 kubelet[2579]: I0510 00:46:37.365792 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365837 kubelet[2579]: I0510 00:46:37.365804 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hostproc" (OuterVolumeSpecName: "hostproc") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365837 kubelet[2579]: I0510 00:46:37.365820 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365974 kubelet[2579]: I0510 00:46:37.365835 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.365974 kubelet[2579]: I0510 00:46:37.365861 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:37.369508 kubelet[2579]: I0510 00:46:37.369477 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-kube-api-access-8fbtj" (OuterVolumeSpecName: "kube-api-access-8fbtj") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "kube-api-access-8fbtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:46:37.371261 kubelet[2579]: I0510 00:46:37.371236 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:46:37.372179 kubelet[2579]: I0510 00:46:37.372152 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0efd06ce-a961-4d40-84e3-2dfa6b234ac5" (UID: "0efd06ce-a961-4d40-84e3-2dfa6b234ac5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:46:37.463639 kubelet[2579]: I0510 00:46:37.463593 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-net\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463639 kubelet[2579]: I0510 00:46:37.463632 2579 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hubble-tls\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463639 kubelet[2579]: I0510 00:46:37.463643 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-config-path\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463639 kubelet[2579]: I0510 00:46:37.463653 2579 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-etc-cni-netd\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463639 kubelet[2579]: I0510 00:46:37.463663 2579 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cni-path\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463672 2579 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-xtables-lock\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463680 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-cgroup\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463688 2579 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-hostproc\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463696 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-cilium-run\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463703 2579 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-bpf-maps\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463711 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-host-proc-sys-kernel\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463732 2579 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-clustermesh-secrets\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.463924 kubelet[2579]: I0510 00:46:37.463742 2579 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-lib-modules\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.465170 kubelet[2579]: I0510 00:46:37.463750 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8fbtj\" (UniqueName: \"kubernetes.io/projected/0efd06ce-a961-4d40-84e3-2dfa6b234ac5-kube-api-access-8fbtj\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:37.779259 kubelet[2579]: I0510 00:46:37.779021 2579 scope.go:117] "RemoveContainer" containerID="d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb"
May 10 00:46:37.779926 systemd[1]: Removed slice kubepods-besteffort-pod11e70d08_0a53_4b02_9fb4_ba63e73411e2.slice.
May 10 00:46:37.783182 env[1739]: time="2025-05-10T00:46:37.782501948Z" level=info msg="RemoveContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\""
May 10 00:46:37.788561 env[1739]: time="2025-05-10T00:46:37.788504793Z" level=info msg="RemoveContainer for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" returns successfully"
May 10 00:46:37.790065 kubelet[2579]: I0510 00:46:37.790016 2579 scope.go:117] "RemoveContainer" containerID="d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb"
May 10 00:46:37.792267 env[1739]: time="2025-05-10T00:46:37.791790569Z" level=error msg="ContainerStatus for \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\": not found"
May 10 00:46:37.793350 kubelet[2579]: E0510 00:46:37.793129 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\": not found" containerID="d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb"
May 10 00:46:37.793609 kubelet[2579]: I0510 00:46:37.793503 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb"} err="failed to get container status \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d468ff91e528c18ca2ea36fd7aeb8da48357d39ca7a8581942cf750b2a89c0eb\": not found"
May 10 00:46:37.793724 kubelet[2579]: I0510 00:46:37.793711 2579 scope.go:117] "RemoveContainer" containerID="b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7"
May 10 00:46:37.796672 env[1739]: time="2025-05-10T00:46:37.796612966Z" level=info msg="RemoveContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\""
May 10 00:46:37.798440 systemd[1]: Removed slice kubepods-burstable-pod0efd06ce_a961_4d40_84e3_2dfa6b234ac5.slice.
May 10 00:46:37.798702 systemd[1]: kubepods-burstable-pod0efd06ce_a961_4d40_84e3_2dfa6b234ac5.slice: Consumed 8.441s CPU time.
May 10 00:46:37.805374 env[1739]: time="2025-05-10T00:46:37.803559750Z" level=info msg="RemoveContainer for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" returns successfully"
May 10 00:46:37.805515 kubelet[2579]: I0510 00:46:37.803858 2579 scope.go:117] "RemoveContainer" containerID="ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec"
May 10 00:46:37.807382 env[1739]: time="2025-05-10T00:46:37.807350597Z" level=info msg="RemoveContainer for \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\""
May 10 00:46:37.813576 env[1739]: time="2025-05-10T00:46:37.813534423Z" level=info msg="RemoveContainer for \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\" returns successfully"
May 10 00:46:37.813967 kubelet[2579]: I0510 00:46:37.813941 2579 scope.go:117] "RemoveContainer" containerID="8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5"
May 10 00:46:37.816305 env[1739]: time="2025-05-10T00:46:37.816259506Z" level=info msg="RemoveContainer for \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\""
May 10 00:46:37.822148 env[1739]: time="2025-05-10T00:46:37.822102400Z" level=info msg="RemoveContainer for \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\" returns successfully"
May 10 00:46:37.822752 kubelet[2579]: I0510 00:46:37.822732 2579 scope.go:117] "RemoveContainer" containerID="6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7"
May 10 00:46:37.825899 env[1739]: time="2025-05-10T00:46:37.825863028Z" level=info msg="RemoveContainer for \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\""
May 10 00:46:37.831208 env[1739]: time="2025-05-10T00:46:37.831167454Z" level=info msg="RemoveContainer for \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\" returns successfully"
May 10 00:46:37.831418 kubelet[2579]: I0510 00:46:37.831397 2579 scope.go:117] "RemoveContainer" containerID="e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec"
May 10 00:46:37.832488 env[1739]: time="2025-05-10T00:46:37.832462610Z" level=info msg="RemoveContainer for \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\""
May 10 00:46:37.837701 env[1739]: time="2025-05-10T00:46:37.837667824Z" level=info msg="RemoveContainer for \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\" returns successfully"
May 10 00:46:37.838021 kubelet[2579]: I0510 00:46:37.837999 2579 scope.go:117] "RemoveContainer" containerID="b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7"
May 10 00:46:37.838278 env[1739]: time="2025-05-10T00:46:37.838223670Z" level=error msg="ContainerStatus for \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\": not found"
May 10 00:46:37.838416 kubelet[2579]: E0510 00:46:37.838388 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\": not found" containerID="b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7"
May 10 00:46:37.838457 kubelet[2579]: I0510 00:46:37.838415 2579 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7"} err="failed to get container status \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7\": not found" May 10 00:46:37.838457 kubelet[2579]: I0510 00:46:37.838435 2579 scope.go:117] "RemoveContainer" containerID="ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec" May 10 00:46:37.838779 env[1739]: time="2025-05-10T00:46:37.838709751Z" level=error msg="ContainerStatus for \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\": not found" May 10 00:46:37.838871 kubelet[2579]: E0510 00:46:37.838843 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\": not found" containerID="ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec" May 10 00:46:37.838871 kubelet[2579]: I0510 00:46:37.838862 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec"} err="failed to get container status \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae4bc2da19bda9b57febc226ff72030375c1cadbc58b11d27579c8a240a97fec\": not found" May 10 00:46:37.838946 kubelet[2579]: I0510 00:46:37.838877 2579 scope.go:117] "RemoveContainer" containerID="8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5" May 10 00:46:37.839145 
env[1739]: time="2025-05-10T00:46:37.839004751Z" level=error msg="ContainerStatus for \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\": not found" May 10 00:46:37.839240 kubelet[2579]: E0510 00:46:37.839159 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\": not found" containerID="8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5" May 10 00:46:37.839240 kubelet[2579]: I0510 00:46:37.839177 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5"} err="failed to get container status \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8238aa572c9c7e295aaaa5cc2e671c0e42ca4a506d6cdb0a7cfcb1e2d37eb1d5\": not found" May 10 00:46:37.839240 kubelet[2579]: I0510 00:46:37.839191 2579 scope.go:117] "RemoveContainer" containerID="6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7" May 10 00:46:37.839347 env[1739]: time="2025-05-10T00:46:37.839302966Z" level=error msg="ContainerStatus for \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\": not found" May 10 00:46:37.839413 kubelet[2579]: E0510 00:46:37.839392 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\": 
not found" containerID="6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7" May 10 00:46:37.839495 kubelet[2579]: I0510 00:46:37.839416 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7"} err="failed to get container status \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f3e8df7e809b142949e8dcdb322edb88f9b0fda8f9a5442a83bdb80e0b892d7\": not found" May 10 00:46:37.839495 kubelet[2579]: I0510 00:46:37.839428 2579 scope.go:117] "RemoveContainer" containerID="e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec" May 10 00:46:37.839741 env[1739]: time="2025-05-10T00:46:37.839698847Z" level=error msg="ContainerStatus for \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\": not found" May 10 00:46:37.839986 kubelet[2579]: E0510 00:46:37.839929 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\": not found" containerID="e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec" May 10 00:46:37.839986 kubelet[2579]: I0510 00:46:37.839958 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec"} err="failed to get container status \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"e77dfcf2b607d88401e3341a5434025ff9db8fdfd53896e9b9f7e61008ecb3ec\": not found" May 10 
00:46:38.012090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9932e760b62dfbd0e456fb66060b330f21a460d4fd8c36be7dd2951e91ab1b7-rootfs.mount: Deactivated successfully. May 10 00:46:38.012222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f-rootfs.mount: Deactivated successfully. May 10 00:46:38.012323 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f-shm.mount: Deactivated successfully. May 10 00:46:38.012409 systemd[1]: var-lib-kubelet-pods-11e70d08\x2d0a53\x2d4b02\x2d9fb4\x2dba63e73411e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dllljd.mount: Deactivated successfully. May 10 00:46:38.012496 systemd[1]: var-lib-kubelet-pods-0efd06ce\x2da961\x2d4d40\x2d84e3\x2d2dfa6b234ac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fbtj.mount: Deactivated successfully. May 10 00:46:38.012576 systemd[1]: var-lib-kubelet-pods-0efd06ce\x2da961\x2d4d40\x2d84e3\x2d2dfa6b234ac5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:46:38.012665 systemd[1]: var-lib-kubelet-pods-0efd06ce\x2da961\x2d4d40\x2d84e3\x2d2dfa6b234ac5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:46:38.367544 kubelet[2579]: I0510 00:46:38.367488 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" path="/var/lib/kubelet/pods/0efd06ce-a961-4d40-84e3-2dfa6b234ac5/volumes"
May 10 00:46:38.368092 kubelet[2579]: I0510 00:46:38.368073 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e70d08-0a53-4b02-9fb4-ba63e73411e2" path="/var/lib/kubelet/pods/11e70d08-0a53-4b02-9fb4-ba63e73411e2/volumes"
May 10 00:46:38.491016 kubelet[2579]: E0510 00:46:38.490956 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:46:38.720491 sshd[4253]: pam_unix(sshd:session): session closed for user core
May 10 00:46:38.725643 systemd[1]: sshd@22-172.31.16.44:22-139.178.89.65:34860.service: Deactivated successfully.
May 10 00:46:38.726652 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:46:38.728144 systemd-logind[1730]: Session 23 logged out. Waiting for processes to exit.
May 10 00:46:38.729089 systemd-logind[1730]: Removed session 23.
May 10 00:46:38.745799 systemd[1]: Started sshd@23-172.31.16.44:22-139.178.89.65:46014.service.
May 10 00:46:38.922510 sshd[4419]: Accepted publickey for core from 139.178.89.65 port 46014 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:38.924182 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:38.929935 systemd[1]: Started session-24.scope.
May 10 00:46:38.930478 systemd-logind[1730]: New session 24 of user core.
May 10 00:46:39.365566 kubelet[2579]: E0510 00:46:39.365275 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:39.674518 sshd[4419]: pam_unix(sshd:session): session closed for user core
May 10 00:46:39.678915 systemd-logind[1730]: Session 24 logged out. Waiting for processes to exit.
May 10 00:46:39.681151 systemd[1]: sshd@23-172.31.16.44:22-139.178.89.65:46014.service: Deactivated successfully.
May 10 00:46:39.682106 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:46:39.684160 systemd-logind[1730]: Removed session 24.
May 10 00:46:39.685865 kubelet[2579]: E0510 00:46:39.685822 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="mount-cgroup"
May 10 00:46:39.686261 kubelet[2579]: E0510 00:46:39.685876 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="apply-sysctl-overwrites"
May 10 00:46:39.686261 kubelet[2579]: E0510 00:46:39.685886 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="mount-bpf-fs"
May 10 00:46:39.686261 kubelet[2579]: E0510 00:46:39.685894 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11e70d08-0a53-4b02-9fb4-ba63e73411e2" containerName="cilium-operator"
May 10 00:46:39.686261 kubelet[2579]: E0510 00:46:39.685904 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="clean-cilium-state"
May 10 00:46:39.686261 kubelet[2579]: E0510 00:46:39.685912 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="cilium-agent"
May 10 00:46:39.688371 kubelet[2579]: I0510 00:46:39.688335 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e70d08-0a53-4b02-9fb4-ba63e73411e2" containerName="cilium-operator"
May 10 00:46:39.688497 kubelet[2579]: I0510 00:46:39.688382 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efd06ce-a961-4d40-84e3-2dfa6b234ac5" containerName="cilium-agent"
May 10 00:46:39.702731 systemd[1]: Started sshd@24-172.31.16.44:22-139.178.89.65:46018.service.
May 10 00:46:39.718742 systemd[1]: Created slice kubepods-burstable-pod5eb93589_f7e0_4cda_8e86_dbcc39d47fbf.slice.
May 10 00:46:39.775367 kubelet[2579]: I0510 00:46:39.775314 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2k4f\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-kube-api-access-c2k4f\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775555 kubelet[2579]: I0510 00:46:39.775538 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-config-path\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775663 kubelet[2579]: I0510 00:46:39.775651 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hubble-tls\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775832 kubelet[2579]: I0510 00:46:39.775805 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hostproc\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775882 kubelet[2579]: I0510 00:46:39.775864 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-xtables-lock\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775911 kubelet[2579]: I0510 00:46:39.775882 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-run\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775911 kubelet[2579]: I0510 00:46:39.775897 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-ipsec-secrets\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775974 kubelet[2579]: I0510 00:46:39.775934 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-cgroup\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775974 kubelet[2579]: I0510 00:46:39.775953 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-etc-cni-netd\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.775974 kubelet[2579]: I0510 00:46:39.775969 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cni-path\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.776094 kubelet[2579]: I0510 00:46:39.776007 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-net\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.776094 kubelet[2579]: I0510 00:46:39.776024 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-bpf-maps\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.776094 kubelet[2579]: I0510 00:46:39.776074 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-lib-modules\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.776094 kubelet[2579]: I0510 00:46:39.776089 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-clustermesh-secrets\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.776210 kubelet[2579]: I0510 00:46:39.776103 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-kernel\") pod \"cilium-lw4wb\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") " pod="kube-system/cilium-lw4wb"
May 10 00:46:39.874869 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 46018 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:39.876735 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:39.892667 systemd-logind[1730]: New session 25 of user core.
May 10 00:46:39.893287 systemd[1]: Started session-25.scope.
May 10 00:46:40.026788 env[1739]: time="2025-05-10T00:46:40.026662518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lw4wb,Uid:5eb93589-f7e0-4cda-8e86-dbcc39d47fbf,Namespace:kube-system,Attempt:0,}"
May 10 00:46:40.081866 env[1739]: time="2025-05-10T00:46:40.081745811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:46:40.082058 env[1739]: time="2025-05-10T00:46:40.081891617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:46:40.082058 env[1739]: time="2025-05-10T00:46:40.081924066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:46:40.082379 env[1739]: time="2025-05-10T00:46:40.082316558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0 pid=4449 runtime=io.containerd.runc.v2
May 10 00:46:40.101311 systemd[1]: Started cri-containerd-9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0.scope.
May 10 00:46:40.150821 env[1739]: time="2025-05-10T00:46:40.150773237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lw4wb,Uid:5eb93589-f7e0-4cda-8e86-dbcc39d47fbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\""
May 10 00:46:40.156094 env[1739]: time="2025-05-10T00:46:40.155977917Z" level=info msg="CreateContainer within sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:46:40.180916 env[1739]: time="2025-05-10T00:46:40.180858511Z" level=info msg="CreateContainer within sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\""
May 10 00:46:40.183580 env[1739]: time="2025-05-10T00:46:40.183541526Z" level=info msg="StartContainer for \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\""
May 10 00:46:40.204081 systemd[1]: Started cri-containerd-901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f.scope.
May 10 00:46:40.226938 systemd[1]: cri-containerd-901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f.scope: Deactivated successfully.
May 10 00:46:40.260752 env[1739]: time="2025-05-10T00:46:40.260694213Z" level=info msg="shim disconnected" id=901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f
May 10 00:46:40.260997 env[1739]: time="2025-05-10T00:46:40.260761099Z" level=warning msg="cleaning up after shim disconnected" id=901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f namespace=k8s.io
May 10 00:46:40.260997 env[1739]: time="2025-05-10T00:46:40.260773383Z" level=info msg="cleaning up dead shim"
May 10 00:46:40.277363 env[1739]: time="2025-05-10T00:46:40.277264320Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4510 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:46:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 10 00:46:40.277936 env[1739]: time="2025-05-10T00:46:40.277798714Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed"
May 10 00:46:40.278867 env[1739]: time="2025-05-10T00:46:40.278822341Z" level=error msg="Failed to pipe stderr of container \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\"" error="reading from a closed fifo"
May 10 00:46:40.280435 env[1739]: time="2025-05-10T00:46:40.280390029Z" level=error msg="Failed to pipe stdout of container \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\"" error="reading from a closed fifo"
May 10 00:46:40.288241 env[1739]: time="2025-05-10T00:46:40.288177218Z" level=error msg="StartContainer for \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 10 00:46:40.290078 kubelet[2579]: E0510 00:46:40.289945 2579 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f"
May 10 00:46:40.294116 kubelet[2579]: E0510 00:46:40.294069 2579 kuberuntime_manager.go:1272] "Unhandled Error" err=<
May 10 00:46:40.294116 kubelet[2579]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 10 00:46:40.294116 kubelet[2579]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 10 00:46:40.294116 kubelet[2579]: rm /hostbin/cilium-mount
May 10 00:46:40.294359 kubelet[2579]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2k4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lw4wb_kube-system(5eb93589-f7e0-4cda-8e86-dbcc39d47fbf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 10 00:46:40.294359 kubelet[2579]: > logger="UnhandledError"
May 10 00:46:40.295258 kubelet[2579]: E0510 00:46:40.295221 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lw4wb" podUID="5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"
May 10 00:46:40.312088 sshd[4429]: pam_unix(sshd:session): session closed for user core
May 10 00:46:40.315004 systemd[1]: sshd@24-172.31.16.44:22-139.178.89.65:46018.service: Deactivated successfully.
May 10 00:46:40.315702 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:46:40.316534 systemd-logind[1730]: Session 25 logged out. Waiting for processes to exit.
May 10 00:46:40.317525 systemd-logind[1730]: Removed session 25.
May 10 00:46:40.336696 systemd[1]: Started sshd@25-172.31.16.44:22-139.178.89.65:46026.service.
May 10 00:46:40.494080 sshd[4527]: Accepted publickey for core from 139.178.89.65 port 46026 ssh2: RSA SHA256:qeBqllzRe8v74cvXiP1dOdqqawM7kzZ4c6tDX3pmCBQ
May 10 00:46:40.495496 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:46:40.501355 systemd[1]: Started session-26.scope.
May 10 00:46:40.502064 systemd-logind[1730]: New session 26 of user core.
May 10 00:46:40.652056 kubelet[2579]: I0510 00:46:40.650154 2579 setters.go:600] "Node became not ready" node="ip-172-31-16-44" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:46:40Z","lastTransitionTime":"2025-05-10T00:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:46:40.800568 env[1739]: time="2025-05-10T00:46:40.800532784Z" level=info msg="StopPodSandbox for \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\""
May 10 00:46:40.800725 env[1739]: time="2025-05-10T00:46:40.800596882Z" level=info msg="Container to stop \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:46:40.812555 systemd[1]: cri-containerd-9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0.scope: Deactivated successfully.
May 10 00:46:40.855691 env[1739]: time="2025-05-10T00:46:40.855630867Z" level=info msg="shim disconnected" id=9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0
May 10 00:46:40.855691 env[1739]: time="2025-05-10T00:46:40.855686881Z" level=warning msg="cleaning up after shim disconnected" id=9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0 namespace=k8s.io
May 10 00:46:40.855691 env[1739]: time="2025-05-10T00:46:40.855697901Z" level=info msg="cleaning up dead shim"
May 10 00:46:40.865964 env[1739]: time="2025-05-10T00:46:40.865911361Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4554 runtime=io.containerd.runc.v2\n"
May 10 00:46:40.866321 env[1739]: time="2025-05-10T00:46:40.866283136Z" level=info msg="TearDown network for sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" successfully"
May 10 00:46:40.866418 env[1739]: time="2025-05-10T00:46:40.866319376Z" level=info msg="StopPodSandbox for \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" returns successfully"
May 10 00:46:40.884602 kubelet[2579]: I0510 00:46:40.884560 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cni-path\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885129 kubelet[2579]: I0510 00:46:40.885096 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2k4f\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-kube-api-access-c2k4f\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885294 kubelet[2579]: I0510 00:46:40.885244 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-run\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885294 kubelet[2579]: I0510 00:46:40.885273 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-lib-modules\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885561 kubelet[2579]: I0510 00:46:40.885427 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-kernel\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885561 kubelet[2579]: I0510 00:46:40.885453 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-bpf-maps\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885561 kubelet[2579]: I0510 00:46:40.885498 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hubble-tls\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.885561 kubelet[2579]: I0510 00:46:40.885522 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hostproc\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885547 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-xtables-lock\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885829 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-net\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885855 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-etc-cni-netd\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885904 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-clustermesh-secrets\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885936 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-config-path\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.885975 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-cgroup\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.886029 kubelet[2579]: I0510 00:46:40.886003 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-ipsec-secrets\") pod \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\" (UID: \"5eb93589-f7e0-4cda-8e86-dbcc39d47fbf\") "
May 10 00:46:40.887743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0-rootfs.mount: Deactivated successfully.
May 10 00:46:40.887877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0-shm.mount: Deactivated successfully.
May 10 00:46:40.893164 kubelet[2579]: I0510 00:46:40.893121 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cni-path" (OuterVolumeSpecName: "cni-path") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.893786 kubelet[2579]: I0510 00:46:40.893758 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hostproc" (OuterVolumeSpecName: "hostproc") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.899864 kubelet[2579]: I0510 00:46:40.893998 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.901267 kubelet[2579]: I0510 00:46:40.894020 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.901427 kubelet[2579]: I0510 00:46:40.894051 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.901521 kubelet[2579]: I0510 00:46:40.894096 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.901622 kubelet[2579]: I0510 00:46:40.894115 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.901906 systemd[1]: var-lib-kubelet-pods-5eb93589\x2df7e0\x2d4cda\x2d8e86\x2ddbcc39d47fbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2k4f.mount: Deactivated successfully.
May 10 00:46:40.911133 systemd[1]: var-lib-kubelet-pods-5eb93589\x2df7e0\x2d4cda\x2d8e86\x2ddbcc39d47fbf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:46:40.913477 kubelet[2579]: I0510 00:46:40.894134 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.914415 kubelet[2579]: I0510 00:46:40.894152 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.914415 kubelet[2579]: I0510 00:46:40.901096 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:46:40.914415 kubelet[2579]: I0510 00:46:40.901188 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-kube-api-access-c2k4f" (OuterVolumeSpecName: "kube-api-access-c2k4f") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "kube-api-access-c2k4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:46:40.914415 kubelet[2579]: I0510 00:46:40.901214 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:46:40.914934 kubelet[2579]: I0510 00:46:40.914599 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:46:40.918156 kubelet[2579]: I0510 00:46:40.918116 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:46:40.919659 systemd[1]: var-lib-kubelet-pods-5eb93589\x2df7e0\x2d4cda\x2d8e86\x2ddbcc39d47fbf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 10 00:46:40.919790 systemd[1]: var-lib-kubelet-pods-5eb93589\x2df7e0\x2d4cda\x2d8e86\x2ddbcc39d47fbf-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 10 00:46:40.923176 kubelet[2579]: I0510 00:46:40.923133 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" (UID: "5eb93589-f7e0-4cda-8e86-dbcc39d47fbf"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987250 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-ipsec-secrets\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987298 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-run\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987308 2579 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cni-path\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987316 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c2k4f\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-kube-api-access-c2k4f\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987325 2579 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-lib-modules\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987335 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-kernel\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987326 kubelet[2579]: I0510 00:46:40.987344 2579 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-bpf-maps\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987351 2579 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hubble-tls\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987359 2579 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-xtables-lock\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987365 2579 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-hostproc\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987374 2579 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-etc-cni-netd\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987381 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-host-proc-sys-net\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987388 2579 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-clustermesh-secrets\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987395 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-cgroup\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:40.987657 kubelet[2579]: I0510 00:46:40.987402 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf-cilium-config-path\") on node \"ip-172-31-16-44\" DevicePath \"\""
May 10 00:46:41.365091 kubelet[2579]: E0510 00:46:41.365027 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:41.803203 kubelet[2579]: I0510 00:46:41.803165 2579 scope.go:117] "RemoveContainer" containerID="901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f"
May 10 00:46:41.805413 env[1739]: time="2025-05-10T00:46:41.805125489Z" level=info msg="RemoveContainer for \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\""
May 10 00:46:41.807426 systemd[1]: Removed slice kubepods-burstable-pod5eb93589_f7e0_4cda_8e86_dbcc39d47fbf.slice.
May 10 00:46:41.810409 env[1739]: time="2025-05-10T00:46:41.810362416Z" level=info msg="RemoveContainer for \"901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f\" returns successfully"
May 10 00:46:41.852488 kubelet[2579]: E0510 00:46:41.852450 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" containerName="mount-cgroup"
May 10 00:46:41.852686 kubelet[2579]: I0510 00:46:41.852523 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" containerName="mount-cgroup"
May 10 00:46:41.859403 systemd[1]: Created slice kubepods-burstable-pod53f6924f_13e2_4259_b19e_2cbf95b22155.slice.
May 10 00:46:41.865084 kubelet[2579]: W0510 00:46:41.865051 2579 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-16-44" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-44' and this object
May 10 00:46:41.865252 kubelet[2579]: E0510 00:46:41.865106 2579 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-16-44\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-44' and this object" logger="UnhandledError"
May 10 00:46:41.892392 kubelet[2579]: I0510 00:46:41.892352 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53f6924f-13e2-4259-b19e-2cbf95b22155-cilium-config-path\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892403 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-hostproc\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892426 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-etc-cni-netd\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892448 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-xtables-lock\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892468 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53f6924f-13e2-4259-b19e-2cbf95b22155-hubble-tls\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892494 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-cilium-run\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892518 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-bpf-maps\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892548 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-cilium-cgroup\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892572 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-lib-modules\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892597 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53f6924f-13e2-4259-b19e-2cbf95b22155-clustermesh-secrets\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892619 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/53f6924f-13e2-4259-b19e-2cbf95b22155-cilium-ipsec-secrets\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892645 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc89t\" (UniqueName: \"kubernetes.io/projected/53f6924f-13e2-4259-b19e-2cbf95b22155-kube-api-access-kc89t\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892672 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-cni-path\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892699 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-host-proc-sys-net\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:41.892846 kubelet[2579]: I0510 00:46:41.892724 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53f6924f-13e2-4259-b19e-2cbf95b22155-host-proc-sys-kernel\") pod \"cilium-btqhm\" (UID: \"53f6924f-13e2-4259-b19e-2cbf95b22155\") " pod="kube-system/cilium-btqhm"
May 10 00:46:42.367759 kubelet[2579]: I0510 00:46:42.367716 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb93589-f7e0-4cda-8e86-dbcc39d47fbf" path="/var/lib/kubelet/pods/5eb93589-f7e0-4cda-8e86-dbcc39d47fbf/volumes"
May 10 00:46:43.063319 env[1739]: time="2025-05-10T00:46:43.063274355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btqhm,Uid:53f6924f-13e2-4259-b19e-2cbf95b22155,Namespace:kube-system,Attempt:0,}"
May 10 00:46:43.087767 env[1739]: time="2025-05-10T00:46:43.087685364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:46:43.087767 env[1739]: time="2025-05-10T00:46:43.087724415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:46:43.087767 env[1739]: time="2025-05-10T00:46:43.087735458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:46:43.088166 env[1739]: time="2025-05-10T00:46:43.088131425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3 pid=4583 runtime=io.containerd.runc.v2
May 10 00:46:43.109425 systemd[1]: run-containerd-runc-k8s.io-82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3-runc.09s8QO.mount: Deactivated successfully.
May 10 00:46:43.114230 systemd[1]: Started cri-containerd-82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3.scope.
May 10 00:46:43.139686 env[1739]: time="2025-05-10T00:46:43.138945568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btqhm,Uid:53f6924f-13e2-4259-b19e-2cbf95b22155,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\""
May 10 00:46:43.141904 env[1739]: time="2025-05-10T00:46:43.141340159Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:46:43.161456 env[1739]: time="2025-05-10T00:46:43.161398380Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af\""
May 10 00:46:43.162350 env[1739]: time="2025-05-10T00:46:43.162307254Z" level=info msg="StartContainer for \"7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af\""
May 10 00:46:43.199651 systemd[1]: Started cri-containerd-7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af.scope.
May 10 00:46:43.245824 env[1739]: time="2025-05-10T00:46:43.245769162Z" level=info msg="StartContainer for \"7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af\" returns successfully"
May 10 00:46:43.366062 kubelet[2579]: E0510 00:46:43.365898 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:43.378996 kubelet[2579]: W0510 00:46:43.378955 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eb93589_f7e0_4cda_8e86_dbcc39d47fbf.slice/cri-containerd-901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f.scope WatchSource:0}: container "901cd7594e6b37e7a896c880b4cbadb6c392219123613af27081470fc3ff613f" in namespace "k8s.io": not found
May 10 00:46:43.385830 systemd[1]: cri-containerd-7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af.scope: Deactivated successfully.
May 10 00:46:43.450782 env[1739]: time="2025-05-10T00:46:43.450715600Z" level=info msg="shim disconnected" id=7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af
May 10 00:46:43.450782 env[1739]: time="2025-05-10T00:46:43.450766762Z" level=warning msg="cleaning up after shim disconnected" id=7bf4d6d819f8fe4e53f6e48a5b4e15ecfc9f9b38868a6f7a13c3534744faa6af namespace=k8s.io
May 10 00:46:43.450782 env[1739]: time="2025-05-10T00:46:43.450776936Z" level=info msg="cleaning up dead shim"
May 10 00:46:43.459871 env[1739]: time="2025-05-10T00:46:43.459715057Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4672 runtime=io.containerd.runc.v2\n"
May 10 00:46:43.494488 kubelet[2579]: E0510 00:46:43.494109 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:46:43.813733 env[1739]: time="2025-05-10T00:46:43.813682344Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:46:43.843843 env[1739]: time="2025-05-10T00:46:43.843765082Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf\""
May 10 00:46:43.847070 env[1739]: time="2025-05-10T00:46:43.844757509Z" level=info msg="StartContainer for \"673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf\""
May 10 00:46:43.864234 systemd[1]: Started cri-containerd-673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf.scope.
May 10 00:46:43.899462 env[1739]: time="2025-05-10T00:46:43.899373852Z" level=info msg="StartContainer for \"673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf\" returns successfully"
May 10 00:46:43.921261 systemd[1]: cri-containerd-673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf.scope: Deactivated successfully.
May 10 00:46:43.958807 env[1739]: time="2025-05-10T00:46:43.958757595Z" level=info msg="shim disconnected" id=673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf
May 10 00:46:43.958807 env[1739]: time="2025-05-10T00:46:43.958805344Z" level=warning msg="cleaning up after shim disconnected" id=673ad2838e4fa05e121b98fee08e6d54b7d258a567ce7cba860638ef3e8043cf namespace=k8s.io
May 10 00:46:43.958807 env[1739]: time="2025-05-10T00:46:43.958814979Z" level=info msg="cleaning up dead shim"
May 10 00:46:43.968029 env[1739]: time="2025-05-10T00:46:43.967964423Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4732 runtime=io.containerd.runc.v2\n"
May 10 00:46:44.816062 env[1739]: time="2025-05-10T00:46:44.816007010Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:46:44.859453 env[1739]: time="2025-05-10T00:46:44.859244352Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3\""
May 10 00:46:44.861785 env[1739]: time="2025-05-10T00:46:44.860094848Z" level=info msg="StartContainer for \"1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3\""
May 10 00:46:44.892984 systemd[1]: Started cri-containerd-1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3.scope.
May 10 00:46:44.932487 env[1739]: time="2025-05-10T00:46:44.932442719Z" level=info msg="StartContainer for \"1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3\" returns successfully"
May 10 00:46:44.945116 systemd[1]: cri-containerd-1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3.scope: Deactivated successfully.
May 10 00:46:44.981372 env[1739]: time="2025-05-10T00:46:44.981308318Z" level=info msg="shim disconnected" id=1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3
May 10 00:46:44.981372 env[1739]: time="2025-05-10T00:46:44.981370515Z" level=warning msg="cleaning up after shim disconnected" id=1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3 namespace=k8s.io
May 10 00:46:44.981774 env[1739]: time="2025-05-10T00:46:44.981382617Z" level=info msg="cleaning up dead shim"
May 10 00:46:44.991533 env[1739]: time="2025-05-10T00:46:44.991488059Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4788 runtime=io.containerd.runc.v2\n"
May 10 00:46:45.079859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1728af97148b7e3196445a8f86c58467ca1c7253d5d07fd96e0660bd7bec32e3-rootfs.mount: Deactivated successfully.
May 10 00:46:45.365631 kubelet[2579]: E0510 00:46:45.365163 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:45.821950 env[1739]: time="2025-05-10T00:46:45.821906223Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:46:45.841559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025728831.mount: Deactivated successfully.
May 10 00:46:45.853172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441905249.mount: Deactivated successfully.
May 10 00:46:45.862091 env[1739]: time="2025-05-10T00:46:45.860743808Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8\""
May 10 00:46:45.862091 env[1739]: time="2025-05-10T00:46:45.861532675Z" level=info msg="StartContainer for \"de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8\""
May 10 00:46:45.883588 systemd[1]: Started cri-containerd-de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8.scope.
May 10 00:46:45.920849 systemd[1]: cri-containerd-de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8.scope: Deactivated successfully.
May 10 00:46:45.924546 env[1739]: time="2025-05-10T00:46:45.924447850Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f6924f_13e2_4259_b19e_2cbf95b22155.slice/cri-containerd-de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8.scope/memory.events\": no such file or directory"
May 10 00:46:45.928989 env[1739]: time="2025-05-10T00:46:45.928917321Z" level=info msg="StartContainer for \"de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8\" returns successfully"
May 10 00:46:45.961749 env[1739]: time="2025-05-10T00:46:45.961689576Z" level=info msg="shim disconnected" id=de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8
May 10 00:46:45.961749 env[1739]: time="2025-05-10T00:46:45.961734784Z" level=warning msg="cleaning up after shim disconnected" id=de1197438f2e48f46d8dd34a8f7e5611023d3491ed81cb6ed92a7aa886c4c0a8 namespace=k8s.io
May 10 00:46:45.961749 env[1739]: time="2025-05-10T00:46:45.961745982Z" level=info msg="cleaning up dead shim"
May 10 00:46:45.970793 env[1739]: time="2025-05-10T00:46:45.970737032Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4845 runtime=io.containerd.runc.v2\n"
May 10 00:46:46.827835 env[1739]: time="2025-05-10T00:46:46.826304630Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:46:46.853774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683513953.mount: Deactivated successfully.
May 10 00:46:46.864796 env[1739]: time="2025-05-10T00:46:46.864721631Z" level=info msg="CreateContainer within sandbox \"82d96e469910acba623828d93099e38fc23694a08731a6fdc9cff6bef1ac8cb3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea\""
May 10 00:46:46.866925 env[1739]: time="2025-05-10T00:46:46.866875454Z" level=info msg="StartContainer for \"d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea\""
May 10 00:46:46.896825 systemd[1]: Started cri-containerd-d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea.scope.
May 10 00:46:46.949771 env[1739]: time="2025-05-10T00:46:46.949699428Z" level=info msg="StartContainer for \"d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea\" returns successfully"
May 10 00:46:47.364936 kubelet[2579]: E0510 00:46:47.364877 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:48.411507 env[1739]: time="2025-05-10T00:46:48.411466777Z" level=info msg="StopPodSandbox for \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\""
May 10 00:46:48.411873 env[1739]: time="2025-05-10T00:46:48.411554933Z" level=info msg="TearDown network for sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" successfully"
May 10 00:46:48.411873 env[1739]: time="2025-05-10T00:46:48.411585543Z" level=info msg="StopPodSandbox for \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" returns successfully"
May 10 00:46:48.411943 env[1739]: time="2025-05-10T00:46:48.411883466Z" level=info msg="RemovePodSandbox for \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\""
May 10 00:46:48.411943 env[1739]: time="2025-05-10T00:46:48.411904952Z" level=info msg="Forcibly stopping sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\""
May 10 00:46:48.411999 env[1739]: time="2025-05-10T00:46:48.411969691Z" level=info msg="TearDown network for sandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" successfully"
May 10 00:46:48.420303 env[1739]: time="2025-05-10T00:46:48.420158204Z" level=info msg="RemovePodSandbox \"e47f2b278cc5e42beb05e2148b95a815bf2a41be9055115fdfa288882b80fb6f\" returns successfully"
May 10 00:46:48.420729 env[1739]: time="2025-05-10T00:46:48.420698273Z" level=info msg="StopPodSandbox for \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\""
May 10 00:46:48.420849 env[1739]: time="2025-05-10T00:46:48.420792359Z" level=info msg="TearDown network for sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" successfully"
May 10 00:46:48.420849 env[1739]: time="2025-05-10T00:46:48.420839230Z" level=info msg="StopPodSandbox for \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" returns successfully"
May 10 00:46:48.421252 env[1739]: time="2025-05-10T00:46:48.421223291Z" level=info msg="RemovePodSandbox for \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\""
May 10 00:46:48.421351 env[1739]: time="2025-05-10T00:46:48.421256696Z" level=info msg="Forcibly stopping sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\""
May 10 00:46:48.421404 env[1739]: time="2025-05-10T00:46:48.421349204Z" level=info msg="TearDown network for sandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" successfully"
May 10 00:46:48.426764 env[1739]: time="2025-05-10T00:46:48.426722279Z" level=info msg="RemovePodSandbox \"fbf37d235b90835dadf5a9108860c25bef1324fde385ca97c41637d6c52d5d02\" returns successfully"
May 10 00:46:48.427315 env[1739]: time="2025-05-10T00:46:48.427276809Z" level=info msg="StopPodSandbox for \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\""
May 10 00:46:48.427430 env[1739]: time="2025-05-10T00:46:48.427380106Z" level=info msg="TearDown network for sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" successfully"
May 10 00:46:48.427497 env[1739]: time="2025-05-10T00:46:48.427425910Z" level=info msg="StopPodSandbox for \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" returns successfully"
May 10 00:46:48.427873 env[1739]: time="2025-05-10T00:46:48.427849911Z" level=info msg="RemovePodSandbox for \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\""
May 10 00:46:48.427959 env[1739]: time="2025-05-10T00:46:48.427883425Z" level=info msg="Forcibly stopping sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\""
May 10 00:46:48.428010 env[1739]: time="2025-05-10T00:46:48.427978673Z" level=info msg="TearDown network for sandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" successfully"
May 10 00:46:48.433084 env[1739]: time="2025-05-10T00:46:48.433012461Z" level=info msg="RemovePodSandbox \"9471e93018d144ea9490a676673e5f453236ae6b6400c3bf3b9f952012de2ed0\" returns successfully"
May 10 00:46:48.500481 kubelet[2579]: E0510 00:46:48.500439 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:46:49.365665 kubelet[2579]: E0510 00:46:49.365606 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:51.253788 systemd[1]: run-containerd-runc-k8s.io-d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea-runc.nINQ7Z.mount: Deactivated successfully.
May 10 00:46:51.365315 kubelet[2579]: E0510 00:46:51.365254 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:51.913069 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:46:53.365069 kubelet[2579]: E0510 00:46:53.364986 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-kzxvk" podUID="16550f9a-b6e8-4784-b38d-91ae00c30ae3"
May 10 00:46:55.348484 systemd-networkd[1465]: lxc_health: Link UP
May 10 00:46:55.356773 (udev-worker)[5435]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:46:55.359600 systemd-networkd[1465]: lxc_health: Gained carrier
May 10 00:46:55.360068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:46:56.688909 systemd-networkd[1465]: lxc_health: Gained IPv6LL
May 10 00:46:57.091463 kubelet[2579]: I0510 00:46:57.091391 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-btqhm" podStartSLOduration=16.091368939 podStartE2EDuration="16.091368939s" podCreationTimestamp="2025-05-10 00:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:49.85215353 +0000 UTC m=+121.674970088" watchObservedRunningTime="2025-05-10 00:46:57.091368939 +0000 UTC m=+128.914185493"
May 10 00:46:58.133768 systemd[1]: run-containerd-runc-k8s.io-d376de6359f5b9cb2b0d5c350af7b2937a90d569323394d5555e6620fe27c9ea-runc.7NvRCk.mount: Deactivated successfully.
May 10 00:47:00.449502 sshd[4527]: pam_unix(sshd:session): session closed for user core
May 10 00:47:00.453393 systemd[1]: sshd@25-172.31.16.44:22-139.178.89.65:46026.service: Deactivated successfully.
May 10 00:47:00.454417 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:47:00.454888 systemd-logind[1730]: Session 26 logged out. Waiting for processes to exit.
May 10 00:47:00.456713 systemd-logind[1730]: Removed session 26.
May 10 00:47:14.477556 systemd[1]: cri-containerd-e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f.scope: Deactivated successfully.
May 10 00:47:14.477811 systemd[1]: cri-containerd-e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f.scope: Consumed 3.011s CPU time.
May 10 00:47:14.502521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f-rootfs.mount: Deactivated successfully.
May 10 00:47:14.533380 env[1739]: time="2025-05-10T00:47:14.533316879Z" level=info msg="shim disconnected" id=e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f
May 10 00:47:14.533380 env[1739]: time="2025-05-10T00:47:14.533377022Z" level=warning msg="cleaning up after shim disconnected" id=e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f namespace=k8s.io
May 10 00:47:14.533380 env[1739]: time="2025-05-10T00:47:14.533389234Z" level=info msg="cleaning up dead shim"
May 10 00:47:14.543218 env[1739]: time="2025-05-10T00:47:14.543169971Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5551 runtime=io.containerd.runc.v2\n"
May 10 00:47:14.890641 kubelet[2579]: I0510 00:47:14.890608 2579 scope.go:117] "RemoveContainer" containerID="e490b70eb41e9c515fc8226d7c449a4b7a1deb609aab0c9dd29849dc72d81c3f"
May 10 00:47:14.893236 env[1739]: time="2025-05-10T00:47:14.893186387Z" level=info msg="CreateContainer within sandbox \"a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:47:14.921753 env[1739]: time="2025-05-10T00:47:14.921651886Z" level=info msg="CreateContainer within sandbox \"a76accebcd4ea1db0b4977318cfb25ba847c29706ed259bee516e5db3b436c40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"53f41fcb4f990169248d3c15ce8693b22f00560f5e4db9fdc2b76a28661f486f\""
May 10 00:47:14.922410 env[1739]: time="2025-05-10T00:47:14.922233168Z" level=info msg="StartContainer for \"53f41fcb4f990169248d3c15ce8693b22f00560f5e4db9fdc2b76a28661f486f\""
May 10 00:47:14.951764 systemd[1]: Started cri-containerd-53f41fcb4f990169248d3c15ce8693b22f00560f5e4db9fdc2b76a28661f486f.scope.
May 10 00:47:15.005501 env[1739]: time="2025-05-10T00:47:15.005393812Z" level=info msg="StartContainer for \"53f41fcb4f990169248d3c15ce8693b22f00560f5e4db9fdc2b76a28661f486f\" returns successfully"
May 10 00:47:19.621968 systemd[1]: cri-containerd-77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb.scope: Deactivated successfully.
May 10 00:47:19.622406 systemd[1]: cri-containerd-77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb.scope: Consumed 1.428s CPU time.
May 10 00:47:19.647319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb-rootfs.mount: Deactivated successfully.
May 10 00:47:19.673489 env[1739]: time="2025-05-10T00:47:19.673442553Z" level=info msg="shim disconnected" id=77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb
May 10 00:47:19.673489 env[1739]: time="2025-05-10T00:47:19.673489347Z" level=warning msg="cleaning up after shim disconnected" id=77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb namespace=k8s.io
May 10 00:47:19.674028 env[1739]: time="2025-05-10T00:47:19.673499067Z" level=info msg="cleaning up dead shim"
May 10 00:47:19.681839 env[1739]: time="2025-05-10T00:47:19.681793278Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5612 runtime=io.containerd.runc.v2\n"
May 10 00:47:19.905214 kubelet[2579]: I0510 00:47:19.904874 2579 scope.go:117] "RemoveContainer" containerID="77ddea4a3cd3863de286bacd8764e16730203049210bc3a9c2dde8f551058cdb"
May 10 00:47:19.907986 env[1739]: time="2025-05-10T00:47:19.907950509Z" level=info msg="CreateContainer within sandbox \"a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:47:19.931138 env[1739]: time="2025-05-10T00:47:19.931084388Z" level=info msg="CreateContainer within sandbox \"a0961314fc631f49f6120242819087bbec3a2102bf7c918c5e7e27083eba125f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fe344f2c4952e70babd2fdfc53372cf284b6fd27a53c0f9fd462abf970ca524f\""
May 10 00:47:19.931633 env[1739]: time="2025-05-10T00:47:19.931600649Z" level=info msg="StartContainer for \"fe344f2c4952e70babd2fdfc53372cf284b6fd27a53c0f9fd462abf970ca524f\""
May 10 00:47:19.963749 systemd[1]: Started cri-containerd-fe344f2c4952e70babd2fdfc53372cf284b6fd27a53c0f9fd462abf970ca524f.scope.
May 10 00:47:20.015724 env[1739]: time="2025-05-10T00:47:20.015678412Z" level=info msg="StartContainer for \"fe344f2c4952e70babd2fdfc53372cf284b6fd27a53c0f9fd462abf970ca524f\" returns successfully"
May 10 00:47:20.647059 systemd[1]: run-containerd-runc-k8s.io-fe344f2c4952e70babd2fdfc53372cf284b6fd27a53c0f9fd462abf970ca524f-runc.aHotCD.mount: Deactivated successfully.
May 10 00:47:21.215864 kubelet[2579]: E0510 00:47:21.215799 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 10 00:47:31.217331 kubelet[2579]: E0510 00:47:31.217261 2579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-44?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"