May 17 00:42:28.988606 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:42:28.988638 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:42:28.988657 kernel: BIOS-provided physical RAM map: May 17 00:42:28.988668 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:42:28.988679 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 17 00:42:28.988690 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 17 00:42:28.988704 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 17 00:42:28.988716 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 17 00:42:28.988730 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 17 00:42:28.988741 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 17 00:42:28.988753 kernel: NX (Execute Disable) protection: active May 17 00:42:28.988765 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable May 17 00:42:28.988777 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable May 17 00:42:28.988789 kernel: extended physical RAM map: May 17 00:42:28.988807 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:42:28.988819 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable May 17 00:42:28.988831 kernel: reserve setup_data: [mem 
0x0000000076813018-0x000000007681be57] usable May 17 00:42:28.988844 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable May 17 00:42:28.988857 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 17 00:42:28.988870 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 17 00:42:28.988882 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 17 00:42:28.988895 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 17 00:42:28.988908 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 17 00:42:28.988921 kernel: efi: EFI v2.70 by EDK II May 17 00:42:28.988936 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 May 17 00:42:28.988948 kernel: SMBIOS 2.7 present. May 17 00:42:28.988960 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 17 00:42:28.988972 kernel: Hypervisor detected: KVM May 17 00:42:28.988985 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:42:28.988998 kernel: kvm-clock: cpu 0, msr 1719a001, primary cpu clock May 17 00:42:28.989011 kernel: kvm-clock: using sched offset of 4099204307 cycles May 17 00:42:28.989025 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:42:28.989038 kernel: tsc: Detected 2499.994 MHz processor May 17 00:42:28.989051 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:42:28.989064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:42:28.989080 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 17 00:42:28.989093 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:42:28.989106 kernel: Using GB pages for direct mapping May 17 00:42:28.989119 kernel: Secure boot disabled May 17 00:42:28.989132 kernel: ACPI: Early table checksum verification disabled May 17 
00:42:28.989151 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 17 00:42:28.989165 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:42:28.989181 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:42:28.989196 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 17 00:42:28.989209 kernel: ACPI: FACS 0x00000000789D0000 000040 May 17 00:42:28.989223 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 17 00:42:28.989237 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:42:28.989252 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:42:28.989266 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 17 00:42:28.989282 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 17 00:42:28.989296 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:42:28.989310 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:42:28.989324 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 17 00:42:28.989338 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 17 00:42:28.989352 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 17 00:42:28.989366 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 17 00:42:28.989380 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 17 00:42:28.989394 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 17 00:42:28.989410 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 17 00:42:28.989424 kernel: ACPI: Reserving 
SRAT table memory at [mem 0x78958000-0x7895809f] May 17 00:42:28.989438 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 17 00:42:28.989452 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 17 00:42:28.989466 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 17 00:42:28.989479 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 17 00:42:28.990483 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:42:28.990522 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:42:28.990537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 17 00:42:28.990556 kernel: NUMA: Initialized distance table, cnt=1 May 17 00:42:28.990570 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 17 00:42:28.990585 kernel: Zone ranges: May 17 00:42:28.990599 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:42:28.990613 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 17 00:42:28.990627 kernel: Normal empty May 17 00:42:28.990641 kernel: Movable zone start for each node May 17 00:42:28.990655 kernel: Early memory node ranges May 17 00:42:28.990669 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:42:28.990686 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 17 00:42:28.990700 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 17 00:42:28.990714 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 17 00:42:28.990728 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:42:28.990742 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:42:28.990756 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 17 00:42:28.990771 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 17 00:42:28.990785 kernel: ACPI: PM-Timer IO Port: 0xb008 May 17 00:42:28.990799 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) 
May 17 00:42:28.990815 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 17 00:42:28.990829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:42:28.990843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:42:28.990857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:42:28.990870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:42:28.990884 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:42:28.990899 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:42:28.990913 kernel: TSC deadline timer available May 17 00:42:28.990927 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:42:28.990944 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 17 00:42:28.990958 kernel: Booting paravirtualized kernel on KVM May 17 00:42:28.990973 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:42:28.990988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 17 00:42:28.991002 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 17 00:42:28.991016 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 17 00:42:28.991031 kernel: pcpu-alloc: [0] 0 1 May 17 00:42:28.991044 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 May 17 00:42:28.991058 kernel: kvm-guest: PV spinlocks enabled May 17 00:42:28.991076 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:42:28.991089 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 May 17 00:42:28.991104 kernel: Policy zone: DMA32 May 17 00:42:28.991120 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:42:28.991135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:42:28.991150 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:42:28.991164 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:42:28.991179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:42:28.991196 kernel: Memory: 1876640K/2037804K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 160904K reserved, 0K cma-reserved) May 17 00:42:28.991210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:42:28.991224 kernel: Kernel/User page tables isolation: enabled May 17 00:42:28.991238 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:42:28.991253 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:42:28.991267 kernel: rcu: Hierarchical RCU implementation. May 17 00:42:28.991283 kernel: rcu: RCU event tracing is enabled. May 17 00:42:28.991309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:42:28.991324 kernel: Rude variant of Tasks RCU enabled. May 17 00:42:28.991339 kernel: Tracing variant of Tasks RCU enabled. May 17 00:42:28.991354 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:42:28.991369 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:42:28.991387 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:42:28.991402 kernel: random: crng init done May 17 00:42:28.991416 kernel: Console: colour dummy device 80x25 May 17 00:42:28.991431 kernel: printk: console [tty0] enabled May 17 00:42:28.991446 kernel: printk: console [ttyS0] enabled May 17 00:42:28.991461 kernel: ACPI: Core revision 20210730 May 17 00:42:28.991477 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 17 00:42:28.991506 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:42:28.991521 kernel: x2apic enabled May 17 00:42:28.991536 kernel: Switched APIC routing to physical x2apic. May 17 00:42:28.991551 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns May 17 00:42:28.991566 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) May 17 00:42:28.991581 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:42:28.991597 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:42:28.991615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:42:28.991629 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:42:28.991644 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:42:28.991658 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 17 00:42:28.991673 kernel: RETBleed: Vulnerable May 17 00:42:28.991688 kernel: Speculative Store Bypass: Vulnerable May 17 00:42:28.991702 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:42:28.991718 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:42:28.991732 kernel: GDS: Unknown: Dependent on hypervisor status May 17 00:42:28.991747 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:42:28.991761 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:42:28.991779 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:42:28.991793 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:42:28.991808 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:42:28.991822 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:42:28.991837 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:42:28.991853 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:42:28.991867 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:42:28.991882 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:42:28.991897 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:42:28.991911 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:42:28.991926 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 17 00:42:28.991943 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 17 00:42:28.991958 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 17 00:42:28.991972 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 17 00:42:28.991987 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
May 17 00:42:28.992001 kernel: Freeing SMP alternatives memory: 32K May 17 00:42:28.992016 kernel: pid_max: default: 32768 minimum: 301 May 17 00:42:28.992030 kernel: LSM: Security Framework initializing May 17 00:42:28.992045 kernel: SELinux: Initializing. May 17 00:42:28.992059 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:42:28.992074 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:42:28.992089 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:42:28.992107 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:42:28.992122 kernel: signal: max sigframe size: 3632 May 17 00:42:28.992136 kernel: rcu: Hierarchical SRCU implementation. May 17 00:42:28.992151 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:42:28.992166 kernel: smp: Bringing up secondary CPUs ... May 17 00:42:28.992181 kernel: x86: Booting SMP configuration: May 17 00:42:28.992196 kernel: .... node #0, CPUs: #1 May 17 00:42:28.992211 kernel: kvm-clock: cpu 1, msr 1719a041, secondary cpu clock May 17 00:42:28.992226 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 May 17 00:42:28.992244 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 17 00:42:28.992260 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 17 00:42:28.992274 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:42:28.992289 kernel: smpboot: Max logical packages: 1 May 17 00:42:28.992304 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) May 17 00:42:28.992319 kernel: devtmpfs: initialized May 17 00:42:28.992334 kernel: x86/mm: Memory block size: 128MB May 17 00:42:28.992349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 17 00:42:28.992364 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:42:28.992382 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:42:28.992397 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:42:28.992412 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:42:28.992427 kernel: audit: initializing netlink subsys (disabled) May 17 00:42:28.992442 kernel: audit: type=2000 audit(1747442548.905:1): state=initialized audit_enabled=0 res=1 May 17 00:42:28.992457 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:42:28.992471 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:42:28.992486 kernel: cpuidle: using governor menu May 17 00:42:28.999860 kernel: ACPI: bus type PCI registered May 17 00:42:28.999901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:42:28.999917 kernel: dca service started, version 1.12.1 May 17 00:42:28.999932 kernel: PCI: Using configuration type 1 for base access May 17 00:42:28.999947 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:42:28.999962 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:42:28.999977 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:42:28.999991 kernel: ACPI: Added _OSI(Module Device) May 17 00:42:29.000006 kernel: ACPI: Added _OSI(Processor Device) May 17 00:42:29.000021 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:42:29.000043 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:42:29.000059 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:42:29.000073 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:42:29.000088 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:42:29.000103 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:42:29.000117 kernel: ACPI: Interpreter enabled May 17 00:42:29.000132 kernel: ACPI: PM: (supports S0 S5) May 17 00:42:29.000145 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:42:29.000160 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:42:29.000183 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:42:29.000198 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:42:29.000417 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:42:29.000589 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 17 00:42:29.000611 kernel: acpiphp: Slot [3] registered May 17 00:42:29.000626 kernel: acpiphp: Slot [4] registered May 17 00:42:29.000640 kernel: acpiphp: Slot [5] registered May 17 00:42:29.000658 kernel: acpiphp: Slot [6] registered May 17 00:42:29.000677 kernel: acpiphp: Slot [7] registered May 17 00:42:29.000693 kernel: acpiphp: Slot [8] registered May 17 00:42:29.000707 kernel: acpiphp: Slot [9] registered May 17 00:42:29.000727 kernel: acpiphp: Slot [10] registered May 17 00:42:29.000742 kernel: acpiphp: Slot [11] registered May 17 00:42:29.000755 kernel: acpiphp: Slot [12] registered May 17 00:42:29.000770 kernel: acpiphp: Slot [13] registered May 17 00:42:29.000789 kernel: acpiphp: Slot [14] registered May 17 00:42:29.000805 kernel: acpiphp: Slot [15] registered May 17 00:42:29.000822 kernel: acpiphp: Slot [16] registered May 17 00:42:29.000836 kernel: acpiphp: Slot [17] registered May 17 00:42:29.000856 kernel: acpiphp: Slot [18] registered May 17 00:42:29.000870 kernel: acpiphp: Slot [19] registered May 17 00:42:29.000885 kernel: acpiphp: Slot [20] registered May 17 00:42:29.000905 kernel: acpiphp: Slot [21] registered May 17 00:42:29.000920 kernel: acpiphp: Slot [22] registered May 17 00:42:29.000935 kernel: acpiphp: Slot [23] registered May 17 00:42:29.000954 kernel: acpiphp: Slot [24] registered May 17 00:42:29.000972 kernel: acpiphp: Slot [25] registered May 17 00:42:29.000990 kernel: acpiphp: Slot [26] registered May 17 00:42:29.001006 kernel: acpiphp: Slot [27] registered May 17 00:42:29.001021 kernel: acpiphp: Slot [28] registered May 17 00:42:29.001040 kernel: acpiphp: Slot [29] registered May 17 00:42:29.001056 kernel: acpiphp: Slot [30] registered May 17 00:42:29.001070 kernel: acpiphp: Slot [31] registered May 17 00:42:29.001085 kernel: PCI host bridge to bus 0000:00 May 17 00:42:29.001228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:42:29.001363 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] May 17 00:42:29.001486 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:42:29.001667 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:42:29.001780 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 17 00:42:29.001888 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:42:29.002025 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:42:29.002155 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:42:29.002288 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 17 00:42:29.002410 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:42:29.015567 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 17 00:42:29.015749 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 17 00:42:29.015879 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 17 00:42:29.016005 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 17 00:42:29.016137 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 17 00:42:29.016262 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 17 00:42:29.016394 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 17 00:42:29.018736 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 17 00:42:29.018909 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 17 00:42:29.019044 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 17 00:42:29.019178 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:42:29.019325 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:42:29.019462 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 17 00:42:29.019619 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:42:29.019739 kernel: pci 
0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 17 00:42:29.019756 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:42:29.019770 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:42:29.019785 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:42:29.019803 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:42:29.019816 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:42:29.019831 kernel: iommu: Default domain type: Translated May 17 00:42:29.019844 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:42:29.019957 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 17 00:42:29.020072 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:42:29.020185 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 17 00:42:29.020202 kernel: vgaarb: loaded May 17 00:42:29.020215 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:42:29.020232 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:42:29.020246 kernel: PTP clock support registered May 17 00:42:29.020259 kernel: Registered efivars operations May 17 00:42:29.020273 kernel: PCI: Using ACPI for IRQ routing May 17 00:42:29.020287 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:42:29.020300 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] May 17 00:42:29.020312 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 17 00:42:29.020326 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 17 00:42:29.020340 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 17 00:42:29.020356 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 17 00:42:29.020373 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:42:29.020387 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:42:29.020402 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:42:29.020417 kernel: pnp: PnP ACPI init May 17 00:42:29.020431 kernel: pnp: PnP ACPI: found 5 devices May 17 00:42:29.020444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:42:29.020457 kernel: NET: Registered PF_INET protocol family May 17 00:42:29.020470 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:42:29.020486 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:42:29.021560 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:42:29.021578 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:42:29.021593 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:42:29.021617 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:42:29.021632 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:42:29.021647 kernel: UDP-Lite 
hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:42:29.021661 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:42:29.021682 kernel: NET: Registered PF_XDP protocol family May 17 00:42:29.021841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:42:29.021950 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:42:29.022056 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:42:29.022161 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:42:29.022283 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 17 00:42:29.022801 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:42:29.022972 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 17 00:42:29.022998 kernel: PCI: CLS 0 bytes, default 64 May 17 00:42:29.023014 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:42:29.023030 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns May 17 00:42:29.023045 kernel: clocksource: Switched to clocksource tsc May 17 00:42:29.023061 kernel: Initialise system trusted keyrings May 17 00:42:29.023075 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:42:29.023091 kernel: Key type asymmetric registered May 17 00:42:29.023105 kernel: Asymmetric key parser 'x509' registered May 17 00:42:29.023120 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:42:29.023138 kernel: io scheduler mq-deadline registered May 17 00:42:29.023153 kernel: io scheduler kyber registered May 17 00:42:29.023168 kernel: io scheduler bfq registered May 17 00:42:29.023183 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:42:29.023198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:42:29.023213 kernel: 00:04: ttyS0 at I/O 
0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:42:29.023228 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:42:29.023243 kernel: i8042: Warning: Keylock active May 17 00:42:29.023258 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:42:29.023276 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:42:29.023423 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:42:29.023570 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:42:29.023692 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:42:28 UTC (1747442548) May 17 00:42:29.023811 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:42:29.023829 kernel: intel_pstate: CPU model not supported May 17 00:42:29.023844 kernel: efifb: probing for efifb May 17 00:42:29.023860 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 17 00:42:29.023879 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:42:29.023894 kernel: efifb: scrolling: redraw May 17 00:42:29.023909 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:42:29.023924 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:42:29.023939 kernel: fb0: EFI VGA frame buffer device May 17 00:42:29.023955 kernel: pstore: Registered efi as persistent store backend May 17 00:42:29.023993 kernel: NET: Registered PF_INET6 protocol family May 17 00:42:29.024010 kernel: Segment Routing with IPv6 May 17 00:42:29.024026 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:42:29.024044 kernel: NET: Registered PF_PACKET protocol family May 17 00:42:29.024060 kernel: Key type dns_resolver registered May 17 00:42:29.024075 kernel: IPI shorthand broadcast: enabled May 17 00:42:29.024092 kernel: sched_clock: Marking stable (337052577, 137268772)->(600166869, -125845520) May 17 00:42:29.024108 kernel: registered taskstats version 1 May 17 00:42:29.024123 kernel: Loading compiled-in X.509 certificates May 17 
00:42:29.024139 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:42:29.024157 kernel: Key type .fscrypt registered May 17 00:42:29.024172 kernel: Key type fscrypt-provisioning registered May 17 00:42:29.024190 kernel: pstore: Using crash dump compression: deflate May 17 00:42:29.024206 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:42:29.024222 kernel: ima: Allocated hash algorithm: sha1 May 17 00:42:29.024238 kernel: ima: No architecture policies found May 17 00:42:29.024253 kernel: clk: Disabling unused clocks May 17 00:42:29.024269 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:42:29.024285 kernel: Write protecting the kernel read-only data: 28672k May 17 00:42:29.024300 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:42:29.024316 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:42:29.024334 kernel: Run /init as init process May 17 00:42:29.024350 kernel: with arguments: May 17 00:42:29.024365 kernel: /init May 17 00:42:29.024381 kernel: with environment: May 17 00:42:29.024396 kernel: HOME=/ May 17 00:42:29.024412 kernel: TERM=linux May 17 00:42:29.024427 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:42:29.024447 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:42:29.024468 systemd[1]: Detected virtualization amazon. May 17 00:42:29.024485 systemd[1]: Detected architecture x86-64. May 17 00:42:29.024514 systemd[1]: Running in initrd. May 17 00:42:29.024530 systemd[1]: No hostname configured, using default hostname. May 17 00:42:29.024546 systemd[1]: Hostname set to . 
May 17 00:42:29.024563 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:42:29.024579 systemd[1]: Queued start job for default target initrd.target.
May 17 00:42:29.024598 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:42:29.024617 systemd[1]: Reached target cryptsetup.target.
May 17 00:42:29.024632 systemd[1]: Reached target paths.target.
May 17 00:42:29.024648 systemd[1]: Reached target slices.target.
May 17 00:42:29.024665 systemd[1]: Reached target swap.target.
May 17 00:42:29.024681 systemd[1]: Reached target timers.target.
May 17 00:42:29.024701 systemd[1]: Listening on iscsid.socket.
May 17 00:42:29.024717 systemd[1]: Listening on iscsiuio.socket.
May 17 00:42:29.024734 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:42:29.024750 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:42:29.024767 systemd[1]: Listening on systemd-journald.socket.
May 17 00:42:29.024784 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:42:29.024800 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:42:29.024817 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:42:29.024836 systemd[1]: Reached target sockets.target.
May 17 00:42:29.024852 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:42:29.024868 systemd[1]: Finished network-cleanup.service.
May 17 00:42:29.024885 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:42:29.024901 systemd[1]: Starting systemd-journald.service...
May 17 00:42:29.024918 systemd[1]: Starting systemd-modules-load.service...
May 17 00:42:29.024934 systemd[1]: Starting systemd-resolved.service...
May 17 00:42:29.024951 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:42:29.024967 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:42:29.024986 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:42:29.025003 kernel: audit: type=1130 audit(1747442549.009:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.025020 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:42:29.025037 kernel: audit: type=1130 audit(1747442549.021:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.025060 systemd-journald[185]: Journal started
May 17 00:42:29.025146 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2ac0ffbca18eba9be83b116bcb25dd) is 4.8M, max 38.3M, 33.5M free.
May 17 00:42:29.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.014535 systemd-resolved[187]: Positive Trust Anchors:
May 17 00:42:29.040174 systemd[1]: Started systemd-resolved.service.
May 17 00:42:29.040216 systemd[1]: Started systemd-journald.service.
May 17 00:42:29.040236 kernel: audit: type=1130 audit(1747442549.031:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.014546 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:42:29.014594 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:42:29.061807 kernel: audit: type=1130 audit(1747442549.048:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.061845 kernel: audit: type=1130 audit(1747442549.054:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.018541 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 17 00:42:29.028471 systemd-modules-load[186]: Inserted module 'overlay'
May 17 00:42:29.049230 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:42:29.055678 systemd[1]: Reached target nss-lookup.target.
May 17 00:42:29.063952 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:42:29.071231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:42:29.089106 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:42:29.098665 kernel: audit: type=1130 audit(1747442549.089:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.091481 systemd[1]: Starting dracut-cmdline.service...
May 17 00:42:29.104535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:42:29.106595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:42:29.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.117383 kernel: audit: type=1130 audit(1747442549.109:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.117441 kernel: Bridge firewalling registered
May 17 00:42:29.118289 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 17 00:42:29.123446 dracut-cmdline[201]: dracut-dracut-053
May 17 00:42:29.127971 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:42:29.149517 kernel: SCSI subsystem initialized
May 17 00:42:29.170136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:42:29.170206 kernel: device-mapper: uevent: version 1.0.3
May 17 00:42:29.172636 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:42:29.177167 systemd-modules-load[186]: Inserted module 'dm_multipath'
May 17 00:42:29.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.179297 systemd[1]: Finished systemd-modules-load.service.
May 17 00:42:29.189634 kernel: audit: type=1130 audit(1747442549.180:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.189212 systemd[1]: Starting systemd-sysctl.service...
May 17 00:42:29.196329 systemd[1]: Finished systemd-sysctl.service.
May 17 00:42:29.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.205598 kernel: audit: type=1130 audit(1747442549.197:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.218519 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:42:29.237517 kernel: iscsi: registered transport (tcp)
May 17 00:42:29.262699 kernel: iscsi: registered transport (qla4xxx)
May 17 00:42:29.262774 kernel: QLogic iSCSI HBA Driver
May 17 00:42:29.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.293782 systemd[1]: Finished dracut-cmdline.service.
May 17 00:42:29.295389 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:42:29.347550 kernel: raid6: avx512x4 gen() 18208 MB/s
May 17 00:42:29.365549 kernel: raid6: avx512x4 xor() 8222 MB/s
May 17 00:42:29.383542 kernel: raid6: avx512x2 gen() 18251 MB/s
May 17 00:42:29.401539 kernel: raid6: avx512x2 xor() 24375 MB/s
May 17 00:42:29.419550 kernel: raid6: avx512x1 gen() 18331 MB/s
May 17 00:42:29.437541 kernel: raid6: avx512x1 xor() 21911 MB/s
May 17 00:42:29.455545 kernel: raid6: avx2x4 gen() 18152 MB/s
May 17 00:42:29.473529 kernel: raid6: avx2x4 xor() 7616 MB/s
May 17 00:42:29.491542 kernel: raid6: avx2x2 gen() 18225 MB/s
May 17 00:42:29.509540 kernel: raid6: avx2x2 xor() 18225 MB/s
May 17 00:42:29.527532 kernel: raid6: avx2x1 gen() 14151 MB/s
May 17 00:42:29.545545 kernel: raid6: avx2x1 xor() 15728 MB/s
May 17 00:42:29.563534 kernel: raid6: sse2x4 gen() 9572 MB/s
May 17 00:42:29.581529 kernel: raid6: sse2x4 xor() 6012 MB/s
May 17 00:42:29.599530 kernel: raid6: sse2x2 gen() 10625 MB/s
May 17 00:42:29.617540 kernel: raid6: sse2x2 xor() 6260 MB/s
May 17 00:42:29.635529 kernel: raid6: sse2x1 gen() 9524 MB/s
May 17 00:42:29.653655 kernel: raid6: sse2x1 xor() 4864 MB/s
May 17 00:42:29.653698 kernel: raid6: using algorithm avx512x1 gen() 18331 MB/s
May 17 00:42:29.653729 kernel: raid6: .... xor() 21911 MB/s, rmw enabled
May 17 00:42:29.654738 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:42:29.669523 kernel: xor: automatically using best checksumming function avx
May 17 00:42:29.772521 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:42:29.782026 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:42:29.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.782000 audit: BPF prog-id=7 op=LOAD
May 17 00:42:29.782000 audit: BPF prog-id=8 op=LOAD
May 17 00:42:29.783562 systemd[1]: Starting systemd-udevd.service...
May 17 00:42:29.797421 systemd-udevd[384]: Using default interface naming scheme 'v252'.
May 17 00:42:29.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.802823 systemd[1]: Started systemd-udevd.service.
May 17 00:42:29.805375 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:42:29.824902 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation
May 17 00:42:29.857151 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:42:29.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.858658 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:42:29.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:29.900936 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:42:29.957521 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:42:29.993326 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 17 00:42:30.002468 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 17 00:42:30.002654 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:42:30.002676 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 17 00:42:30.002812 kernel: AES CTR mode by8 optimization enabled
May 17 00:42:30.002839 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:f5:13:0a:58:75
May 17 00:42:30.006748 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:42:30.013550 kernel: nvme nvme0: pci function 0000:00:04.0
May 17 00:42:30.016582 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:42:30.027530 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 17 00:42:30.035822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:42:30.035901 kernel: GPT:9289727 != 16777215
May 17 00:42:30.035920 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:42:30.039012 kernel: GPT:9289727 != 16777215
May 17 00:42:30.039080 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:42:30.041346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:42:30.107515 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (430)
May 17 00:42:30.149745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:42:30.156666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:42:30.161701 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:42:30.166917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:42:30.167626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:42:30.170816 systemd[1]: Starting disk-uuid.service...
May 17 00:42:30.177584 disk-uuid[592]: Primary Header is updated.
May 17 00:42:30.177584 disk-uuid[592]: Secondary Entries is updated.
May 17 00:42:30.177584 disk-uuid[592]: Secondary Header is updated.
May 17 00:42:30.184524 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:42:30.189968 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:42:31.203523 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:42:31.203579 disk-uuid[593]: The operation has completed successfully.
May 17 00:42:31.327170 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:42:31.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.327300 systemd[1]: Finished disk-uuid.service.
May 17 00:42:31.337749 systemd[1]: Starting verity-setup.service...
May 17 00:42:31.356521 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:42:31.442274 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:42:31.443545 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:42:31.446046 systemd[1]: Finished verity-setup.service.
May 17 00:42:31.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.537540 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:42:31.536980 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:42:31.537802 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:42:31.538830 systemd[1]: Starting ignition-setup.service...
May 17 00:42:31.544632 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:42:31.566886 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:42:31.566955 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:42:31.566974 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:42:31.589527 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:42:31.604020 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:42:31.613879 systemd[1]: Finished ignition-setup.service.
May 17 00:42:31.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.615634 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:42:31.632836 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:42:31.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.634000 audit: BPF prog-id=9 op=LOAD
May 17 00:42:31.635690 systemd[1]: Starting systemd-networkd.service...
May 17 00:42:31.658603 systemd-networkd[1021]: lo: Link UP
May 17 00:42:31.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.658615 systemd-networkd[1021]: lo: Gained carrier
May 17 00:42:31.659236 systemd-networkd[1021]: Enumeration completed
May 17 00:42:31.659354 systemd[1]: Started systemd-networkd.service.
May 17 00:42:31.659667 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:42:31.660958 systemd[1]: Reached target network.target.
May 17 00:42:31.663726 systemd[1]: Starting iscsiuio.service...
May 17 00:42:31.671107 systemd[1]: Started iscsiuio.service.
May 17 00:42:31.671288 systemd-networkd[1021]: eth0: Link UP
May 17 00:42:31.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.671294 systemd-networkd[1021]: eth0: Gained carrier
May 17 00:42:31.673220 systemd[1]: Starting iscsid.service...
May 17 00:42:31.679012 iscsid[1026]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:42:31.679012 iscsid[1026]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:42:31.679012 iscsid[1026]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:42:31.679012 iscsid[1026]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:42:31.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.686368 iscsid[1026]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:42:31.686368 iscsid[1026]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:42:31.681233 systemd[1]: Started iscsid.service.
May 17 00:42:31.685423 systemd-networkd[1021]: eth0: DHCPv4 address 172.31.31.72/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:42:31.688359 systemd[1]: Starting dracut-initqueue.service...
May 17 00:42:31.701836 systemd[1]: Finished dracut-initqueue.service.
May 17 00:42:31.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.702720 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:42:31.704301 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:42:31.705320 systemd[1]: Reached target remote-fs.target.
May 17 00:42:31.707147 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:42:31.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:31.718216 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:42:32.015856 ignition[1006]: Ignition 2.14.0
May 17 00:42:32.015869 ignition[1006]: Stage: fetch-offline
May 17 00:42:32.015983 ignition[1006]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.016015 ignition[1006]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.031341 ignition[1006]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.031977 ignition[1006]: Ignition finished successfully
May 17 00:42:32.033238 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:42:32.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.035396 systemd[1]: Starting ignition-fetch.service...
May 17 00:42:32.044271 ignition[1045]: Ignition 2.14.0
May 17 00:42:32.044283 ignition[1045]: Stage: fetch
May 17 00:42:32.044483 ignition[1045]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.044553 ignition[1045]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.055553 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.056350 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.067037 ignition[1045]: INFO : PUT result: OK
May 17 00:42:32.068832 ignition[1045]: DEBUG : parsed url from cmdline: ""
May 17 00:42:32.069797 ignition[1045]: INFO : no config URL provided
May 17 00:42:32.069797 ignition[1045]: INFO : reading system config file "/usr/lib/ignition/user.ign"
May 17 00:42:32.069797 ignition[1045]: INFO : no config at "/usr/lib/ignition/user.ign"
May 17 00:42:32.069797 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.069797 ignition[1045]: INFO : PUT result: OK
May 17 00:42:32.069797 ignition[1045]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 17 00:42:32.075279 ignition[1045]: INFO : GET result: OK
May 17 00:42:32.075279 ignition[1045]: DEBUG : parsing config with SHA512: 85d2c60d4b8888d3d72c266934441301ad9e5da4a963a8b88c468f12873f806ceff45ae0e80ad09b6c775609459e4980ec9960be1cb67ac1e5c912fa159bfee7
May 17 00:42:32.078612 unknown[1045]: fetched base config from "system"
May 17 00:42:32.078633 unknown[1045]: fetched base config from "system"
May 17 00:42:32.080268 ignition[1045]: fetch: fetch complete
May 17 00:42:32.078646 unknown[1045]: fetched user config from "aws"
May 17 00:42:32.080277 ignition[1045]: fetch: fetch passed
May 17 00:42:32.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.082482 systemd[1]: Finished ignition-fetch.service.
May 17 00:42:32.080341 ignition[1045]: Ignition finished successfully
May 17 00:42:32.084281 systemd[1]: Starting ignition-kargs.service...
May 17 00:42:32.095483 ignition[1051]: Ignition 2.14.0
May 17 00:42:32.096300 ignition[1051]: Stage: kargs
May 17 00:42:32.096536 ignition[1051]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.096572 ignition[1051]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.103244 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.104106 ignition[1051]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.104883 ignition[1051]: INFO : PUT result: OK
May 17 00:42:32.106802 ignition[1051]: kargs: kargs passed
May 17 00:42:32.106865 ignition[1051]: Ignition finished successfully
May 17 00:42:32.108476 systemd[1]: Finished ignition-kargs.service.
May 17 00:42:32.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.110508 systemd[1]: Starting ignition-disks.service...
May 17 00:42:32.119745 ignition[1057]: Ignition 2.14.0
May 17 00:42:32.119758 ignition[1057]: Stage: disks
May 17 00:42:32.119975 ignition[1057]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.120008 ignition[1057]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.127338 ignition[1057]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.128294 ignition[1057]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.128962 ignition[1057]: INFO : PUT result: OK
May 17 00:42:32.131845 ignition[1057]: disks: disks passed
May 17 00:42:32.131924 ignition[1057]: Ignition finished successfully
May 17 00:42:32.133535 systemd[1]: Finished ignition-disks.service.
May 17 00:42:32.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.134354 systemd[1]: Reached target initrd-root-device.target.
May 17 00:42:32.135314 systemd[1]: Reached target local-fs-pre.target.
May 17 00:42:32.136256 systemd[1]: Reached target local-fs.target.
May 17 00:42:32.137180 systemd[1]: Reached target sysinit.target.
May 17 00:42:32.138205 systemd[1]: Reached target basic.target.
May 17 00:42:32.140310 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:42:32.166403 systemd-fsck[1065]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:42:32.169412 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:42:32.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.171372 systemd[1]: Mounting sysroot.mount...
May 17 00:42:32.191519 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:42:32.192565 systemd[1]: Mounted sysroot.mount.
May 17 00:42:32.195651 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:42:32.199488 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:42:32.201106 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:42:32.201173 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:42:32.201222 systemd[1]: Reached target ignition-diskful.target.
May 17 00:42:32.207201 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:42:32.210848 systemd[1]: Starting initrd-setup-root.service...
May 17 00:42:32.216875 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:42:32.233398 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory
May 17 00:42:32.238265 initrd-setup-root[1102]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:42:32.243256 initrd-setup-root[1110]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:42:32.363563 systemd[1]: Finished initrd-setup-root.service.
May 17 00:42:32.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.365070 systemd[1]: Starting ignition-mount.service...
May 17 00:42:32.366419 systemd[1]: Starting sysroot-boot.service...
May 17 00:42:32.372379 bash[1127]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:42:32.382345 ignition[1128]: INFO : Ignition 2.14.0
May 17 00:42:32.382345 ignition[1128]: INFO : Stage: mount
May 17 00:42:32.384571 ignition[1128]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.384571 ignition[1128]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.392334 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.393942 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.395806 ignition[1128]: INFO : PUT result: OK
May 17 00:42:32.399897 ignition[1128]: INFO : mount: mount passed
May 17 00:42:32.401346 ignition[1128]: INFO : Ignition finished successfully
May 17 00:42:32.403744 systemd[1]: Finished ignition-mount.service.
May 17 00:42:32.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.411601 systemd[1]: Finished sysroot-boot.service.
May 17 00:42:32.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:32.471009 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:42:32.491514 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1137)
May 17 00:42:32.495175 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:42:32.495238 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:42:32.495250 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:42:32.504543 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:42:32.515376 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:42:32.516813 systemd[1]: Starting ignition-files.service...
May 17 00:42:32.533875 ignition[1157]: INFO : Ignition 2.14.0
May 17 00:42:32.533875 ignition[1157]: INFO : Stage: files
May 17 00:42:32.535204 ignition[1157]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:32.535204 ignition[1157]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:42:32.540899 ignition[1157]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:42:32.541685 ignition[1157]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:42:32.542348 ignition[1157]: INFO : PUT result: OK
May 17 00:42:32.545924 ignition[1157]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:42:32.552070 ignition[1157]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:42:32.552070 ignition[1157]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:42:32.565564 ignition[1157]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:42:32.566863 ignition[1157]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:42:32.568033 ignition[1157]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:42:32.567688 unknown[1157]: wrote ssh authorized keys file for user: core
May 17 00:42:32.569941 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:42:32.569941 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:42:32.569941 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:42:32.569941 ignition[1157]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:42:32.634093 ignition[1157]: INFO : GET result: OK
May 17 00:42:32.924666 systemd-networkd[1021]: eth0: Gained IPv6LL
May 17 00:42:33.261543 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:42:33.261543 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:42:33.264408 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:42:33.264408 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:42:33.264408 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:42:33.264408 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:42:33.264408 ignition[1157]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:42:33.274117 ignition[1157]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2812274198"
May 17 00:42:33.274117 ignition[1157]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2812274198": device or resource busy
May 17 00:42:33.274117 ignition[1157]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2812274198", trying btrfs: device or resource busy
May 17 00:42:33.274117 ignition[1157]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2812274198"
May 17 00:42:33.274117 ignition[1157]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2812274198"
May 17 00:42:33.285505 ignition[1157]: INFO : op(3): [started] unmounting "/mnt/oem2812274198"
May 17 00:42:33.287298 systemd[1]: mnt-oem2812274198.mount: Deactivated successfully.
May 17 00:42:33.288654 ignition[1157]: INFO : op(3): [finished] unmounting "/mnt/oem2812274198"
May 17 00:42:33.288654 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:42:33.288654 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:42:33.294243 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:42:33.294243 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:42:33.294243 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:42:33.294243 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:42:33.294243 ignition[1157]: INFO : GET
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 00:42:33.618675 ignition[1157]: INFO : GET result: OK May 17 00:42:33.745424 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:42:33.747176 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" May 17 00:42:33.747176 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:42:33.747176 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:42:33.747176 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:42:33.747176 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:42:33.747176 ignition[1157]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 17 00:42:33.759401 ignition[1157]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3765740149" May 17 00:42:33.759401 ignition[1157]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3765740149": device or resource busy May 17 00:42:33.759401 ignition[1157]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3765740149", trying btrfs: device or resource busy May 17 00:42:33.759401 ignition[1157]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3765740149" May 17 00:42:33.769904 ignition[1157]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3765740149" May 17 00:42:33.769904 ignition[1157]: INFO : op(6): [started] unmounting "/mnt/oem3765740149" May 17 00:42:33.769133 
systemd[1]: mnt-oem3765740149.mount: Deactivated successfully. May 17 00:42:33.774130 ignition[1157]: INFO : op(6): [finished] unmounting "/mnt/oem3765740149" May 17 00:42:33.774130 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:42:33.774130 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:42:33.774130 ignition[1157]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:42:34.451060 ignition[1157]: INFO : GET result: OK May 17 00:42:34.732689 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:42:34.732689 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" May 17 00:42:34.736239 ignition[1157]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 17 00:42:34.740557 ignition[1157]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4181352877" May 17 00:42:34.740557 ignition[1157]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4181352877": device or resource busy May 17 00:42:34.740557 ignition[1157]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4181352877", trying btrfs: device or resource busy May 17 00:42:34.740557 ignition[1157]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4181352877" May 17 00:42:34.740557 ignition[1157]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4181352877" May 17 00:42:34.753798 ignition[1157]: INFO : op(9): [started] unmounting "/mnt/oem4181352877" May 17 00:42:34.753798 ignition[1157]: INFO : op(9): 
[finished] unmounting "/mnt/oem4181352877" May 17 00:42:34.753798 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" May 17 00:42:34.753798 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" May 17 00:42:34.753798 ignition[1157]: INFO : oem config not found in "/usr/share/oem", looking on oem partition May 17 00:42:34.747899 systemd[1]: mnt-oem4181352877.mount: Deactivated successfully. May 17 00:42:34.771415 ignition[1157]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3542291007" May 17 00:42:34.774038 ignition[1157]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3542291007": device or resource busy May 17 00:42:34.774038 ignition[1157]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3542291007", trying btrfs: device or resource busy May 17 00:42:34.774038 ignition[1157]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3542291007" May 17 00:42:34.781520 ignition[1157]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3542291007" May 17 00:42:34.781520 ignition[1157]: INFO : op(c): [started] unmounting "/mnt/oem3542291007" May 17 00:42:34.781520 ignition[1157]: INFO : op(c): [finished] unmounting "/mnt/oem3542291007" May 17 00:42:34.781520 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(11): [started] processing unit "nvidia.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(11): [finished] processing unit "nvidia.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(12): [started] processing unit "coreos-metadata-sshkeys@.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(12): [finished] 
processing unit "coreos-metadata-sshkeys@.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(13): [started] processing unit "amazon-ssm-agent.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(13): op(14): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(13): op(14): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(13): [finished] processing unit "amazon-ssm-agent.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(15): [started] processing unit "containerd.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(15): [finished] processing unit "containerd.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(17): [started] processing unit "prepare-helm.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:42:34.781520 ignition[1157]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" May 17 00:42:34.847045 kernel: kauditd_printk_skb: 26 callbacks suppressed May 17 00:42:34.847083 kernel: audit: type=1130 audit(1747442554.813:37): pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.847105 kernel: audit: type=1130 audit(1747442554.834:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.847124 kernel: audit: type=1131 audit(1747442554.834:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.785216 systemd[1]: mnt-oem3542291007.mount: Deactivated successfully. May 17 00:42:34.853877 kernel: audit: type=1130 audit(1747442554.847:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:34.854013 ignition[1157]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:42:34.854013 ignition[1157]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:42:34.854013 ignition[1157]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:42:34.854013 ignition[1157]: INFO : files: files passed May 17 00:42:34.854013 ignition[1157]: INFO : Ignition finished successfully May 17 00:42:34.892047 kernel: audit: type=1130 audit(1747442554.879:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.892084 kernel: audit: type=1131 audit(1747442554.879:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:34.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.810915 systemd[1]: Finished ignition-files.service. May 17 00:42:34.821522 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:42:34.896848 initrd-setup-root-after-ignition[1182]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:42:34.825177 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:42:34.826382 systemd[1]: Starting ignition-quench.service... May 17 00:42:34.831169 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:42:34.831301 systemd[1]: Finished ignition-quench.service. May 17 00:42:34.840729 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:42:34.848165 systemd[1]: Reached target ignition-complete.target. May 17 00:42:34.855846 systemd[1]: Starting initrd-parse-etc.service... May 17 00:42:34.878173 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:42:34.878305 systemd[1]: Finished initrd-parse-etc.service. May 17 00:42:34.880428 systemd[1]: Reached target initrd-fs.target. May 17 00:42:34.920381 kernel: audit: type=1130 audit(1747442554.911:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:34.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.891355 systemd[1]: Reached target initrd.target. May 17 00:42:34.893004 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:42:34.894360 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:42:34.911034 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:42:34.913433 systemd[1]: Starting initrd-cleanup.service... May 17 00:42:34.929714 systemd[1]: Stopped target nss-lookup.target. May 17 00:42:34.930587 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:42:34.931870 systemd[1]: Stopped target timers.target. May 17 00:42:34.933067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:42:34.939449 kernel: audit: type=1131 audit(1747442554.933:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.933275 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:42:34.934606 systemd[1]: Stopped target initrd.target. May 17 00:42:34.940388 systemd[1]: Stopped target basic.target. May 17 00:42:34.941605 systemd[1]: Stopped target ignition-complete.target. May 17 00:42:34.942841 systemd[1]: Stopped target ignition-diskful.target. May 17 00:42:34.943997 systemd[1]: Stopped target initrd-root-device.target. May 17 00:42:34.945165 systemd[1]: Stopped target remote-fs.target. May 17 00:42:34.946439 systemd[1]: Stopped target remote-fs-pre.target. 
May 17 00:42:34.947647 systemd[1]: Stopped target sysinit.target. May 17 00:42:34.948809 systemd[1]: Stopped target local-fs.target. May 17 00:42:34.950104 systemd[1]: Stopped target local-fs-pre.target. May 17 00:42:34.951225 systemd[1]: Stopped target swap.target. May 17 00:42:34.958476 kernel: audit: type=1131 audit(1747442554.953:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.952292 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:42:34.952508 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:42:34.965724 kernel: audit: type=1131 audit(1747442554.960:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.953820 systemd[1]: Stopped target cryptsetup.target. May 17 00:42:34.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.959250 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:42:34.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:34.959461 systemd[1]: Stopped dracut-initqueue.service. May 17 00:42:34.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.960703 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:42:34.960912 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:42:34.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.985323 ignition[1195]: INFO : Ignition 2.14.0 May 17 00:42:34.985323 ignition[1195]: INFO : Stage: umount May 17 00:42:34.985323 ignition[1195]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:42:34.985323 ignition[1195]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b May 17 00:42:34.966717 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:42:34.966921 systemd[1]: Stopped ignition-files.service. May 17 00:42:34.969303 systemd[1]: Stopping ignition-mount.service... May 17 00:42:34.971828 systemd[1]: Stopping sysroot-boot.service... May 17 00:42:34.972912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:42:34.973165 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:42:34.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:34.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:42:34.974356 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:42:34.975822 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:42:34.981620 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:42:34.981745 systemd[1]: Finished initrd-cleanup.service. May 17 00:42:35.004203 ignition[1195]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:42:35.008419 ignition[1195]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:42:35.010546 ignition[1195]: INFO : PUT result: OK May 17 00:42:35.015541 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:42:35.017086 ignition[1195]: INFO : umount: umount passed May 17 00:42:35.018165 ignition[1195]: INFO : Ignition finished successfully May 17 00:42:35.018425 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:42:35.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.018561 systemd[1]: Stopped ignition-mount.service. May 17 00:42:35.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.019774 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:42:35.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.019840 systemd[1]: Stopped ignition-disks.service. May 17 00:42:35.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:35.020890 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:42:35.020951 systemd[1]: Stopped ignition-kargs.service. May 17 00:42:35.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.022180 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:42:35.022237 systemd[1]: Stopped ignition-fetch.service. May 17 00:42:35.023370 systemd[1]: Stopped target network.target. May 17 00:42:35.024446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:42:35.024535 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:42:35.025772 systemd[1]: Stopped target paths.target. May 17 00:42:35.026967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:42:35.030562 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:42:35.031485 systemd[1]: Stopped target slices.target. May 17 00:42:35.032620 systemd[1]: Stopped target sockets.target. May 17 00:42:35.033870 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:42:35.033914 systemd[1]: Closed iscsid.socket. May 17 00:42:35.034972 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:42:35.035012 systemd[1]: Closed iscsiuio.socket. May 17 00:42:35.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.037675 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:42:35.037752 systemd[1]: Stopped ignition-setup.service. May 17 00:42:35.038602 systemd[1]: Stopping systemd-networkd.service... May 17 00:42:35.040103 systemd[1]: Stopping systemd-resolved.service... 
May 17 00:42:35.043576 systemd-networkd[1021]: eth0: DHCPv6 lease lost May 17 00:42:35.044590 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:42:35.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.044687 systemd[1]: Stopped systemd-networkd.service. May 17 00:42:35.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.052000 audit: BPF prog-id=9 op=UNLOAD May 17 00:42:35.052000 audit: BPF prog-id=6 op=UNLOAD May 17 00:42:35.050601 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:42:35.050735 systemd[1]: Stopped systemd-resolved.service. May 17 00:42:35.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.052922 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:42:35.052974 systemd[1]: Closed systemd-networkd.socket. May 17 00:42:35.054860 systemd[1]: Stopping network-cleanup.service... May 17 00:42:35.056258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:42:35.056335 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:42:35.057121 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:42:35.057167 systemd[1]: Stopped systemd-sysctl.service. May 17 00:42:35.057868 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 17 00:42:35.057925 systemd[1]: Stopped systemd-modules-load.service. May 17 00:42:35.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.063661 systemd[1]: Stopping systemd-udevd.service... May 17 00:42:35.067046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:42:35.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.075335 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:42:35.075471 systemd[1]: Stopped network-cleanup.service. May 17 00:42:35.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.078101 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:42:35.078236 systemd[1]: Stopped systemd-udevd.service. May 17 00:42:35.079583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:42:35.079641 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:42:35.081938 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:42:35.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.081991 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:42:35.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:35.083136 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:42:35.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.083204 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:42:35.084304 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:42:35.084363 systemd[1]: Stopped dracut-cmdline.service. May 17 00:42:35.085463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:42:35.085785 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:42:35.087959 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:42:35.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.092589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:42:35.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.092686 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:42:35.096522 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:42:35.096605 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:42:35.098462 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:42:35.098545 systemd[1]: Stopped systemd-vconsole-setup.service. 
May 17 00:42:35.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:35.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:35.101026 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:42:35.101537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:42:35.101628 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:42:35.131426 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:42:35.131569 systemd[1]: Stopped sysroot-boot.service.
May 17 00:42:35.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:35.133110 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:42:35.134473 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:42:35.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:35.134617 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:42:35.137096 systemd[1]: Starting initrd-switch-root.service...
May 17 00:42:35.157471 systemd[1]: Switching root.
May 17 00:42:35.157000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:42:35.157000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:42:35.160000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:42:35.160000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:42:35.160000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:42:35.177957 iscsid[1026]: iscsid shutting down.
May 17 00:42:35.179190 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 17 00:42:35.179274 systemd-journald[185]: Journal stopped
May 17 00:42:39.708169 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:42:39.708258 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:42:39.708283 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:42:39.708305 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:42:39.708330 kernel: SELinux: policy capability open_perms=1
May 17 00:42:39.708350 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:42:39.708373 kernel: SELinux: policy capability always_check_network=0
May 17 00:42:39.708403 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:42:39.708430 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:42:39.708452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:42:39.708474 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:42:39.708508 systemd[1]: Successfully loaded SELinux policy in 72.037ms.
May 17 00:42:39.708539 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.391ms.
May 17 00:42:39.708560 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:42:39.708579 systemd[1]: Detected virtualization amazon.
May 17 00:42:39.708599 systemd[1]: Detected architecture x86-64.
May 17 00:42:39.708616 systemd[1]: Detected first boot.
May 17 00:42:39.708634 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:42:39.708654 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:42:39.708676 systemd[1]: Populated /etc with preset unit settings.
May 17 00:42:39.708702 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:42:39.708724 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:42:39.708746 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:42:39.708767 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:42:39.708787 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
May 17 00:42:39.708806 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:42:39.708829 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:42:39.708852 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 17 00:42:39.708869 systemd[1]: Created slice system-getty.slice.
May 17 00:42:39.708889 systemd[1]: Created slice system-modprobe.slice.
May 17 00:42:39.708910 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:42:39.708933 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:42:39.708954 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:42:39.708975 systemd[1]: Created slice user.slice.
May 17 00:42:39.708996 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:42:39.709016 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:42:39.709039 systemd[1]: Set up automount boot.automount.
May 17 00:42:39.709060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:42:39.709080 systemd[1]: Reached target integritysetup.target.
May 17 00:42:39.709101 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:42:39.709125 systemd[1]: Reached target remote-fs.target.
May 17 00:42:39.709146 systemd[1]: Reached target slices.target.
May 17 00:42:39.709166 systemd[1]: Reached target swap.target.
May 17 00:42:39.709187 systemd[1]: Reached target torcx.target.
May 17 00:42:39.709211 systemd[1]: Reached target veritysetup.target.
May 17 00:42:39.709232 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:42:39.709257 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:42:39.709279 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:42:39.709302 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:42:39.709334 systemd[1]: Listening on systemd-journald.socket.
May 17 00:42:39.709356 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:42:39.709376 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:42:39.709397 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:42:39.709418 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:42:39.709439 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:42:39.709460 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:42:39.709481 systemd[1]: Mounting media.mount...
May 17 00:42:39.709533 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:42:39.709555 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:42:39.709579 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:42:39.709611 systemd[1]: Mounting tmp.mount...
May 17 00:42:39.709632 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:42:39.709653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:42:39.709673 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:42:39.709695 systemd[1]: Starting modprobe@configfs.service...
May 17 00:42:39.709715 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:42:39.709734 systemd[1]: Starting modprobe@drm.service...
May 17 00:42:39.709750 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:42:39.709772 systemd[1]: Starting modprobe@fuse.service...
May 17 00:42:39.709792 systemd[1]: Starting modprobe@loop.service...
May 17 00:42:39.709811 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:42:39.709830 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:42:39.709848 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:42:39.709867 systemd[1]: Starting systemd-journald.service...
May 17 00:42:39.709885 systemd[1]: Starting systemd-modules-load.service...
May 17 00:42:39.709903 systemd[1]: Starting systemd-network-generator.service...
May 17 00:42:39.709922 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:42:39.709943 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:42:39.709962 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:42:39.709981 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:42:39.709999 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:42:39.710017 kernel: fuse: init (API version 7.34)
May 17 00:42:39.710035 systemd[1]: Mounted media.mount.
May 17 00:42:39.710054 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:42:39.710072 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:42:39.710090 systemd[1]: Mounted tmp.mount.
May 17 00:42:39.710110 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:42:39.710129 kernel: loop: module loaded
May 17 00:42:39.710146 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:42:39.710164 systemd[1]: Finished modprobe@configfs.service.
May 17 00:42:39.710182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:42:39.710200 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:42:39.710218 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:42:39.710236 systemd[1]: Finished modprobe@drm.service.
May 17 00:42:39.710259 systemd-journald[1355]: Journal started
May 17 00:42:39.710335 systemd-journald[1355]: Runtime Journal (/run/log/journal/ec2ac0ffbca18eba9be83b116bcb25dd) is 4.8M, max 38.3M, 33.5M free.
May 17 00:42:39.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.714891 systemd[1]: Started systemd-journald.service.
May 17 00:42:39.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.705000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:42:39.705000 audit[1355]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffffc6395a0 a2=4000 a3=7ffffc63963c items=0 ppid=1 pid=1355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:39.705000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:42:39.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.719259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:42:39.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.722853 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:42:39.724232 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:42:39.724451 systemd[1]: Finished modprobe@fuse.service.
May 17 00:42:39.725851 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:42:39.726863 systemd[1]: Finished modprobe@loop.service.
May 17 00:42:39.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.729074 systemd[1]: Finished systemd-modules-load.service.
May 17 00:42:39.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.732054 systemd[1]: Finished systemd-network-generator.service.
May 17 00:42:39.733517 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:42:39.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.736291 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:42:39.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.737755 systemd[1]: Reached target network-pre.target.
May 17 00:42:39.740183 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:42:39.742783 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:42:39.746489 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:42:39.749362 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:42:39.755672 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:42:39.756558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:42:39.759778 systemd[1]: Starting systemd-random-seed.service...
May 17 00:42:39.761344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:42:39.762937 systemd[1]: Starting systemd-sysctl.service...
May 17 00:42:39.769446 systemd[1]: Starting systemd-sysusers.service...
May 17 00:42:39.773022 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:42:39.777311 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:42:39.794382 systemd-journald[1355]: Time spent on flushing to /var/log/journal/ec2ac0ffbca18eba9be83b116bcb25dd is 67.850ms for 1146 entries.
May 17 00:42:39.794382 systemd-journald[1355]: System Journal (/var/log/journal/ec2ac0ffbca18eba9be83b116bcb25dd) is 8.0M, max 195.6M, 187.6M free.
May 17 00:42:39.878721 systemd-journald[1355]: Received client request to flush runtime journal.
May 17 00:42:39.878788 kernel: kauditd_printk_skb: 71 callbacks suppressed
May 17 00:42:39.878835 kernel: audit: type=1130 audit(1747442559.824:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.878870 kernel: audit: type=1130 audit(1747442559.855:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.797978 systemd[1]: Finished systemd-random-seed.service.
May 17 00:42:39.879309 udevadm[1395]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:42:39.798897 systemd[1]: Reached target first-boot-complete.target.
May 17 00:42:39.824190 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:42:39.833729 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:42:39.855025 systemd[1]: Finished systemd-sysctl.service.
May 17 00:42:39.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.881015 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:42:39.883341 systemd[1]: Finished systemd-sysusers.service.
May 17 00:42:39.890359 kernel: audit: type=1130 audit(1747442559.881:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.890477 kernel: audit: type=1130 audit(1747442559.887:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.890141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:42:39.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.976542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:42:39.983528 kernel: audit: type=1130 audit(1747442559.977:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.472736 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:42:40.474821 systemd[1]: Starting systemd-udevd.service...
May 17 00:42:40.478536 kernel: audit: type=1130 audit(1747442560.472:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.497367 systemd-udevd[1405]: Using default interface naming scheme 'v252'.
May 17 00:42:40.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.548683 systemd[1]: Started systemd-udevd.service.
May 17 00:42:40.551214 systemd[1]: Starting systemd-networkd.service...
May 17 00:42:40.555734 kernel: audit: type=1130 audit(1747442560.548:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.565547 systemd[1]: Starting systemd-userdbd.service...
May 17 00:42:40.596655 systemd[1]: Found device dev-ttyS0.device.
May 17 00:42:40.604242 (udev-worker)[1415]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:42:40.616844 systemd[1]: Started systemd-userdbd.service.
May 17 00:42:40.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.623577 kernel: audit: type=1130 audit(1747442560.617:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:40.648000 audit[1411]: AVC avc: denied { confidentiality } for pid=1411 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:42:40.657517 kernel: audit: type=1400 audit(1747442560.648:117): avc: denied { confidentiality } for pid=1411 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:42:40.648000 audit[1411]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564c63774380 a1=338ac a2=7f4debcdcbc5 a3=5 items=110 ppid=1405 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:40.667089 kernel: audit: type=1300 audit(1747442560.648:117): arch=c000003e syscall=175 success=yes exit=0 a0=564c63774380 a1=338ac a2=7f4debcdcbc5 a3=5 items=110 ppid=1405 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:40.648000 audit: CWD cwd="/"
May 17 00:42:40.648000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=1 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=2 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=3 name=(null) inode=14793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=4 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=5 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=6 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=7 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=8 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=9 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=10 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=11 name=(null) inode=14797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=12 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=13 name=(null) inode=14798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=14 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=15 name=(null) inode=14799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=16 name=(null) inode=14795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=17 name=(null) inode=14800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=18 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=19 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=20 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=21 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=22 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=23 name=(null) inode=14803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=24 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=25 name=(null) inode=14804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=26 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=27 name=(null) inode=14805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=28 name=(null) inode=14801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=29 name=(null) inode=14806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=30 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=31 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=32 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=33 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=34 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=35 name=(null) inode=14809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=36 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=37 name=(null) inode=14810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=38 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=39 name=(null) inode=14811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=40 name=(null) inode=14807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=41 name=(null) inode=14812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=42 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=43 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=44 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:42:40.648000 audit: PATH item=45 name=(null) inode=14814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=46 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=47 name=(null) inode=14815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=48 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=49 name=(null) inode=14816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=50 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=51 name=(null) inode=14817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=52 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=53 name=(null) inode=14818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:42:40.648000 audit: PATH item=55 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=56 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=57 name=(null) inode=14820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=58 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=59 name=(null) inode=14821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=60 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=61 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=62 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=63 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=64 
name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=65 name=(null) inode=14824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=66 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=67 name=(null) inode=14825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=68 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=69 name=(null) inode=14826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=70 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=71 name=(null) inode=14827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=72 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=73 name=(null) inode=14828 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=74 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=75 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=76 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=77 name=(null) inode=14830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=78 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=79 name=(null) inode=14831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=80 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=81 name=(null) inode=14832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=82 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=83 name=(null) inode=14833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=84 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=85 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=86 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=87 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=88 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=89 name=(null) inode=14836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=90 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=91 name=(null) inode=14837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=92 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=93 name=(null) inode=14838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=94 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=95 name=(null) inode=14839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=96 name=(null) inode=14819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=97 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.676508 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:42:40.648000 audit: PATH item=98 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=99 name=(null) inode=14841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=100 name=(null) inode=14840 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=101 name=(null) inode=14842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=102 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=103 name=(null) inode=14843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=104 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=105 name=(null) inode=14844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=106 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=107 name=(null) inode=14845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PATH item=109 name=(null) inode=14846 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.648000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:42:40.715648 kernel: ACPI: button: Power Button [PWRF] May 17 00:42:40.724150 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:42:40.734116 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 May 17 00:42:40.734158 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:42:40.740666 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:42:40.742631 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:42:40.743969 systemd-networkd[1409]: lo: Link UP May 17 00:42:40.743976 systemd-networkd[1409]: lo: Gained carrier May 17 00:42:40.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.745139 systemd-networkd[1409]: Enumeration completed May 17 00:42:40.745261 systemd[1]: Started systemd-networkd.service. May 17 00:42:40.747442 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:42:40.749680 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:42:40.754256 systemd-networkd[1409]: eth0: Link UP May 17 00:42:40.754510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:40.754565 systemd-networkd[1409]: eth0: Gained carrier May 17 00:42:40.764712 systemd-networkd[1409]: eth0: DHCPv4 address 172.31.31.72/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:42:40.887383 systemd[1]: Finished systemd-udev-settle.service. 
May 17 00:42:40.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.893407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:42:40.895738 systemd[1]: Starting lvm2-activation-early.service... May 17 00:42:40.952604 lvm[1520]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:42:40.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.982930 systemd[1]: Finished lvm2-activation-early.service. May 17 00:42:40.983761 systemd[1]: Reached target cryptsetup.target. May 17 00:42:40.985794 systemd[1]: Starting lvm2-activation.service... May 17 00:42:40.991584 lvm[1522]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:42:41.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.014974 systemd[1]: Finished lvm2-activation.service. May 17 00:42:41.015769 systemd[1]: Reached target local-fs-pre.target. May 17 00:42:41.016411 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:42:41.016433 systemd[1]: Reached target local-fs.target. May 17 00:42:41.017028 systemd[1]: Reached target machines.target. May 17 00:42:41.018963 systemd[1]: Starting ldconfig.service... May 17 00:42:41.020786 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:42:41.020868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:41.022433 systemd[1]: Starting systemd-boot-update.service... May 17 00:42:41.024681 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:42:41.026782 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:42:41.028978 systemd[1]: Starting systemd-sysext.service... May 17 00:42:41.035026 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1525 (bootctl) May 17 00:42:41.036312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:42:41.048690 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:42:41.054293 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:42:41.054559 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:42:41.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.056986 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:42:41.070716 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:42:41.201379 systemd-fsck[1537]: fsck.fat 4.2 (2021-01-31) May 17 00:42:41.201379 systemd-fsck[1537]: /dev/nvme0n1p1: 790 files, 120726/258078 clusters May 17 00:42:41.204117 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:42:41.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.206592 systemd[1]: Mounting boot.mount... 
May 17 00:42:41.232977 systemd[1]: Mounted boot.mount. May 17 00:42:41.243790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:42:41.271531 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:42:41.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.274576 systemd[1]: Finished systemd-boot-update.service. May 17 00:42:41.296767 (sd-sysext)[1555]: Using extensions 'kubernetes'. May 17 00:42:41.297692 (sd-sysext)[1555]: Merged extensions into '/usr'. May 17 00:42:41.328361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:42:41.329205 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:42:41.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.335064 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:41.337026 systemd[1]: Mounting usr-share-oem.mount... May 17 00:42:41.339830 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:42:41.342257 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:42:41.346366 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:41.349223 systemd[1]: Starting modprobe@loop.service... May 17 00:42:41.355541 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:41.355788 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:42:41.355978 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:41.361352 systemd[1]: Mounted usr-share-oem.mount. May 17 00:42:41.363539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:41.363942 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:42:41.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.365619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:41.365977 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:41.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.367761 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:42:41.368112 systemd[1]: Finished modprobe@loop.service. May 17 00:42:41.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:41.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.371292 systemd[1]: Finished systemd-sysext.service. May 17 00:42:41.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.380732 systemd[1]: Starting ensure-sysext.service... May 17 00:42:41.381667 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:42:41.381891 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:42:41.384507 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:42:41.390164 systemd[1]: Reloading. May 17 00:42:41.408138 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:42:41.410308 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:42:41.414129 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:42:41.482969 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2025-05-17T00:42:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:41.483008 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2025-05-17T00:42:41Z" level=info msg="torcx already run" May 17 00:42:41.661049 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:41.661073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:41.691913 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:42:41.768972 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:42:41.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:41.774718 systemd[1]: Starting audit-rules.service... May 17 00:42:41.777455 systemd[1]: Starting clean-ca-certificates.service... May 17 00:42:41.780522 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:42:41.789285 systemd[1]: Starting systemd-resolved.service... May 17 00:42:41.796704 systemd[1]: Starting systemd-timesyncd.service... May 17 00:42:41.799435 systemd[1]: Starting systemd-update-utmp.service... May 17 00:42:41.805042 systemd[1]: Finished clean-ca-certificates.service. 
May 17 00:42:41.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.812568 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:42:41.815000 audit[1661]: SYSTEM_BOOT pid=1661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.822672 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:42:41.824845 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:42:41.828527 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:42:41.832540 systemd[1]: Starting modprobe@loop.service...
May 17 00:42:41.833893 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:42:41.834201 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:42:41.834467 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:42:41.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.836953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:42:41.837217 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:42:41.839440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:42:41.839678 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:42:41.845427 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:42:41.849856 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:42:41.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.857628 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:42:41.859703 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:42:41.862270 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:42:41.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.863059 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:42:41.863270 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:42:41.863433 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:42:41.865065 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:42:41.865318 systemd[1]: Finished modprobe@loop.service.
May 17 00:42:41.869029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:42:41.869254 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:42:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.871298 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:42:41.877858 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:42:41.879954 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:42:41.883337 systemd[1]: Starting modprobe@drm.service...
May 17 00:42:41.886346 systemd[1]: Starting modprobe@loop.service...
May 17 00:42:41.888763 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:42:41.889000 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:42:41.889227 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:42:41.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.891713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:42:41.892332 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:42:41.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.894844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:42:41.895072 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:42:41.896412 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:42:41.898109 systemd[1]: Finished ensure-sysext.service.
May 17 00:42:41.910936 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:42:41.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.911173 systemd[1]: Finished modprobe@drm.service.
May 17 00:42:41.912245 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:42:41.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.912470 systemd[1]: Finished modprobe@loop.service.
May 17 00:42:41.913112 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:42:41.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:41.964075 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:42:42.012000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 00:42:42.012000 audit[1697]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc95439c0 a2=420 a3=0 items=0 ppid=1654 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:42.012000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 00:42:42.013591 augenrules[1697]: No rules
May 17 00:42:42.015188 systemd[1]: Finished audit-rules.service.
May 17 00:42:42.032546 systemd-resolved[1657]: Positive Trust Anchors:
May 17 00:42:42.032567 systemd-resolved[1657]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:42:42.032609 systemd-resolved[1657]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:42:42.055733 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:42:42.055764 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:42:42.066120 systemd[1]: Started systemd-timesyncd.service.
May 17 00:42:42.066583 systemd[1]: Reached target time-set.target.
May 17 00:42:42.074482 systemd-resolved[1657]: Defaulting to hostname 'linux'.
May 17 00:42:42.078581 systemd[1]: Started systemd-resolved.service.
May 17 00:42:42.079021 systemd[1]: Reached target network.target.
May 17 00:42:42.079448 systemd[1]: Reached target nss-lookup.target.
May 17 00:42:42.111844 ldconfig[1524]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:42:42.121210 systemd[1]: Finished ldconfig.service.
May 17 00:42:42.123378 systemd[1]: Starting systemd-update-done.service...
May 17 00:42:42.132313 systemd[1]: Finished systemd-update-done.service.
May 17 00:42:42.133118 systemd[1]: Reached target sysinit.target.
May 17 00:42:42.133927 systemd[1]: Started motdgen.path.
May 17 00:42:42.134541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 00:42:42.135296 systemd[1]: Started logrotate.timer.
May 17 00:42:42.136608 systemd[1]: Started mdadm.timer.
May 17 00:42:42.137321 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 00:42:42.138143 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:42:42.138192 systemd[1]: Reached target paths.target.
May 17 00:42:42.138633 systemd[1]: Reached target timers.target.
May 17 00:42:42.139323 systemd[1]: Listening on dbus.socket.
May 17 00:42:42.141059 systemd[1]: Starting docker.socket...
May 17 00:42:42.143944 systemd[1]: Listening on sshd.socket.
May 17 00:42:42.144822 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:42:42.145875 systemd[1]: Listening on docker.socket.
May 17 00:42:42.146587 systemd[1]: Reached target sockets.target.
May 17 00:42:42.147233 systemd[1]: Reached target basic.target.
May 17 00:42:42.148065 systemd[1]: System is tainted: cgroupsv1
May 17 00:42:42.148225 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:42:42.148268 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:42:42.149588 systemd[1]: Starting containerd.service...
May 17 00:42:42.151541 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 17 00:42:42.154066 systemd[1]: Starting dbus.service...
May 17 00:42:42.156474 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 00:42:42.177776 jq[1713]: false
May 17 00:42:42.164223 systemd[1]: Starting extend-filesystems.service...
May 17 00:42:42.165221 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 00:42:42.168064 systemd[1]: Starting motdgen.service...
May 17 00:42:42.171898 systemd[1]: Starting prepare-helm.service...
May 17 00:42:42.174955 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 00:42:42.178410 systemd[1]: Starting sshd-keygen.service...
May 17 00:42:42.184744 systemd[1]: Starting systemd-logind.service...
May 17 00:42:42.187622 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:42:42.187730 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:42:42.192194 systemd[1]: Starting update-engine.service...
May 17 00:42:42.196905 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 00:42:42.200247 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:42:42.200613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 00:42:42.212075 systemd[1]: Created slice system-sshd.slice.
May 17 00:42:42.215108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:42:42.215432 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 17 00:42:42.238126 jq[1723]: true
May 17 00:42:42.258833 tar[1725]: linux-amd64/helm
May 17 00:42:42.259391 jq[1734]: true
May 17 00:42:42.290947 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:42:42.291277 systemd[1]: Finished motdgen.service.
May 17 00:42:42.315904 env[1729]: time="2025-05-17T00:42:42.315833828Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 00:42:42.316187 dbus-daemon[1712]: [system] SELinux support is enabled
May 17 00:42:42.316759 systemd[1]: Started dbus.service.
May 17 00:42:42.320392 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:42:42.320425 systemd[1]: Reached target system-config.target.
May 17 00:42:42.321086 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:42:42.321113 systemd[1]: Reached target user-config.target.
May 17 00:42:42.330063 dbus-daemon[1712]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1409 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 17 00:42:42.335605 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:42:42.343873 extend-filesystems[1714]: Found loop1
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p1
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p2
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p3
May 17 00:42:42.343873 extend-filesystems[1714]: Found usr
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p4
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p6
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p7
May 17 00:42:42.343873 extend-filesystems[1714]: Found nvme0n1p9
May 17 00:42:42.343873 extend-filesystems[1714]: Checking size of /dev/nvme0n1p9
May 17 00:42:42.342335 systemd[1]: Starting systemd-hostnamed.service...
May 17 00:42:43.281376 systemd-resolved[1657]: Clock change detected. Flushing caches.
May 17 00:42:43.281624 systemd-timesyncd[1659]: Contacted time server 45.61.187.39:123 (0.flatcar.pool.ntp.org).
May 17 00:42:43.281696 systemd-timesyncd[1659]: Initial clock synchronization to Sat 2025-05-17 00:42:43.281321 UTC.
May 17 00:42:43.317200 extend-filesystems[1714]: Resized partition /dev/nvme0n1p9
May 17 00:42:43.339855 extend-filesystems[1772]: resize2fs 1.46.5 (30-Dec-2021)
May 17 00:42:43.348415 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 17 00:42:43.365467 update_engine[1722]: I0517 00:42:43.364923 1722 main.cc:92] Flatcar Update Engine starting
May 17 00:42:43.370479 systemd[1]: Started update-engine.service.
May 17 00:42:43.375269 update_engine[1722]: I0517 00:42:43.370537 1722 update_check_scheduler.cc:74] Next update check in 6m48s
May 17 00:42:43.374017 systemd[1]: Started locksmithd.service.
May 17 00:42:43.436386 env[1729]: time="2025-05-17T00:42:43.436333192Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:42:43.441920 env[1729]: time="2025-05-17T00:42:43.441877689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.443421 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 17 00:42:43.445642 env[1729]: time="2025-05-17T00:42:43.445587247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:42:43.445786 env[1729]: time="2025-05-17T00:42:43.445768729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.455346 env[1729]: time="2025-05-17T00:42:43.455295078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:42:43.455346 env[1729]: time="2025-05-17T00:42:43.455341536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.455502 env[1729]: time="2025-05-17T00:42:43.455365348Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:42:43.455502 env[1729]: time="2025-05-17T00:42:43.455379763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.455592 env[1729]: time="2025-05-17T00:42:43.455512050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.455852 env[1729]: time="2025-05-17T00:42:43.455824937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:42:43.456123 env[1729]: time="2025-05-17T00:42:43.456068378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:42:43.456123 env[1729]: time="2025-05-17T00:42:43.456100189Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:42:43.456222 env[1729]: time="2025-05-17T00:42:43.456171711Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:42:43.456222 env[1729]: time="2025-05-17T00:42:43.456191355Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:42:43.460156 extend-filesystems[1772]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 00:42:43.460156 extend-filesystems[1772]: old_desc_blocks = 1, new_desc_blocks = 1
May 17 00:42:43.460156 extend-filesystems[1772]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 17 00:42:43.471729 extend-filesystems[1714]: Resized filesystem in /dev/nvme0n1p9
May 17 00:42:43.473122 bash[1782]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:42:43.461153 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:42:43.461521 systemd[1]: Finished extend-filesystems.service.
May 17 00:42:43.463933 systemd-logind[1721]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:42:43.463960 systemd-logind[1721]: Watching system buttons on /dev/input/event3 (Sleep Button)
May 17 00:42:43.463987 systemd-logind[1721]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:42:43.467086 systemd-logind[1721]: New seat seat0.
May 17 00:42:43.469444 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:42:43.483093 systemd[1]: Started systemd-logind.service.
May 17 00:42:43.486383 env[1729]: time="2025-05-17T00:42:43.486298914Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:42:43.486383 env[1729]: time="2025-05-17T00:42:43.486417818Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486438540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486493339Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486517012Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486535684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486571762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486592109Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486620 env[1729]: time="2025-05-17T00:42:43.486611960Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486864 env[1729]: time="2025-05-17T00:42:43.486647364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486864 env[1729]: time="2025-05-17T00:42:43.486669560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:42:43.486864 env[1729]: time="2025-05-17T00:42:43.486690088Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:42:43.486976 env[1729]: time="2025-05-17T00:42:43.486862466Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:42:43.487024 env[1729]: time="2025-05-17T00:42:43.487007835Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:42:43.487635 env[1729]: time="2025-05-17T00:42:43.487610376Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:42:43.487719 env[1729]: time="2025-05-17T00:42:43.487664487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487719 env[1729]: time="2025-05-17T00:42:43.487687288Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:42:43.487800 env[1729]: time="2025-05-17T00:42:43.487764684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487842 env[1729]: time="2025-05-17T00:42:43.487787837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487842 env[1729]: time="2025-05-17T00:42:43.487821408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487928 env[1729]: time="2025-05-17T00:42:43.487841931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487928 env[1729]: time="2025-05-17T00:42:43.487861081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487928 env[1729]: time="2025-05-17T00:42:43.487894334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:42:43.487928 env[1729]: time="2025-05-17T00:42:43.487912875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488067 env[1729]: time="2025-05-17T00:42:43.487931873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488067 env[1729]: time="2025-05-17T00:42:43.487969822Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:42:43.488184 env[1729]: time="2025-05-17T00:42:43.488164401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488247 env[1729]: time="2025-05-17T00:42:43.488207601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488247 env[1729]: time="2025-05-17T00:42:43.488228790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488325 env[1729]: time="2025-05-17T00:42:43.488246891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:42:43.488325 env[1729]: time="2025-05-17T00:42:43.488284929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:42:43.488325 env[1729]: time="2025-05-17T00:42:43.488302620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:42:43.488600 env[1729]: time="2025-05-17T00:42:43.488329404Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:42:43.488600 env[1729]: time="2025-05-17T00:42:43.488388830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:42:43.488840 env[1729]: time="2025-05-17T00:42:43.488755929Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:42:43.492117 env[1729]: time="2025-05-17T00:42:43.488858753Z" level=info msg="Connect containerd service"
May 17 00:42:43.492117 env[1729]: time="2025-05-17T00:42:43.488916026Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500493359Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500731075Z" level=info msg="Start subscribing containerd event"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500803802Z" level=info msg="Start recovering state"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500897921Z" level=info msg="Start event monitor"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500924943Z" level=info msg="Start snapshots syncer"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500953669Z" level=info msg="Start cni network conf syncer for default"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.500963980Z" level=info msg="Start streaming server"
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.501413561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.501510485Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:42:43.502417 env[1729]: time="2025-05-17T00:42:43.501721009Z" level=info msg="containerd successfully booted in 0.258977s"
May 17 00:42:43.501772 systemd[1]: Started containerd.service.
May 17 00:42:43.580704 systemd-networkd[1409]: eth0: Gained IPv6LL
May 17 00:42:43.584058 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:42:43.585186 systemd[1]: Reached target network-online.target.
May 17 00:42:43.588042 systemd[1]: Started amazon-ssm-agent.service.
May 17 00:42:43.592525 systemd[1]: Starting kubelet.service...
May 17 00:42:43.597578 systemd[1]: Started nvidia.service.
May 17 00:42:43.819579 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 17 00:42:43.820639 systemd[1]: Started systemd-hostnamed.service.
May 17 00:42:43.824280 dbus-daemon[1712]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1757 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 17 00:42:43.829209 systemd[1]: Starting polkit.service...
May 17 00:42:43.853776 amazon-ssm-agent[1805]: 2025/05/17 00:42:43 Failed to load instance info from vault. RegistrationKey does not exist.
May 17 00:42:43.854032 polkitd[1850]: Started polkitd version 121
May 17 00:42:43.862235 amazon-ssm-agent[1805]: Initializing new seelog logger
May 17 00:42:43.877128 amazon-ssm-agent[1805]: New Seelog Logger Creation Complete
May 17 00:42:43.881162 coreos-metadata[1710]: May 17 00:42:43.878 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:42:43.882055 amazon-ssm-agent[1805]: 2025/05/17 00:42:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:42:43.882176 amazon-ssm-agent[1805]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:42:43.882732 amazon-ssm-agent[1805]: 2025/05/17 00:42:43 processing appconfig overrides
May 17 00:42:43.885145 polkitd[1850]: Loading rules from directory /etc/polkit-1/rules.d
May 17 00:42:43.885344 polkitd[1850]: Loading rules from directory /usr/share/polkit-1/rules.d
May 17 00:42:43.888932 polkitd[1850]: Finished loading, compiling and executing 2 rules
May 17 00:42:43.889681 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 17 00:42:43.889864 systemd[1]: Started polkit.service.
May 17 00:42:43.891991 polkitd[1850]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 17 00:42:43.893596 coreos-metadata[1710]: May 17 00:42:43.893 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
May 17 00:42:43.895544 coreos-metadata[1710]: May 17 00:42:43.895 INFO Fetch successful
May 17 00:42:43.895544 coreos-metadata[1710]: May 17 00:42:43.895 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
May 17 00:42:43.897466 coreos-metadata[1710]: May 17 00:42:43.897 INFO Fetch successful
May 17 00:42:43.905705 unknown[1710]: wrote ssh authorized keys file for user: core
May 17 00:42:43.913222 systemd-hostnamed[1757]: Hostname set to (transient)
May 17 00:42:43.913343 systemd-resolved[1657]: System hostname changed to 'ip-172-31-31-72'.
May 17 00:42:43.955896 update-ssh-keys[1868]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:42:43.956736 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
May 17 00:42:44.072599 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:42:44.435084 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Create new startup processor
May 17 00:42:44.435532 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [LongRunningPluginsManager] registered plugins: {}
May 17 00:42:44.435666 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing bookkeeping folders
May 17 00:42:44.435749 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO removing the completed state files
May 17 00:42:44.435845 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing bookkeeping folders for long running plugins
May 17 00:42:44.435935 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
May 17 00:42:44.436026 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing healthcheck folders for long running plugins
May 17 00:42:44.436108 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing locations for inventory plugin
May 17 00:42:44.436229 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing default location for custom inventory
May 17 00:42:44.436308 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing default location for file inventory
May 17 00:42:44.436389 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Initializing default location for role inventory
May 17 00:42:44.436481 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Init the cloudwatchlogs publisher
May 17 00:42:44.436566 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:softwareInventory
May 17 00:42:44.436651 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:configureDocker
May 17 00:42:44.436736 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:configurePackage
May 17 00:42:44.436820 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:downloadContent
May 17 00:42:44.436897 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:runDocument
May 17 00:42:44.436981 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:runPowerShellScript
May 17 00:42:44.437070 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:updateSsmAgent
May 17 00:42:44.437162 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:runDockerAction
May 17 00:42:44.437259 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform independent plugin aws:refreshAssociation
May 17 00:42:44.437358 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Successfully loaded platform dependent plugin aws:runShellScript
May 17 00:42:44.437466 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
May 17 00:42:44.437571 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO OS: linux, Arch: amd64
May 17 00:42:44.448272 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] Starting document processing engine...
May 17 00:42:44.449785 amazon-ssm-agent[1805]: datastore file /var/lib/amazon/ssm/i-0bdcacd56876f6ec3/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
May 17 00:42:44.548520 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [EngineProcessor] Starting
May 17 00:42:44.642797 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
May 17 00:42:44.685225 tar[1725]: linux-amd64/LICENSE
May 17 00:42:44.685811 tar[1725]: linux-amd64/README.md
May 17 00:42:44.700447 systemd[1]: Finished prepare-helm.service.
May 17 00:42:44.737294 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [OfflineService] Starting document processing engine...
May 17 00:42:44.831999 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [OfflineService] [EngineProcessor] Starting
May 17 00:42:44.904905 locksmithd[1783]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:42:44.926906 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [OfflineService] [EngineProcessor] Initial processing
May 17 00:42:45.022085 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [LongRunningPluginsManager] starting long running plugin manager
May 17 00:42:45.117289 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
May 17 00:42:45.213576 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [HealthCheck] HealthCheck reporting agent health.
May 17 00:42:45.309318 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [OfflineService] Starting message polling
May 17 00:42:45.405289 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [OfflineService] Starting send replies to MDS
May 17 00:42:45.501351 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] Starting message polling
May 17 00:42:45.597685 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] Starting send replies to MDS
May 17 00:42:45.627837 sshd_keygen[1754]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:42:45.655229 systemd[1]: Finished sshd-keygen.service.
May 17 00:42:45.658435 systemd[1]: Starting issuegen.service...
May 17 00:42:45.661255 systemd[1]: Started sshd@0-172.31.31.72:22-139.178.68.195:37280.service.
May 17 00:42:45.670751 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:42:45.671043 systemd[1]: Finished issuegen.service.
May 17 00:42:45.673635 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:42:45.686313 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:42:45.689422 systemd[1]: Started getty@tty1.service.
May 17 00:42:45.693461 systemd[1]: Started serial-getty@ttyS0.service.
May 17 00:42:45.695444 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [instanceID=i-0bdcacd56876f6ec3] Starting association polling
May 17 00:42:45.695058 systemd[1]: Reached target getty.target.
May 17 00:42:45.791450 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
May 17 00:42:45.864323 sshd[1931]: Accepted publickey for core from 139.178.68.195 port 37280 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:42:45.868356 sshd[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:45.884857 systemd[1]: Created slice user-500.slice.
May 17 00:42:45.888071 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:42:45.889475 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [Association] Launching response handler
May 17 00:42:45.893720 systemd-logind[1721]: New session 1 of user core.
May 17 00:42:45.906816 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:42:45.911495 systemd[1]: Starting user@500.service...
May 17 00:42:45.918659 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:45.986096 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
May 17 00:42:46.029263 systemd[1943]: Queued start job for default target default.target.
May 17 00:42:46.030276 systemd[1943]: Reached target paths.target.
May 17 00:42:46.030299 systemd[1943]: Reached target sockets.target.
May 17 00:42:46.030312 systemd[1943]: Reached target timers.target.
May 17 00:42:46.030325 systemd[1943]: Reached target basic.target.
May 17 00:42:46.030375 systemd[1943]: Reached target default.target.
May 17 00:42:46.030459 systemd[1943]: Startup finished in 103ms.
May 17 00:42:46.030574 systemd[1]: Started user@500.service.
May 17 00:42:46.032537 systemd[1]: Started session-1.scope.
May 17 00:42:46.083339 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
May 17 00:42:46.173698 systemd[1]: Started sshd@1-172.31.31.72:22-139.178.68.195:56156.service.
May 17 00:42:46.182378 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
May 17 00:42:46.250161 systemd[1]: Started kubelet.service.
May 17 00:42:46.251263 systemd[1]: Reached target multi-user.target.
May 17 00:42:46.253527 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:42:46.264947 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:42:46.265301 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:42:46.267631 systemd[1]: Startup finished in 7.464s (kernel) + 9.692s (userspace) = 17.157s.
May 17 00:42:46.279987 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] Starting session document processing engine...
May 17 00:42:46.351781 sshd[1952]: Accepted publickey for core from 139.178.68.195 port 56156 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:42:46.351637 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:46.364097 systemd-logind[1721]: New session 2 of user core.
May 17 00:42:46.364439 systemd[1]: Started session-2.scope.
May 17 00:42:46.380776 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] [EngineProcessor] Starting
May 17 00:42:46.476215 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
May 17 00:42:46.496426 sshd[1952]: pam_unix(sshd:session): session closed for user core
May 17 00:42:46.499491 systemd[1]: sshd@1-172.31.31.72:22-139.178.68.195:56156.service: Deactivated successfully.
May 17 00:42:46.500797 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:42:46.501530 systemd-logind[1721]: Session 2 logged out. Waiting for processes to exit.
May 17 00:42:46.502559 systemd-logind[1721]: Removed session 2.
May 17 00:42:46.520331 systemd[1]: Started sshd@2-172.31.31.72:22-139.178.68.195:56172.service.
May 17 00:42:46.574388 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0bdcacd56876f6ec3, requestId: 1db3437b-5759-41a0-8f3a-0fc06d12d910
May 17 00:42:46.672828 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] listening reply.
May 17 00:42:46.683599 sshd[1968]: Accepted publickey for core from 139.178.68.195 port 56172 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:42:46.684634 sshd[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:46.689455 systemd-logind[1721]: New session 3 of user core.
May 17 00:42:46.690268 systemd[1]: Started session-3.scope.
May 17 00:42:46.771595 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
May 17 00:42:46.815266 sshd[1968]: pam_unix(sshd:session): session closed for user core
May 17 00:42:46.817960 systemd[1]: sshd@2-172.31.31.72:22-139.178.68.195:56172.service: Deactivated successfully.
May 17 00:42:46.818713 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:42:46.819880 systemd-logind[1721]: Session 3 logged out. Waiting for processes to exit.
May 17 00:42:46.820770 systemd-logind[1721]: Removed session 3.
May 17 00:42:46.838143 systemd[1]: Started sshd@3-172.31.31.72:22-139.178.68.195:56178.service.
May 17 00:42:46.870337 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [StartupProcessor] Executing startup processor tasks
May 17 00:42:46.969359 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
May 17 00:42:46.996961 sshd[1979]: Accepted publickey for core from 139.178.68.195 port 56178 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:42:46.998945 sshd[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:47.005944 systemd-logind[1721]: New session 4 of user core.
May 17 00:42:47.006421 systemd[1]: Started session-4.scope.
May 17 00:42:47.068619 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
May 17 00:42:47.132164 sshd[1979]: pam_unix(sshd:session): session closed for user core
May 17 00:42:47.135445 systemd[1]: sshd@3-172.31.31.72:22-139.178.68.195:56178.service: Deactivated successfully.
May 17 00:42:47.136581 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:42:47.137478 systemd-logind[1721]: Session 4 logged out. Waiting for processes to exit.
May 17 00:42:47.138636 systemd-logind[1721]: Removed session 4.
May 17 00:42:47.156513 systemd[1]: Started sshd@4-172.31.31.72:22-139.178.68.195:56180.service.
May 17 00:42:47.168321 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7
May 17 00:42:47.267925 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0bdcacd56876f6ec3?role=subscribe&stream=input
May 17 00:42:47.316611 sshd[1986]: Accepted publickey for core from 139.178.68.195 port 56180 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:42:47.318045 sshd[1986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:47.325714 systemd[1]: Started session-5.scope.
May 17 00:42:47.325925 systemd-logind[1721]: New session 5 of user core.
May 17 00:42:47.367806 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0bdcacd56876f6ec3?role=subscribe&stream=input
May 17 00:42:47.376043 kubelet[1959]: E0517 00:42:47.375981 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:42:47.377723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:42:47.377887 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:42:47.467794 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] Starting receiving message from control channel
May 17 00:42:47.470986 sudo[1991]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:42:47.471243 sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:42:47.496478 systemd[1]: Starting docker.service...
May 17 00:42:47.541844 env[2001]: time="2025-05-17T00:42:47.541782195Z" level=info msg="Starting up"
May 17 00:42:47.543183 env[2001]: time="2025-05-17T00:42:47.543144251Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:42:47.543183 env[2001]: time="2025-05-17T00:42:47.543170285Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:42:47.543351 env[2001]: time="2025-05-17T00:42:47.543196672Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:42:47.543351 env[2001]: time="2025-05-17T00:42:47.543211133Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:42:47.545221 env[2001]: time="2025-05-17T00:42:47.545185294Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:42:47.545221 env[2001]: time="2025-05-17T00:42:47.545207306Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:42:47.545365 env[2001]: time="2025-05-17T00:42:47.545226962Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:42:47.545365 env[2001]: time="2025-05-17T00:42:47.545239647Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:42:47.568002 amazon-ssm-agent[1805]: 2025-05-17 00:42:44 INFO [MessageGatewayService] [EngineProcessor] Initial processing
May 17 00:42:47.714788 env[2001]: time="2025-05-17T00:42:47.712320574Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 17 00:42:47.714788 env[2001]: time="2025-05-17T00:42:47.712353066Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 17 00:42:47.714788 env[2001]: time="2025-05-17T00:42:47.712670295Z" level=info msg="Loading containers: start."
May 17 00:42:47.907558 kernel: Initializing XFRM netlink socket
May 17 00:42:47.963066 env[2001]: time="2025-05-17T00:42:47.963021306Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:42:47.964186 (udev-worker)[2011]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:42:48.063048 systemd-networkd[1409]: docker0: Link UP
May 17 00:42:48.085035 env[2001]: time="2025-05-17T00:42:48.084979767Z" level=info msg="Loading containers: done."
May 17 00:42:48.096385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3532347547-merged.mount: Deactivated successfully.
May 17 00:42:48.107875 env[2001]: time="2025-05-17T00:42:48.107803058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:42:48.108084 env[2001]: time="2025-05-17T00:42:48.108020515Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:42:48.108145 env[2001]: time="2025-05-17T00:42:48.108119672Z" level=info msg="Daemon has completed initialization"
May 17 00:42:48.131253 systemd[1]: Started docker.service.
May 17 00:42:48.139093 env[2001]: time="2025-05-17T00:42:48.139027252Z" level=info msg="API listen on /run/docker.sock"
May 17 00:42:49.161072 env[1729]: time="2025-05-17T00:42:49.161008178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 17 00:42:49.695842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578981291.mount: Deactivated successfully.
May 17 00:42:51.240550 env[1729]: time="2025-05-17T00:42:51.240472987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:51.242820 env[1729]: time="2025-05-17T00:42:51.242770899Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:51.245076 env[1729]: time="2025-05-17T00:42:51.245043255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:51.247119 env[1729]: time="2025-05-17T00:42:51.247082030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:51.247710 env[1729]: time="2025-05-17T00:42:51.247678039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 17 00:42:51.248287 env[1729]: time="2025-05-17T00:42:51.248262057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 17 00:42:52.917930 env[1729]: time="2025-05-17T00:42:52.917866401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:52.921202 env[1729]: time="2025-05-17T00:42:52.921151033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:52.923198 env[1729]: time="2025-05-17T00:42:52.923155203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:52.926202 env[1729]: time="2025-05-17T00:42:52.926151361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:52.926876 env[1729]: time="2025-05-17T00:42:52.926840402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 17 00:42:52.927428 env[1729]: time="2025-05-17T00:42:52.927406763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 17 00:42:54.364838 env[1729]: time="2025-05-17T00:42:54.364767070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:54.368993 env[1729]: time="2025-05-17T00:42:54.368942218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:54.371591 env[1729]: time="2025-05-17T00:42:54.371548897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:54.374481 env[1729]: time="2025-05-17T00:42:54.374429170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:54.375917 env[1729]: time="2025-05-17T00:42:54.375874927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 17 00:42:54.376638 env[1729]: time="2025-05-17T00:42:54.376605956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:42:55.594938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819069641.mount: Deactivated successfully.
May 17 00:42:56.273143 env[1729]: time="2025-05-17T00:42:56.273094727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:56.275310 env[1729]: time="2025-05-17T00:42:56.275265720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:56.276410 env[1729]: time="2025-05-17T00:42:56.276363767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:56.276978 env[1729]: time="2025-05-17T00:42:56.276950181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:56.277674 env[1729]: time="2025-05-17T00:42:56.277592164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:42:56.278350 env[1729]: time="2025-05-17T00:42:56.278323789Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:42:56.790573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955005997.mount: Deactivated successfully.
May 17 00:42:57.629083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:42:57.629345 systemd[1]: Stopped kubelet.service.
May 17 00:42:57.631929 systemd[1]: Starting kubelet.service...
May 17 00:42:57.884698 env[1729]: time="2025-05-17T00:42:57.884183610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:57.894372 env[1729]: time="2025-05-17T00:42:57.894314284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:57.899158 env[1729]: time="2025-05-17T00:42:57.899112095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:57.902549 env[1729]: time="2025-05-17T00:42:57.902504973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:57.903852 env[1729]: time="2025-05-17T00:42:57.903807498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:42:57.904691 env[1729]: time="2025-05-17T00:42:57.904653405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:42:58.005496 systemd[1]: Started kubelet.service.
May 17 00:42:58.058191 kubelet[2132]: E0517 00:42:58.058154 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:42:58.061744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:42:58.061995 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:42:58.335314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415456441.mount: Deactivated successfully.
May 17 00:42:58.340079 amazon-ssm-agent[1805]: 2025-05-17 00:42:58 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
May 17 00:42:58.341931 env[1729]: time="2025-05-17T00:42:58.341873136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:58.343772 env[1729]: time="2025-05-17T00:42:58.343725597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:58.345643 env[1729]: time="2025-05-17T00:42:58.345459003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:58.348133 env[1729]: time="2025-05-17T00:42:58.348081944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:58.348643 env[1729]: time="2025-05-17T00:42:58.348600953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:42:58.349370 env[1729]: time="2025-05-17T00:42:58.349344545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 17 00:42:58.836697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643676815.mount: Deactivated successfully.
May 17 00:43:01.287036 env[1729]: time="2025-05-17T00:43:01.286525956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:01.289851 env[1729]: time="2025-05-17T00:43:01.289462831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:01.295482 env[1729]: time="2025-05-17T00:43:01.295434764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:01.299212 env[1729]: time="2025-05-17T00:43:01.299165509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:01.300321 env[1729]: time="2025-05-17T00:43:01.300280131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 17 00:43:04.890268 systemd[1]: Stopped kubelet.service.
May 17 00:43:04.893247 systemd[1]: Starting kubelet.service...
May 17 00:43:04.927038 systemd[1]: Reloading.
May 17 00:43:05.042901 /usr/lib/systemd/system-generators/torcx-generator[2183]: time="2025-05-17T00:43:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:43:05.042942 /usr/lib/systemd/system-generators/torcx-generator[2183]: time="2025-05-17T00:43:05Z" level=info msg="torcx already run"
May 17 00:43:05.199801 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:43:05.199826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:43:05.231083 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:43:05.356154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 00:43:05.356266 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 00:43:05.356601 systemd[1]: Stopped kubelet.service.
May 17 00:43:05.358704 systemd[1]: Starting kubelet.service...
May 17 00:43:05.694291 systemd[1]: Started kubelet.service.
May 17 00:43:05.763153 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:43:05.763153 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:43:05.763153 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:43:05.763712 kubelet[2254]: I0517 00:43:05.763238 2254 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:43:06.155166 kubelet[2254]: I0517 00:43:06.155049 2254 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:43:06.155166 kubelet[2254]: I0517 00:43:06.155092 2254 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:43:06.156111 kubelet[2254]: I0517 00:43:06.156051 2254 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:43:06.201236 kubelet[2254]: E0517 00:43:06.201204 2254 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:06.202545 kubelet[2254]: I0517 00:43:06.202514 2254 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:43:06.212502 kubelet[2254]: E0517 00:43:06.212463 2254 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:43:06.212689 kubelet[2254]: I0517 00:43:06.212679 2254 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:43:06.219441 kubelet[2254]: I0517 00:43:06.219404 2254 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:43:06.219830 kubelet[2254]: I0517 00:43:06.219808 2254 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:43:06.219992 kubelet[2254]: I0517 00:43:06.219955 2254 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:43:06.220192 kubelet[2254]: I0517 00:43:06.219991 2254 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-72","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryMa
nagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:43:06.220346 kubelet[2254]: I0517 00:43:06.220201 2254 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:43:06.220346 kubelet[2254]: I0517 00:43:06.220216 2254 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:43:06.220472 kubelet[2254]: I0517 00:43:06.220377 2254 state_mem.go:36] "Initialized new in-memory state store" May 17 00:43:06.230081 kubelet[2254]: I0517 00:43:06.230016 2254 kubelet.go:408] "Attempting to sync node with API server" May 17 00:43:06.230081 kubelet[2254]: I0517 00:43:06.230097 2254 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:43:06.230269 kubelet[2254]: I0517 00:43:06.230138 2254 kubelet.go:314] "Adding apiserver pod source" May 17 00:43:06.230269 kubelet[2254]: I0517 00:43:06.230161 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:43:06.231739 kubelet[2254]: W0517 00:43:06.231674 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-72&limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:06.231925 kubelet[2254]: E0517 00:43:06.231901 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-72&limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:06.236165 kubelet[2254]: W0517 00:43:06.235838 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:06.236165 kubelet[2254]: E0517 00:43:06.235926 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:06.236373 kubelet[2254]: I0517 00:43:06.236218 2254 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:43:06.236738 kubelet[2254]: I0517 00:43:06.236714 2254 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:43:06.236812 kubelet[2254]: W0517 00:43:06.236783 2254 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:43:06.244989 kubelet[2254]: I0517 00:43:06.244930 2254 server.go:1274] "Started kubelet" May 17 00:43:06.246517 kubelet[2254]: I0517 00:43:06.246460 2254 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:43:06.253943 kubelet[2254]: I0517 00:43:06.253882 2254 server.go:449] "Adding debug handlers to kubelet server" May 17 00:43:06.258587 kubelet[2254]: I0517 00:43:06.258531 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:43:06.258912 kubelet[2254]: I0517 00:43:06.258891 2254 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:43:06.262218 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:43:06.262478 kubelet[2254]: I0517 00:43:06.262457 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:43:06.263256 kubelet[2254]: I0517 00:43:06.263221 2254 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:43:06.265933 kubelet[2254]: I0517 00:43:06.265118 2254 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:43:06.265933 kubelet[2254]: E0517 00:43:06.265263 2254 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-72\" not found" May 17 00:43:06.266171 kubelet[2254]: I0517 00:43:06.266148 2254 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:43:06.266229 kubelet[2254]: I0517 00:43:06.266213 2254 reconciler.go:26] "Reconciler: start to sync state" May 17 00:43:06.268790 kubelet[2254]: W0517 00:43:06.268732 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:06.268965 kubelet[2254]: E0517 00:43:06.268935 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:06.269716 kubelet[2254]: E0517 00:43:06.269672 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": dial tcp 172.31.31.72:6443: connect: connection refused" interval="200ms" May 17 00:43:06.270505 kubelet[2254]: I0517 00:43:06.270478 2254 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:43:06.276040 kubelet[2254]: E0517 00:43:06.274234 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.72:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-72.184029c61a845d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-72,UID:ip-172-31-31-72,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-72,},FirstTimestamp:2025-05-17 00:43:06.244898152 +0000 UTC m=+0.539715433,LastTimestamp:2025-05-17 00:43:06.244898152 +0000 UTC m=+0.539715433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-72,}" May 17 00:43:06.276376 kubelet[2254]: I0517 00:43:06.276350 2254 factory.go:221] Registration of the containerd container factory successfully May 17 00:43:06.276376 kubelet[2254]: I0517 00:43:06.276369 2254 factory.go:221] Registration of the systemd container factory successfully May 17 00:43:06.293270 kubelet[2254]: I0517 00:43:06.293219 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:43:06.294829 kubelet[2254]: I0517 00:43:06.294797 2254 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:43:06.294829 kubelet[2254]: I0517 00:43:06.294825 2254 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:43:06.295009 kubelet[2254]: I0517 00:43:06.294852 2254 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:43:06.295009 kubelet[2254]: E0517 00:43:06.294907 2254 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:43:06.314649 kubelet[2254]: W0517 00:43:06.314586 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:06.315491 kubelet[2254]: E0517 00:43:06.315434 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:06.317185 kubelet[2254]: I0517 00:43:06.317161 2254 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:43:06.317310 kubelet[2254]: I0517 00:43:06.317178 2254 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:43:06.317310 kubelet[2254]: I0517 00:43:06.317221 2254 state_mem.go:36] "Initialized new in-memory state store" May 17 00:43:06.320117 kubelet[2254]: I0517 00:43:06.320069 2254 policy_none.go:49] "None policy: Start" May 17 00:43:06.320943 kubelet[2254]: I0517 00:43:06.320917 2254 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:43:06.320943 kubelet[2254]: I0517 00:43:06.320945 2254 state_mem.go:35] "Initializing new in-memory state store" May 17 00:43:06.327255 kubelet[2254]: I0517 00:43:06.327228 2254 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:43:06.329012 kubelet[2254]: I0517 00:43:06.328991 2254 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:43:06.329197 kubelet[2254]: I0517 00:43:06.329151 2254 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:43:06.330647 kubelet[2254]: I0517 00:43:06.329667 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:43:06.331878 kubelet[2254]: E0517 00:43:06.331863 2254 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-72\" not found" May 17 00:43:06.435309 kubelet[2254]: I0517 00:43:06.432849 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:06.435309 kubelet[2254]: E0517 00:43:06.433822 2254 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.72:6443/api/v1/nodes\": dial tcp 172.31.31.72:6443: connect: connection refused" node="ip-172-31-31-72" May 17 00:43:06.471205 kubelet[2254]: E0517 00:43:06.471147 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": dial tcp 172.31.31.72:6443: connect: connection refused" interval="400ms" May 17 00:43:06.566614 kubelet[2254]: I0517 00:43:06.566572 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:06.566614 kubelet[2254]: I0517 00:43:06.566612 2254 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:06.566614 kubelet[2254]: I0517 00:43:06.566637 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:06.566851 kubelet[2254]: I0517 00:43:06.566655 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f29e571fc44f03c63c98b9b586adf43-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-72\" (UID: \"5f29e571fc44f03c63c98b9b586adf43\") " pod="kube-system/kube-scheduler-ip-172-31-31-72" May 17 00:43:06.566851 kubelet[2254]: I0517 00:43:06.566672 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-ca-certs\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:06.566851 kubelet[2254]: I0517 00:43:06.566687 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:06.566851 kubelet[2254]: I0517 00:43:06.566703 2254 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:06.566851 kubelet[2254]: I0517 00:43:06.566722 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:06.566992 kubelet[2254]: I0517 00:43:06.566737 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:06.636296 kubelet[2254]: I0517 00:43:06.636266 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:06.636705 kubelet[2254]: E0517 00:43:06.636669 2254 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.72:6443/api/v1/nodes\": dial tcp 172.31.31.72:6443: connect: connection refused" node="ip-172-31-31-72" May 17 00:43:06.703959 env[1729]: time="2025-05-17T00:43:06.703561415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-72,Uid:a69af9e03d5928eb182a3d0cb2cf96ca,Namespace:kube-system,Attempt:0,}" May 17 00:43:06.703959 env[1729]: time="2025-05-17T00:43:06.703809842Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-72,Uid:5cf16d816a94107fd1f89874868802ea,Namespace:kube-system,Attempt:0,}" May 17 00:43:06.705425 env[1729]: time="2025-05-17T00:43:06.705372287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-72,Uid:5f29e571fc44f03c63c98b9b586adf43,Namespace:kube-system,Attempt:0,}" May 17 00:43:06.871641 kubelet[2254]: E0517 00:43:06.871591 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": dial tcp 172.31.31.72:6443: connect: connection refused" interval="800ms" May 17 00:43:07.039128 kubelet[2254]: I0517 00:43:07.039024 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:07.039580 kubelet[2254]: E0517 00:43:07.039553 2254 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.72:6443/api/v1/nodes\": dial tcp 172.31.31.72:6443: connect: connection refused" node="ip-172-31-31-72" May 17 00:43:07.154525 kubelet[2254]: W0517 00:43:07.154448 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-72&limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:07.154525 kubelet[2254]: E0517 00:43:07.154521 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-72&limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:07.186050 kubelet[2254]: W0517 00:43:07.185987 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:07.186212 kubelet[2254]: E0517 00:43:07.186065 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:07.193856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372876919.mount: Deactivated successfully. May 17 00:43:07.211186 env[1729]: time="2025-05-17T00:43:07.211130788Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.213337 kubelet[2254]: W0517 00:43:07.213233 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:07.213337 kubelet[2254]: E0517 00:43:07.213307 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:07.213770 env[1729]: time="2025-05-17T00:43:07.213258457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.219539 env[1729]: time="2025-05-17T00:43:07.219484770Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.222166 env[1729]: time="2025-05-17T00:43:07.222113380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.224766 env[1729]: time="2025-05-17T00:43:07.224703961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.227831 env[1729]: time="2025-05-17T00:43:07.227784751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.229492 env[1729]: time="2025-05-17T00:43:07.229450683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.231835 env[1729]: time="2025-05-17T00:43:07.231792562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.236021 env[1729]: time="2025-05-17T00:43:07.235981022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.238658 env[1729]: time="2025-05-17T00:43:07.238613220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:43:07.247326 env[1729]: time="2025-05-17T00:43:07.247261145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.264750 env[1729]: time="2025-05-17T00:43:07.262524585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.302666 env[1729]: time="2025-05-17T00:43:07.301446401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:07.302666 env[1729]: time="2025-05-17T00:43:07.301546289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:07.302666 env[1729]: time="2025-05-17T00:43:07.301577482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:07.302666 env[1729]: time="2025-05-17T00:43:07.301739338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4 pid=2306 runtime=io.containerd.runc.v2 May 17 00:43:07.303868 env[1729]: time="2025-05-17T00:43:07.303775661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:07.303868 env[1729]: time="2025-05-17T00:43:07.303828959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:07.303868 env[1729]: time="2025-05-17T00:43:07.303845590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:07.304295 env[1729]: time="2025-05-17T00:43:07.304214396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db2f36e81414806344df560932cb7e9d9b0c674831867af620819b1a4493e3e2 pid=2294 runtime=io.containerd.runc.v2 May 17 00:43:07.353077 env[1729]: time="2025-05-17T00:43:07.352022588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:07.353077 env[1729]: time="2025-05-17T00:43:07.352137763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:07.353077 env[1729]: time="2025-05-17T00:43:07.352175325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:07.353077 env[1729]: time="2025-05-17T00:43:07.352463406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967 pid=2347 runtime=io.containerd.runc.v2 May 17 00:43:07.443814 env[1729]: time="2025-05-17T00:43:07.443766834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-72,Uid:5cf16d816a94107fd1f89874868802ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"db2f36e81414806344df560932cb7e9d9b0c674831867af620819b1a4493e3e2\"" May 17 00:43:07.447930 env[1729]: time="2025-05-17T00:43:07.447882389Z" level=info msg="CreateContainer within sandbox \"db2f36e81414806344df560932cb7e9d9b0c674831867af620819b1a4493e3e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:43:07.453548 env[1729]: time="2025-05-17T00:43:07.453480031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-72,Uid:a69af9e03d5928eb182a3d0cb2cf96ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4\"" May 17 00:43:07.459034 env[1729]: time="2025-05-17T00:43:07.458989278Z" level=info msg="CreateContainer within sandbox \"c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:43:07.461342 env[1729]: time="2025-05-17T00:43:07.459630681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-72,Uid:5f29e571fc44f03c63c98b9b586adf43,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967\"" May 17 00:43:07.464166 env[1729]: time="2025-05-17T00:43:07.464127095Z" level=info msg="CreateContainer within sandbox 
\"0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:43:07.504216 env[1729]: time="2025-05-17T00:43:07.504158877Z" level=info msg="CreateContainer within sandbox \"db2f36e81414806344df560932cb7e9d9b0c674831867af620819b1a4493e3e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41be6b851793ddfe182592e383d45a13a992e0b4c24dad030c8bf40375455df8\"" May 17 00:43:07.504929 env[1729]: time="2025-05-17T00:43:07.504894026Z" level=info msg="StartContainer for \"41be6b851793ddfe182592e383d45a13a992e0b4c24dad030c8bf40375455df8\"" May 17 00:43:07.511284 env[1729]: time="2025-05-17T00:43:07.511239774Z" level=info msg="CreateContainer within sandbox \"c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff\"" May 17 00:43:07.512043 env[1729]: time="2025-05-17T00:43:07.512017584Z" level=info msg="StartContainer for \"621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff\"" May 17 00:43:07.513680 env[1729]: time="2025-05-17T00:43:07.513483294Z" level=info msg="CreateContainer within sandbox \"0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794\"" May 17 00:43:07.514762 env[1729]: time="2025-05-17T00:43:07.514731695Z" level=info msg="StartContainer for \"500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794\"" May 17 00:43:07.596828 kubelet[2254]: W0517 00:43:07.596603 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 
00:43:07.596828 kubelet[2254]: E0517 00:43:07.596701 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:07.627692 env[1729]: time="2025-05-17T00:43:07.627625013Z" level=info msg="StartContainer for \"41be6b851793ddfe182592e383d45a13a992e0b4c24dad030c8bf40375455df8\" returns successfully" May 17 00:43:07.672695 kubelet[2254]: E0517 00:43:07.672485 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": dial tcp 172.31.31.72:6443: connect: connection refused" interval="1.6s" May 17 00:43:07.675960 env[1729]: time="2025-05-17T00:43:07.675911391Z" level=info msg="StartContainer for \"621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff\" returns successfully" May 17 00:43:07.691145 env[1729]: time="2025-05-17T00:43:07.691092546Z" level=info msg="StartContainer for \"500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794\" returns successfully" May 17 00:43:07.842010 kubelet[2254]: I0517 00:43:07.841493 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:07.842010 kubelet[2254]: E0517 00:43:07.841945 2254 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.72:6443/api/v1/nodes\": dial tcp 172.31.31.72:6443: connect: connection refused" node="ip-172-31-31-72" May 17 00:43:08.317962 kubelet[2254]: E0517 00:43:08.317923 2254 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: 
Post \"https://172.31.31.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:08.349417 kubelet[2254]: E0517 00:43:08.349291 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.72:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-72.184029c61a845d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-72,UID:ip-172-31-31-72,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-72,},FirstTimestamp:2025-05-17 00:43:06.244898152 +0000 UTC m=+0.539715433,LastTimestamp:2025-05-17 00:43:06.244898152 +0000 UTC m=+0.539715433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-72,}" May 17 00:43:08.882514 kubelet[2254]: W0517 00:43:08.882473 2254 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.72:6443: connect: connection refused May 17 00:43:08.882824 kubelet[2254]: E0517 00:43:08.882804 2254 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.72:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:09.273109 kubelet[2254]: E0517 00:43:09.273037 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": dial tcp 172.31.31.72:6443: connect: connection refused" interval="3.2s" May 17 00:43:09.443714 kubelet[2254]: I0517 00:43:09.443690 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:09.444593 kubelet[2254]: E0517 00:43:09.444564 2254 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.72:6443/api/v1/nodes\": dial tcp 172.31.31.72:6443: connect: connection refused" node="ip-172-31-31-72" May 17 00:43:11.235319 kubelet[2254]: I0517 00:43:11.235267 2254 apiserver.go:52] "Watching apiserver" May 17 00:43:11.266976 kubelet[2254]: I0517 00:43:11.266912 2254 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:43:11.310469 kubelet[2254]: E0517 00:43:11.310426 2254 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-72" not found May 17 00:43:11.684869 kubelet[2254]: E0517 00:43:11.684747 2254 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-72" not found May 17 00:43:12.120271 kubelet[2254]: E0517 00:43:12.120245 2254 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-72" not found May 17 00:43:12.478253 kubelet[2254]: E0517 00:43:12.478209 2254 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-72\" not found" node="ip-172-31-31-72" May 17 00:43:12.646999 kubelet[2254]: I0517 00:43:12.646976 2254 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:12.655381 kubelet[2254]: I0517 00:43:12.655316 2254 
kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-72" May 17 00:43:12.655381 kubelet[2254]: E0517 00:43:12.655364 2254 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-72\": node \"ip-172-31-31-72\" not found" May 17 00:43:13.180619 systemd[1]: Reloading. May 17 00:43:13.294054 /usr/lib/systemd/system-generators/torcx-generator[2546]: time="2025-05-17T00:43:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:43:13.294094 /usr/lib/systemd/system-generators/torcx-generator[2546]: time="2025-05-17T00:43:13Z" level=info msg="torcx already run" May 17 00:43:13.405842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:43:13.405867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:43:13.427998 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:43:13.535926 systemd[1]: Stopping kubelet.service... May 17 00:43:13.558798 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:43:13.559092 systemd[1]: Stopped kubelet.service. May 17 00:43:13.561071 systemd[1]: Starting kubelet.service... May 17 00:43:13.945972 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:43:15.070516 systemd[1]: Started kubelet.service. 
May 17 00:43:15.190234 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:43:15.190234 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:43:15.190234 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:43:15.190234 kubelet[2618]: I0517 00:43:15.188761 2618 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:43:15.197876 sudo[2629]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:43:15.198195 sudo[2629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:43:15.207919 kubelet[2618]: I0517 00:43:15.207878 2618 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:43:15.207919 kubelet[2618]: I0517 00:43:15.207910 2618 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:43:15.208299 kubelet[2618]: I0517 00:43:15.208275 2618 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:43:15.209811 kubelet[2618]: I0517 00:43:15.209782 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:43:15.220641 kubelet[2618]: I0517 00:43:15.219618 2618 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:43:15.232482 kubelet[2618]: E0517 00:43:15.232445 2618 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:43:15.232482 kubelet[2618]: I0517 00:43:15.232482 2618 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:43:15.246015 kubelet[2618]: I0517 00:43:15.245984 2618 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:43:15.250126 kubelet[2618]: I0517 00:43:15.250095 2618 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:43:15.250547 kubelet[2618]: I0517 00:43:15.250502 2618 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:43:15.251046 kubelet[2618]: I0517 00:43:15.250661 2618 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-31-72","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:43:15.251248 kubelet[2618]: I0517 00:43:15.251236 2618 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:43:15.251308 kubelet[2618]: I0517 00:43:15.251301 2618 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:43:15.251411 kubelet[2618]: I0517 00:43:15.251385 2618 state_mem.go:36] "Initialized new in-memory state store" May 17 00:43:15.251647 kubelet[2618]: I0517 00:43:15.251635 2618 kubelet.go:408] 
"Attempting to sync node with API server" May 17 00:43:15.251843 kubelet[2618]: I0517 00:43:15.251830 2618 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:43:15.251969 kubelet[2618]: I0517 00:43:15.251958 2618 kubelet.go:314] "Adding apiserver pod source" May 17 00:43:15.252067 kubelet[2618]: I0517 00:43:15.252057 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:43:15.265105 kubelet[2618]: I0517 00:43:15.265074 2618 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:43:15.266006 kubelet[2618]: I0517 00:43:15.265985 2618 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:43:15.272206 kubelet[2618]: I0517 00:43:15.272179 2618 server.go:1274] "Started kubelet" May 17 00:43:15.301061 kubelet[2618]: I0517 00:43:15.301036 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:43:15.323362 kubelet[2618]: I0517 00:43:15.323098 2618 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:43:15.327626 kubelet[2618]: I0517 00:43:15.327344 2618 server.go:449] "Adding debug handlers to kubelet server" May 17 00:43:15.335705 kubelet[2618]: I0517 00:43:15.335591 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:43:15.336134 kubelet[2618]: I0517 00:43:15.336115 2618 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:43:15.352932 kubelet[2618]: I0517 00:43:15.352898 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:43:15.364632 kubelet[2618]: I0517 00:43:15.364607 2618 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:43:15.365600 kubelet[2618]: I0517 
00:43:15.365577 2618 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:43:15.365890 kubelet[2618]: I0517 00:43:15.365879 2618 reconciler.go:26] "Reconciler: start to sync state" May 17 00:43:15.376761 kubelet[2618]: I0517 00:43:15.376734 2618 factory.go:221] Registration of the systemd container factory successfully May 17 00:43:15.377069 kubelet[2618]: I0517 00:43:15.377045 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:43:15.382883 kubelet[2618]: E0517 00:43:15.382857 2618 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:43:15.385835 kubelet[2618]: I0517 00:43:15.383406 2618 factory.go:221] Registration of the containerd container factory successfully May 17 00:43:15.399382 kubelet[2618]: I0517 00:43:15.399331 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:43:15.402158 kubelet[2618]: I0517 00:43:15.402121 2618 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:43:15.402158 kubelet[2618]: I0517 00:43:15.402161 2618 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:43:15.402356 kubelet[2618]: I0517 00:43:15.402182 2618 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:43:15.402356 kubelet[2618]: E0517 00:43:15.402234 2618 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:43:15.502970 kubelet[2618]: E0517 00:43:15.502928 2618 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:43:15.507378 kubelet[2618]: I0517 00:43:15.507348 2618 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:43:15.507378 kubelet[2618]: I0517 00:43:15.507367 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:43:15.507378 kubelet[2618]: I0517 00:43:15.507412 2618 state_mem.go:36] "Initialized new in-memory state store" May 17 00:43:15.507686 kubelet[2618]: I0517 00:43:15.507606 2618 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:43:15.507686 kubelet[2618]: I0517 00:43:15.507619 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:43:15.507686 kubelet[2618]: I0517 00:43:15.507644 2618 policy_none.go:49] "None policy: Start" May 17 00:43:15.508369 kubelet[2618]: I0517 00:43:15.508345 2618 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:43:15.508369 kubelet[2618]: I0517 00:43:15.508372 2618 state_mem.go:35] "Initializing new in-memory state store" May 17 00:43:15.508599 kubelet[2618]: I0517 00:43:15.508581 2618 state_mem.go:75] "Updated machine memory state" May 17 00:43:15.509972 kubelet[2618]: I0517 00:43:15.509952 2618 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:43:15.510251 kubelet[2618]: I0517 00:43:15.510238 
2618 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:43:15.510361 kubelet[2618]: I0517 00:43:15.510329 2618 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:43:15.512734 kubelet[2618]: I0517 00:43:15.512717 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:43:15.624371 kubelet[2618]: I0517 00:43:15.624282 2618 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-72" May 17 00:43:15.636213 kubelet[2618]: I0517 00:43:15.636183 2618 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-72" May 17 00:43:15.636418 kubelet[2618]: I0517 00:43:15.636267 2618 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-72" May 17 00:43:15.723601 kubelet[2618]: E0517 00:43:15.723550 2618 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-72\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.767558 kubelet[2618]: I0517 00:43:15.767492 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-ca-certs\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:15.767558 kubelet[2618]: I0517 00:43:15.767536 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.767558 kubelet[2618]: I0517 00:43:15.767557 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.767990 kubelet[2618]: I0517 00:43:15.767575 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.767990 kubelet[2618]: I0517 00:43:15.767594 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.767990 kubelet[2618]: I0517 00:43:15.767609 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:15.767990 kubelet[2618]: I0517 00:43:15.767624 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cf16d816a94107fd1f89874868802ea-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-72\" (UID: \"5cf16d816a94107fd1f89874868802ea\") " pod="kube-system/kube-apiserver-ip-172-31-31-72" May 17 00:43:15.767990 kubelet[2618]: I0517 
00:43:15.767640 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a69af9e03d5928eb182a3d0cb2cf96ca-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-72\" (UID: \"a69af9e03d5928eb182a3d0cb2cf96ca\") " pod="kube-system/kube-controller-manager-ip-172-31-31-72" May 17 00:43:15.768135 kubelet[2618]: I0517 00:43:15.767655 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f29e571fc44f03c63c98b9b586adf43-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-72\" (UID: \"5f29e571fc44f03c63c98b9b586adf43\") " pod="kube-system/kube-scheduler-ip-172-31-31-72" May 17 00:43:16.278531 kubelet[2618]: I0517 00:43:16.278490 2618 apiserver.go:52] "Watching apiserver" May 17 00:43:16.360608 sudo[2629]: pam_unix(sudo:session): session closed for user root May 17 00:43:16.366187 kubelet[2618]: I0517 00:43:16.366154 2618 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:43:16.463205 kubelet[2618]: I0517 00:43:16.463145 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-72" podStartSLOduration=1.463127574 podStartE2EDuration="1.463127574s" podCreationTimestamp="2025-05-17 00:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:16.452556568 +0000 UTC m=+1.353853601" watchObservedRunningTime="2025-05-17 00:43:16.463127574 +0000 UTC m=+1.364424628" May 17 00:43:16.475764 kubelet[2618]: I0517 00:43:16.475696 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-72" podStartSLOduration=1.475676141 podStartE2EDuration="1.475676141s" podCreationTimestamp="2025-05-17 00:43:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:16.464008638 +0000 UTC m=+1.365305670" watchObservedRunningTime="2025-05-17 00:43:16.475676141 +0000 UTC m=+1.376973168" May 17 00:43:16.490303 kubelet[2618]: I0517 00:43:16.490227 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-72" podStartSLOduration=3.490201645 podStartE2EDuration="3.490201645s" podCreationTimestamp="2025-05-17 00:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:16.476727955 +0000 UTC m=+1.378024985" watchObservedRunningTime="2025-05-17 00:43:16.490201645 +0000 UTC m=+1.391498678" May 17 00:43:18.579243 sudo[1991]: pam_unix(sudo:session): session closed for user root May 17 00:43:18.602711 sshd[1986]: pam_unix(sshd:session): session closed for user core May 17 00:43:18.605743 systemd[1]: sshd@4-172.31.31.72:22-139.178.68.195:56180.service: Deactivated successfully. May 17 00:43:18.606760 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:43:18.606779 systemd-logind[1721]: Session 5 logged out. Waiting for processes to exit. May 17 00:43:18.608057 systemd-logind[1721]: Removed session 5. May 17 00:43:19.870027 kubelet[2618]: I0517 00:43:19.869998 2618 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:43:19.870879 env[1729]: time="2025-05-17T00:43:19.870842256Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:43:19.871219 kubelet[2618]: I0517 00:43:19.871052 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802120 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-lib-modules\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802162 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hostproc\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802192 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-xtables-lock\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802223 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-etc-cni-netd\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802247 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a091ee-ec8c-46b0-aa4f-a70034ede92f-clustermesh-secrets\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " 
pod="kube-system/cilium-5j8fj" May 17 00:43:20.802306 kubelet[2618]: I0517 00:43:20.802271 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-config-path\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802296 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-bpf-maps\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802322 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-cgroup\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802344 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w9z\" (UniqueName: \"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-kube-api-access-24w9z\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802364 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cni-path\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802388 2618 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-net\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802714 kubelet[2618]: I0517 00:43:20.802429 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f5719c7-a32b-49d1-891b-fa114f90455f-xtables-lock\") pod \"kube-proxy-79pxf\" (UID: \"2f5719c7-a32b-49d1-891b-fa114f90455f\") " pod="kube-system/kube-proxy-79pxf" May 17 00:43:20.802963 kubelet[2618]: I0517 00:43:20.802450 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9zp\" (UniqueName: \"kubernetes.io/projected/2f5719c7-a32b-49d1-891b-fa114f90455f-kube-api-access-8m9zp\") pod \"kube-proxy-79pxf\" (UID: \"2f5719c7-a32b-49d1-891b-fa114f90455f\") " pod="kube-system/kube-proxy-79pxf" May 17 00:43:20.802963 kubelet[2618]: I0517 00:43:20.802471 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-run\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802963 kubelet[2618]: I0517 00:43:20.802491 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-kernel\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.802963 kubelet[2618]: I0517 00:43:20.802518 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/2f5719c7-a32b-49d1-891b-fa114f90455f-lib-modules\") pod \"kube-proxy-79pxf\" (UID: \"2f5719c7-a32b-49d1-891b-fa114f90455f\") " pod="kube-system/kube-proxy-79pxf" May 17 00:43:20.802963 kubelet[2618]: I0517 00:43:20.802540 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hubble-tls\") pod \"cilium-5j8fj\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " pod="kube-system/cilium-5j8fj" May 17 00:43:20.803171 kubelet[2618]: I0517 00:43:20.802567 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f5719c7-a32b-49d1-891b-fa114f90455f-kube-proxy\") pod \"kube-proxy-79pxf\" (UID: \"2f5719c7-a32b-49d1-891b-fa114f90455f\") " pod="kube-system/kube-proxy-79pxf" May 17 00:43:20.907671 kubelet[2618]: I0517 00:43:20.907626 2618 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:43:21.077338 env[1729]: time="2025-05-17T00:43:21.077220620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5j8fj,Uid:52a091ee-ec8c-46b0-aa4f-a70034ede92f,Namespace:kube-system,Attempt:0,}" May 17 00:43:21.109155 kubelet[2618]: I0517 00:43:21.105106 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72x7\" (UniqueName: \"kubernetes.io/projected/46d8f109-c1d8-410d-8f43-2d79c5880283-kube-api-access-p72x7\") pod \"cilium-operator-5d85765b45-v59hm\" (UID: \"46d8f109-c1d8-410d-8f43-2d79c5880283\") " pod="kube-system/cilium-operator-5d85765b45-v59hm" May 17 00:43:21.109155 kubelet[2618]: I0517 00:43:21.105185 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d8f109-c1d8-410d-8f43-2d79c5880283-cilium-config-path\") pod \"cilium-operator-5d85765b45-v59hm\" (UID: \"46d8f109-c1d8-410d-8f43-2d79c5880283\") " pod="kube-system/cilium-operator-5d85765b45-v59hm" May 17 00:43:21.109819 env[1729]: time="2025-05-17T00:43:21.109779404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79pxf,Uid:2f5719c7-a32b-49d1-891b-fa114f90455f,Namespace:kube-system,Attempt:0,}" May 17 00:43:21.115307 env[1729]: time="2025-05-17T00:43:21.110067533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:21.115307 env[1729]: time="2025-05-17T00:43:21.110218415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:21.115307 env[1729]: time="2025-05-17T00:43:21.110237382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:21.115307 env[1729]: time="2025-05-17T00:43:21.110423635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495 pid=2700 runtime=io.containerd.runc.v2 May 17 00:43:21.153829 env[1729]: time="2025-05-17T00:43:21.153715642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:21.154072 env[1729]: time="2025-05-17T00:43:21.153855702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:21.154072 env[1729]: time="2025-05-17T00:43:21.153889265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:21.154421 env[1729]: time="2025-05-17T00:43:21.154338323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fd4b5ac579294eedfb2fce14908b70807400ad192f524c1a6880e82a5353059 pid=2734 runtime=io.containerd.runc.v2 May 17 00:43:21.182831 env[1729]: time="2025-05-17T00:43:21.181655307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5j8fj,Uid:52a091ee-ec8c-46b0-aa4f-a70034ede92f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\"" May 17 00:43:21.186179 env[1729]: time="2025-05-17T00:43:21.185114967Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:43:21.221138 env[1729]: time="2025-05-17T00:43:21.221094378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79pxf,Uid:2f5719c7-a32b-49d1-891b-fa114f90455f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"9fd4b5ac579294eedfb2fce14908b70807400ad192f524c1a6880e82a5353059\"" May 17 00:43:21.224902 env[1729]: time="2025-05-17T00:43:21.224867462Z" level=info msg="CreateContainer within sandbox \"9fd4b5ac579294eedfb2fce14908b70807400ad192f524c1a6880e82a5353059\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:43:21.253795 env[1729]: time="2025-05-17T00:43:21.253738227Z" level=info msg="CreateContainer within sandbox \"9fd4b5ac579294eedfb2fce14908b70807400ad192f524c1a6880e82a5353059\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed6221a13b6c054cc907df7d994034378c1ee1dbcd6a384b93ad028fd9c9e9ca\"" May 17 00:43:21.257240 env[1729]: time="2025-05-17T00:43:21.257205603Z" level=info msg="StartContainer for \"ed6221a13b6c054cc907df7d994034378c1ee1dbcd6a384b93ad028fd9c9e9ca\"" May 17 00:43:21.307927 env[1729]: time="2025-05-17T00:43:21.307877794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v59hm,Uid:46d8f109-c1d8-410d-8f43-2d79c5880283,Namespace:kube-system,Attempt:0,}" May 17 00:43:21.321680 env[1729]: time="2025-05-17T00:43:21.321639666Z" level=info msg="StartContainer for \"ed6221a13b6c054cc907df7d994034378c1ee1dbcd6a384b93ad028fd9c9e9ca\" returns successfully" May 17 00:43:21.335086 env[1729]: time="2025-05-17T00:43:21.334881466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:21.335316 env[1729]: time="2025-05-17T00:43:21.335277653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:21.335485 env[1729]: time="2025-05-17T00:43:21.335459472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:21.335809 env[1729]: time="2025-05-17T00:43:21.335776366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d pid=2817 runtime=io.containerd.runc.v2 May 17 00:43:21.409750 env[1729]: time="2025-05-17T00:43:21.409711623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v59hm,Uid:46d8f109-c1d8-410d-8f43-2d79c5880283,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\"" May 17 00:43:22.121661 kubelet[2618]: I0517 00:43:22.121598 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-79pxf" podStartSLOduration=2.121571128 podStartE2EDuration="2.121571128s" podCreationTimestamp="2025-05-17 00:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:21.468490512 +0000 UTC m=+6.369787538" watchObservedRunningTime="2025-05-17 00:43:22.121571128 +0000 UTC m=+7.022868170" May 17 00:43:26.645618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818407301.mount: Deactivated successfully. May 17 00:43:28.387784 amazon-ssm-agent[1805]: 2025-05-17 00:43:28 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated May 17 00:43:29.123450 update_engine[1722]: I0517 00:43:29.123384 1722 update_attempter.cc:509] Updating boot flags... 
May 17 00:43:29.779752 env[1729]: time="2025-05-17T00:43:29.779698208Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:29.784474 env[1729]: time="2025-05-17T00:43:29.784431380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:29.796527 env[1729]: time="2025-05-17T00:43:29.796482673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:29.797280 env[1729]: time="2025-05-17T00:43:29.797238092Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:43:29.799795 env[1729]: time="2025-05-17T00:43:29.799439566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:43:29.800124 env[1729]: time="2025-05-17T00:43:29.800102094Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:43:29.824739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029411944.mount: Deactivated successfully. May 17 00:43:29.831711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523638631.mount: Deactivated successfully. 
May 17 00:43:29.843512 env[1729]: time="2025-05-17T00:43:29.843450043Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\"" May 17 00:43:29.845130 env[1729]: time="2025-05-17T00:43:29.844184324Z" level=info msg="StartContainer for \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\"" May 17 00:43:29.910629 env[1729]: time="2025-05-17T00:43:29.908485141Z" level=info msg="StartContainer for \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\" returns successfully" May 17 00:43:30.045982 env[1729]: time="2025-05-17T00:43:30.045024566Z" level=info msg="shim disconnected" id=9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5 May 17 00:43:30.045982 env[1729]: time="2025-05-17T00:43:30.045117115Z" level=warning msg="cleaning up after shim disconnected" id=9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5 namespace=k8s.io May 17 00:43:30.045982 env[1729]: time="2025-05-17T00:43:30.045131422Z" level=info msg="cleaning up dead shim" May 17 00:43:30.055320 env[1729]: time="2025-05-17T00:43:30.055258130Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128 runtime=io.containerd.runc.v2\n" May 17 00:43:30.516288 env[1729]: time="2025-05-17T00:43:30.515905122Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:43:30.549882 env[1729]: time="2025-05-17T00:43:30.549691377Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\"" May 17 00:43:30.551232 env[1729]: time="2025-05-17T00:43:30.551195849Z" level=info msg="StartContainer for \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\"" May 17 00:43:30.608214 env[1729]: time="2025-05-17T00:43:30.608161145Z" level=info msg="StartContainer for \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\" returns successfully" May 17 00:43:30.620625 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:43:30.621024 systemd[1]: Stopped systemd-sysctl.service. May 17 00:43:30.621659 systemd[1]: Stopping systemd-sysctl.service... May 17 00:43:30.624995 systemd[1]: Starting systemd-sysctl.service... May 17 00:43:30.648617 systemd[1]: Finished systemd-sysctl.service. May 17 00:43:30.673815 env[1729]: time="2025-05-17T00:43:30.673749695Z" level=info msg="shim disconnected" id=9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679 May 17 00:43:30.673815 env[1729]: time="2025-05-17T00:43:30.673792056Z" level=warning msg="cleaning up after shim disconnected" id=9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679 namespace=k8s.io May 17 00:43:30.673815 env[1729]: time="2025-05-17T00:43:30.673806014Z" level=info msg="cleaning up dead shim" May 17 00:43:30.687660 env[1729]: time="2025-05-17T00:43:30.687613267Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3192 runtime=io.containerd.runc.v2\n" May 17 00:43:30.822164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5-rootfs.mount: Deactivated successfully. May 17 00:43:31.063605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964711306.mount: Deactivated successfully. 
May 17 00:43:31.533430 env[1729]: time="2025-05-17T00:43:31.519223752Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:43:31.545828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966370374.mount: Deactivated successfully. May 17 00:43:31.555787 env[1729]: time="2025-05-17T00:43:31.555736264Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\"" May 17 00:43:31.557675 env[1729]: time="2025-05-17T00:43:31.556362310Z" level=info msg="StartContainer for \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\"" May 17 00:43:31.653762 env[1729]: time="2025-05-17T00:43:31.653709431Z" level=info msg="StartContainer for \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\" returns successfully" May 17 00:43:31.786203 env[1729]: time="2025-05-17T00:43:31.786086240Z" level=info msg="shim disconnected" id=beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6 May 17 00:43:31.786203 env[1729]: time="2025-05-17T00:43:31.786142569Z" level=warning msg="cleaning up after shim disconnected" id=beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6 namespace=k8s.io May 17 00:43:31.786203 env[1729]: time="2025-05-17T00:43:31.786155575Z" level=info msg="cleaning up dead shim" May 17 00:43:31.798184 env[1729]: time="2025-05-17T00:43:31.798134228Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3248 runtime=io.containerd.runc.v2\n" May 17 00:43:31.978441 env[1729]: time="2025-05-17T00:43:31.976354013Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.978441 env[1729]: time="2025-05-17T00:43:31.976989674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.979757 env[1729]: time="2025-05-17T00:43:31.979709310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.980357 env[1729]: time="2025-05-17T00:43:31.980316607Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:43:31.983810 env[1729]: time="2025-05-17T00:43:31.983436913Z" level=info msg="CreateContainer within sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:43:32.010372 env[1729]: time="2025-05-17T00:43:32.010316885Z" level=info msg="CreateContainer within sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\"" May 17 00:43:32.012442 env[1729]: time="2025-05-17T00:43:32.011147231Z" level=info msg="StartContainer for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\"" May 17 00:43:32.078638 env[1729]: time="2025-05-17T00:43:32.078532967Z" level=info msg="StartContainer for 
\"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" returns successfully" May 17 00:43:32.527035 env[1729]: time="2025-05-17T00:43:32.527000377Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:43:32.547428 env[1729]: time="2025-05-17T00:43:32.547372515Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\"" May 17 00:43:32.548444 env[1729]: time="2025-05-17T00:43:32.548420088Z" level=info msg="StartContainer for \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\"" May 17 00:43:32.586159 kubelet[2618]: I0517 00:43:32.586029 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v59hm" podStartSLOduration=2.015540663 podStartE2EDuration="12.586004113s" podCreationTimestamp="2025-05-17 00:43:20 +0000 UTC" firstStartedPulling="2025-05-17 00:43:21.411420388 +0000 UTC m=+6.312717413" lastFinishedPulling="2025-05-17 00:43:31.981883839 +0000 UTC m=+16.883180863" observedRunningTime="2025-05-17 00:43:32.581192294 +0000 UTC m=+17.482489322" watchObservedRunningTime="2025-05-17 00:43:32.586004113 +0000 UTC m=+17.487301143" May 17 00:43:32.632249 env[1729]: time="2025-05-17T00:43:32.632207376Z" level=info msg="StartContainer for \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\" returns successfully" May 17 00:43:32.729941 env[1729]: time="2025-05-17T00:43:32.729886257Z" level=info msg="shim disconnected" id=296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4 May 17 00:43:32.730299 env[1729]: time="2025-05-17T00:43:32.730276707Z" level=warning msg="cleaning up after shim disconnected" 
id=296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4 namespace=k8s.io May 17 00:43:32.730429 env[1729]: time="2025-05-17T00:43:32.730414104Z" level=info msg="cleaning up dead shim" May 17 00:43:32.748611 env[1729]: time="2025-05-17T00:43:32.748562814Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3341 runtime=io.containerd.runc.v2\n" May 17 00:43:32.824954 systemd[1]: run-containerd-runc-k8s.io-ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171-runc.tWUTaM.mount: Deactivated successfully. May 17 00:43:33.532761 env[1729]: time="2025-05-17T00:43:33.532717156Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:43:33.569492 env[1729]: time="2025-05-17T00:43:33.562516366Z" level=info msg="CreateContainer within sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\"" May 17 00:43:33.569492 env[1729]: time="2025-05-17T00:43:33.563179116Z" level=info msg="StartContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\"" May 17 00:43:33.736630 env[1729]: time="2025-05-17T00:43:33.736575484Z" level=info msg="StartContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" returns successfully" May 17 00:43:33.824305 systemd[1]: run-containerd-runc-k8s.io-72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d-runc.5QaCYC.mount: Deactivated successfully. 
May 17 00:43:33.906818 kubelet[2618]: I0517 00:43:33.906774 2618 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:43:34.109934 kubelet[2618]: I0517 00:43:34.109833 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2eabe8d1-be14-457d-8403-61a1071170e6-config-volume\") pod \"coredns-7c65d6cfc9-5jgfh\" (UID: \"2eabe8d1-be14-457d-8403-61a1071170e6\") " pod="kube-system/coredns-7c65d6cfc9-5jgfh" May 17 00:43:34.110176 kubelet[2618]: I0517 00:43:34.110159 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff7st\" (UniqueName: \"kubernetes.io/projected/2eabe8d1-be14-457d-8403-61a1071170e6-kube-api-access-ff7st\") pod \"coredns-7c65d6cfc9-5jgfh\" (UID: \"2eabe8d1-be14-457d-8403-61a1071170e6\") " pod="kube-system/coredns-7c65d6cfc9-5jgfh" May 17 00:43:34.110278 kubelet[2618]: I0517 00:43:34.110262 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct26x\" (UniqueName: \"kubernetes.io/projected/35aa318f-1b35-4351-8adf-5e7ea7bad653-kube-api-access-ct26x\") pod \"coredns-7c65d6cfc9-652ct\" (UID: \"35aa318f-1b35-4351-8adf-5e7ea7bad653\") " pod="kube-system/coredns-7c65d6cfc9-652ct" May 17 00:43:34.110365 kubelet[2618]: I0517 00:43:34.110355 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35aa318f-1b35-4351-8adf-5e7ea7bad653-config-volume\") pod \"coredns-7c65d6cfc9-652ct\" (UID: \"35aa318f-1b35-4351-8adf-5e7ea7bad653\") " pod="kube-system/coredns-7c65d6cfc9-652ct" May 17 00:43:34.263328 env[1729]: time="2025-05-17T00:43:34.263281276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5jgfh,Uid:2eabe8d1-be14-457d-8403-61a1071170e6,Namespace:kube-system,Attempt:0,}" 
May 17 00:43:34.264721 env[1729]: time="2025-05-17T00:43:34.264676367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-652ct,Uid:35aa318f-1b35-4351-8adf-5e7ea7bad653,Namespace:kube-system,Attempt:0,}" May 17 00:43:34.557490 kubelet[2618]: I0517 00:43:34.557263 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5j8fj" podStartSLOduration=5.94237135 podStartE2EDuration="14.557237619s" podCreationTimestamp="2025-05-17 00:43:20 +0000 UTC" firstStartedPulling="2025-05-17 00:43:21.183849588 +0000 UTC m=+6.085146614" lastFinishedPulling="2025-05-17 00:43:29.798715873 +0000 UTC m=+14.700012883" observedRunningTime="2025-05-17 00:43:34.556062623 +0000 UTC m=+19.457359694" watchObservedRunningTime="2025-05-17 00:43:34.557237619 +0000 UTC m=+19.458534654" May 17 00:43:36.307835 (udev-worker)[3498]: Network interface NamePolicy= disabled on kernel command line. May 17 00:43:36.309699 systemd-networkd[1409]: cilium_host: Link UP May 17 00:43:36.314614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:43:36.310103 systemd-networkd[1409]: cilium_net: Link UP May 17 00:43:36.310108 systemd-networkd[1409]: cilium_net: Gained carrier May 17 00:43:36.310601 systemd-networkd[1409]: cilium_host: Gained carrier May 17 00:43:36.312558 systemd-networkd[1409]: cilium_host: Gained IPv6LL May 17 00:43:36.315245 (udev-worker)[3499]: Network interface NamePolicy= disabled on kernel command line. May 17 00:43:36.433504 (udev-worker)[3524]: Network interface NamePolicy= disabled on kernel command line. May 17 00:43:36.439611 systemd-networkd[1409]: cilium_vxlan: Link UP May 17 00:43:36.439617 systemd-networkd[1409]: cilium_vxlan: Gained carrier May 17 00:43:36.994424 kernel: NET: Registered PF_ALG protocol family May 17 00:43:37.022472 systemd-networkd[1409]: cilium_net: Gained IPv6LL May 17 00:43:37.819103 (udev-worker)[3526]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:43:37.851720 systemd-networkd[1409]: lxc_health: Link UP May 17 00:43:37.859508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:43:37.856525 systemd-networkd[1409]: lxc_health: Gained carrier May 17 00:43:38.431277 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL May 17 00:43:38.435549 systemd-networkd[1409]: lxc7ba18b51165a: Link UP May 17 00:43:38.444499 kernel: eth0: renamed from tmp28cdc May 17 00:43:38.454452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7ba18b51165a: link becomes ready May 17 00:43:38.454192 systemd-networkd[1409]: lxc7ba18b51165a: Gained carrier May 17 00:43:38.455658 systemd-networkd[1409]: lxc8af761cf076b: Link UP May 17 00:43:38.478416 kernel: eth0: renamed from tmpd07f4 May 17 00:43:38.492773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8af761cf076b: link becomes ready May 17 00:43:38.483859 systemd-networkd[1409]: lxc8af761cf076b: Gained carrier May 17 00:43:39.774583 systemd-networkd[1409]: lxc_health: Gained IPv6LL May 17 00:43:40.092635 systemd-networkd[1409]: lxc7ba18b51165a: Gained IPv6LL May 17 00:43:40.156609 systemd-networkd[1409]: lxc8af761cf076b: Gained IPv6LL May 17 00:43:42.856452 env[1729]: time="2025-05-17T00:43:42.854817944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:42.856452 env[1729]: time="2025-05-17T00:43:42.854876105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:42.856452 env[1729]: time="2025-05-17T00:43:42.854891191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:42.856452 env[1729]: time="2025-05-17T00:43:42.855293367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28cdca290681dc64abf01afbcd2f2144ea963383c796951ef8133fc266e77367 pid=3878 runtime=io.containerd.runc.v2 May 17 00:43:42.899690 env[1729]: time="2025-05-17T00:43:42.876129801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:42.899690 env[1729]: time="2025-05-17T00:43:42.876201288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:42.899690 env[1729]: time="2025-05-17T00:43:42.876217957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:42.899690 env[1729]: time="2025-05-17T00:43:42.876438221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d07f49a37754db3d2624463b4f939d8ff95df1f240a48bd3d1799e19947296f3 pid=3887 runtime=io.containerd.runc.v2 May 17 00:43:42.916430 systemd[1]: run-containerd-runc-k8s.io-28cdca290681dc64abf01afbcd2f2144ea963383c796951ef8133fc266e77367-runc.8rqnPG.mount: Deactivated successfully. 
May 17 00:43:43.032865 env[1729]: time="2025-05-17T00:43:43.032812166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-652ct,Uid:35aa318f-1b35-4351-8adf-5e7ea7bad653,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07f49a37754db3d2624463b4f939d8ff95df1f240a48bd3d1799e19947296f3\"" May 17 00:43:43.042442 env[1729]: time="2025-05-17T00:43:43.042374066Z" level=info msg="CreateContainer within sandbox \"d07f49a37754db3d2624463b4f939d8ff95df1f240a48bd3d1799e19947296f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:43:43.070419 env[1729]: time="2025-05-17T00:43:43.069702185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5jgfh,Uid:2eabe8d1-be14-457d-8403-61a1071170e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"28cdca290681dc64abf01afbcd2f2144ea963383c796951ef8133fc266e77367\"" May 17 00:43:43.076989 env[1729]: time="2025-05-17T00:43:43.076929386Z" level=info msg="CreateContainer within sandbox \"28cdca290681dc64abf01afbcd2f2144ea963383c796951ef8133fc266e77367\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:43:43.093616 env[1729]: time="2025-05-17T00:43:43.093537789Z" level=info msg="CreateContainer within sandbox \"d07f49a37754db3d2624463b4f939d8ff95df1f240a48bd3d1799e19947296f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d611b63885dd4550bc54fa773e839cfc993575677389f9a2315bc58ee54e6722\"" May 17 00:43:43.096005 env[1729]: time="2025-05-17T00:43:43.095958415Z" level=info msg="StartContainer for \"d611b63885dd4550bc54fa773e839cfc993575677389f9a2315bc58ee54e6722\"" May 17 00:43:43.115356 env[1729]: time="2025-05-17T00:43:43.115208450Z" level=info msg="CreateContainer within sandbox \"28cdca290681dc64abf01afbcd2f2144ea963383c796951ef8133fc266e77367\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06054ace77b5b5c3c2a46012e18359bca9eb941c5baec25cdd5e80e71b235e8a\"" May 17 00:43:43.116947 env[1729]: 
time="2025-05-17T00:43:43.116905053Z" level=info msg="StartContainer for \"06054ace77b5b5c3c2a46012e18359bca9eb941c5baec25cdd5e80e71b235e8a\"" May 17 00:43:43.197796 env[1729]: time="2025-05-17T00:43:43.197747341Z" level=info msg="StartContainer for \"d611b63885dd4550bc54fa773e839cfc993575677389f9a2315bc58ee54e6722\" returns successfully" May 17 00:43:43.207591 env[1729]: time="2025-05-17T00:43:43.207547701Z" level=info msg="StartContainer for \"06054ace77b5b5c3c2a46012e18359bca9eb941c5baec25cdd5e80e71b235e8a\" returns successfully" May 17 00:43:43.604991 kubelet[2618]: I0517 00:43:43.604931 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5jgfh" podStartSLOduration=23.604902711 podStartE2EDuration="23.604902711s" podCreationTimestamp="2025-05-17 00:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:43.572920961 +0000 UTC m=+28.474217994" watchObservedRunningTime="2025-05-17 00:43:43.604902711 +0000 UTC m=+28.506199746" May 17 00:43:44.276426 kubelet[2618]: I0517 00:43:44.276344 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-652ct" podStartSLOduration=24.276321648 podStartE2EDuration="24.276321648s" podCreationTimestamp="2025-05-17 00:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:43.610703763 +0000 UTC m=+28.512000796" watchObservedRunningTime="2025-05-17 00:43:44.276321648 +0000 UTC m=+29.177618688" May 17 00:43:47.555940 systemd[1]: Started sshd@5-172.31.31.72:22-139.178.68.195:55638.service. 
May 17 00:43:47.747315 sshd[4038]: Accepted publickey for core from 139.178.68.195 port 55638 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:43:47.750483 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:47.761510 systemd[1]: Started session-6.scope. May 17 00:43:47.763342 systemd-logind[1721]: New session 6 of user core. May 17 00:43:48.065843 sshd[4038]: pam_unix(sshd:session): session closed for user core May 17 00:43:48.068655 systemd[1]: sshd@5-172.31.31.72:22-139.178.68.195:55638.service: Deactivated successfully. May 17 00:43:48.072237 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:43:48.073134 systemd-logind[1721]: Session 6 logged out. Waiting for processes to exit. May 17 00:43:48.075060 systemd-logind[1721]: Removed session 6. May 17 00:43:53.091244 systemd[1]: Started sshd@6-172.31.31.72:22-139.178.68.195:55640.service. May 17 00:43:53.261522 sshd[4054]: Accepted publickey for core from 139.178.68.195 port 55640 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:43:53.264211 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:53.270557 systemd[1]: Started session-7.scope. May 17 00:43:53.271873 systemd-logind[1721]: New session 7 of user core. May 17 00:43:53.484330 sshd[4054]: pam_unix(sshd:session): session closed for user core May 17 00:43:53.487646 systemd[1]: sshd@6-172.31.31.72:22-139.178.68.195:55640.service: Deactivated successfully. May 17 00:43:53.488408 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:43:53.489576 systemd-logind[1721]: Session 7 logged out. Waiting for processes to exit. May 17 00:43:53.490339 systemd-logind[1721]: Removed session 7. May 17 00:43:58.509717 systemd[1]: Started sshd@7-172.31.31.72:22-139.178.68.195:48426.service. 
May 17 00:43:58.673107 sshd[4069]: Accepted publickey for core from 139.178.68.195 port 48426 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:43:58.674943 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:58.680967 systemd[1]: Started session-8.scope. May 17 00:43:58.681958 systemd-logind[1721]: New session 8 of user core. May 17 00:43:58.889986 sshd[4069]: pam_unix(sshd:session): session closed for user core May 17 00:43:58.893603 systemd[1]: sshd@7-172.31.31.72:22-139.178.68.195:48426.service: Deactivated successfully. May 17 00:43:58.894593 systemd-logind[1721]: Session 8 logged out. Waiting for processes to exit. May 17 00:43:58.894653 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:43:58.895712 systemd-logind[1721]: Removed session 8. May 17 00:44:03.915205 systemd[1]: Started sshd@8-172.31.31.72:22-139.178.68.195:55446.service. May 17 00:44:04.087751 sshd[4082]: Accepted publickey for core from 139.178.68.195 port 55446 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:04.089167 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:04.094754 systemd[1]: Started session-9.scope. May 17 00:44:04.095289 systemd-logind[1721]: New session 9 of user core. May 17 00:44:04.312571 sshd[4082]: pam_unix(sshd:session): session closed for user core May 17 00:44:04.317759 systemd[1]: sshd@8-172.31.31.72:22-139.178.68.195:55446.service: Deactivated successfully. May 17 00:44:04.319356 systemd-logind[1721]: Session 9 logged out. Waiting for processes to exit. May 17 00:44:04.320130 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:44:04.321140 systemd-logind[1721]: Removed session 9. May 17 00:44:09.335996 systemd[1]: Started sshd@9-172.31.31.72:22-139.178.68.195:55448.service. 
May 17 00:44:09.500284 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 55448 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:09.502112 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:09.508285 systemd[1]: Started session-10.scope. May 17 00:44:09.508804 systemd-logind[1721]: New session 10 of user core. May 17 00:44:09.704754 sshd[4096]: pam_unix(sshd:session): session closed for user core May 17 00:44:09.707833 systemd[1]: sshd@9-172.31.31.72:22-139.178.68.195:55448.service: Deactivated successfully. May 17 00:44:09.709100 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:44:09.709670 systemd-logind[1721]: Session 10 logged out. Waiting for processes to exit. May 17 00:44:09.710352 systemd-logind[1721]: Removed session 10. May 17 00:44:09.729659 systemd[1]: Started sshd@10-172.31.31.72:22-139.178.68.195:55460.service. May 17 00:44:09.890068 sshd[4110]: Accepted publickey for core from 139.178.68.195 port 55460 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:09.891572 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:09.898119 systemd[1]: Started session-11.scope. May 17 00:44:09.898720 systemd-logind[1721]: New session 11 of user core. May 17 00:44:10.179090 sshd[4110]: pam_unix(sshd:session): session closed for user core May 17 00:44:10.189978 systemd[1]: sshd@10-172.31.31.72:22-139.178.68.195:55460.service: Deactivated successfully. May 17 00:44:10.192315 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:44:10.193837 systemd-logind[1721]: Session 11 logged out. Waiting for processes to exit. May 17 00:44:10.196009 systemd-logind[1721]: Removed session 11. May 17 00:44:10.202847 systemd[1]: Started sshd@11-172.31.31.72:22-139.178.68.195:55464.service. 
May 17 00:44:10.396475 sshd[4121]: Accepted publickey for core from 139.178.68.195 port 55464 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:10.398438 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:10.404904 systemd-logind[1721]: New session 12 of user core. May 17 00:44:10.405682 systemd[1]: Started session-12.scope. May 17 00:44:10.616208 sshd[4121]: pam_unix(sshd:session): session closed for user core May 17 00:44:10.619829 systemd-logind[1721]: Session 12 logged out. Waiting for processes to exit. May 17 00:44:10.620325 systemd[1]: sshd@11-172.31.31.72:22-139.178.68.195:55464.service: Deactivated successfully. May 17 00:44:10.621468 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:44:10.627337 systemd-logind[1721]: Removed session 12. May 17 00:44:15.641932 systemd[1]: Started sshd@12-172.31.31.72:22-139.178.68.195:39952.service. May 17 00:44:15.802546 sshd[4135]: Accepted publickey for core from 139.178.68.195 port 39952 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:15.803977 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:15.810687 systemd[1]: Started session-13.scope. May 17 00:44:15.810892 systemd-logind[1721]: New session 13 of user core. May 17 00:44:16.060311 sshd[4135]: pam_unix(sshd:session): session closed for user core May 17 00:44:16.066199 systemd[1]: sshd@12-172.31.31.72:22-139.178.68.195:39952.service: Deactivated successfully. May 17 00:44:16.067617 systemd-logind[1721]: Session 13 logged out. Waiting for processes to exit. May 17 00:44:16.067730 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:44:16.069920 systemd-logind[1721]: Removed session 13. May 17 00:44:21.084560 systemd[1]: Started sshd@13-172.31.31.72:22-139.178.68.195:39958.service. 
May 17 00:44:21.256452 sshd[4148]: Accepted publickey for core from 139.178.68.195 port 39958 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:21.258071 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:21.264044 systemd[1]: Started session-14.scope. May 17 00:44:21.264451 systemd-logind[1721]: New session 14 of user core. May 17 00:44:21.475633 sshd[4148]: pam_unix(sshd:session): session closed for user core May 17 00:44:21.479327 systemd[1]: sshd@13-172.31.31.72:22-139.178.68.195:39958.service: Deactivated successfully. May 17 00:44:21.480088 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:44:21.480472 systemd-logind[1721]: Session 14 logged out. Waiting for processes to exit. May 17 00:44:21.481743 systemd-logind[1721]: Removed session 14. May 17 00:44:21.499385 systemd[1]: Started sshd@14-172.31.31.72:22-139.178.68.195:39970.service. May 17 00:44:21.658482 sshd[4160]: Accepted publickey for core from 139.178.68.195 port 39970 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:21.659866 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:21.666295 systemd[1]: Started session-15.scope. May 17 00:44:21.666633 systemd-logind[1721]: New session 15 of user core. May 17 00:44:22.394569 sshd[4160]: pam_unix(sshd:session): session closed for user core May 17 00:44:22.400879 systemd[1]: sshd@14-172.31.31.72:22-139.178.68.195:39970.service: Deactivated successfully. May 17 00:44:22.400879 systemd-logind[1721]: Session 15 logged out. Waiting for processes to exit. May 17 00:44:22.402034 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:44:22.403923 systemd-logind[1721]: Removed session 15. May 17 00:44:22.419138 systemd[1]: Started sshd@15-172.31.31.72:22-139.178.68.195:39980.service. 
May 17 00:44:22.611664 sshd[4173]: Accepted publickey for core from 139.178.68.195 port 39980 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:22.613620 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:22.620076 systemd[1]: Started session-16.scope. May 17 00:44:22.621369 systemd-logind[1721]: New session 16 of user core. May 17 00:44:24.355324 sshd[4173]: pam_unix(sshd:session): session closed for user core May 17 00:44:24.365344 systemd-logind[1721]: Session 16 logged out. Waiting for processes to exit. May 17 00:44:24.365448 systemd[1]: sshd@15-172.31.31.72:22-139.178.68.195:39980.service: Deactivated successfully. May 17 00:44:24.366583 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:44:24.367175 systemd-logind[1721]: Removed session 16. May 17 00:44:24.380646 systemd[1]: Started sshd@16-172.31.31.72:22-139.178.68.195:50410.service. May 17 00:44:24.547060 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 50410 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:24.548889 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:24.556207 systemd[1]: Started session-17.scope. May 17 00:44:24.556881 systemd-logind[1721]: New session 17 of user core. May 17 00:44:24.961847 sshd[4195]: pam_unix(sshd:session): session closed for user core May 17 00:44:24.966694 systemd[1]: sshd@16-172.31.31.72:22-139.178.68.195:50410.service: Deactivated successfully. May 17 00:44:24.967463 systemd-logind[1721]: Session 17 logged out. Waiting for processes to exit. May 17 00:44:24.967470 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:44:24.968504 systemd-logind[1721]: Removed session 17. May 17 00:44:24.985022 systemd[1]: Started sshd@17-172.31.31.72:22-139.178.68.195:50426.service. 
May 17 00:44:25.156166 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 50426 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:25.157723 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:25.162918 systemd[1]: Started session-18.scope. May 17 00:44:25.163448 systemd-logind[1721]: New session 18 of user core. May 17 00:44:25.369508 sshd[4206]: pam_unix(sshd:session): session closed for user core May 17 00:44:25.373248 systemd[1]: sshd@17-172.31.31.72:22-139.178.68.195:50426.service: Deactivated successfully. May 17 00:44:25.374835 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:44:25.375940 systemd-logind[1721]: Session 18 logged out. Waiting for processes to exit. May 17 00:44:25.377667 systemd-logind[1721]: Removed session 18. May 17 00:44:30.395085 systemd[1]: Started sshd@18-172.31.31.72:22-139.178.68.195:50432.service. May 17 00:44:30.560938 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 50432 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:30.562633 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:30.569171 systemd[1]: Started session-19.scope. May 17 00:44:30.569479 systemd-logind[1721]: New session 19 of user core. May 17 00:44:30.762150 sshd[4219]: pam_unix(sshd:session): session closed for user core May 17 00:44:30.765417 systemd[1]: sshd@18-172.31.31.72:22-139.178.68.195:50432.service: Deactivated successfully. May 17 00:44:30.765594 systemd-logind[1721]: Session 19 logged out. Waiting for processes to exit. May 17 00:44:30.766187 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:44:30.766750 systemd-logind[1721]: Removed session 19. May 17 00:44:35.787246 systemd[1]: Started sshd@19-172.31.31.72:22-139.178.68.195:58812.service. 
May 17 00:44:35.951564 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 58812 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:35.953194 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:35.959135 systemd[1]: Started session-20.scope. May 17 00:44:35.959643 systemd-logind[1721]: New session 20 of user core. May 17 00:44:36.148664 sshd[4235]: pam_unix(sshd:session): session closed for user core May 17 00:44:36.151904 systemd[1]: sshd@19-172.31.31.72:22-139.178.68.195:58812.service: Deactivated successfully. May 17 00:44:36.152870 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:44:36.152877 systemd-logind[1721]: Session 20 logged out. Waiting for processes to exit. May 17 00:44:36.154059 systemd-logind[1721]: Removed session 20. May 17 00:44:41.172249 systemd[1]: Started sshd@20-172.31.31.72:22-139.178.68.195:58814.service. May 17 00:44:41.330961 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 58814 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:41.332532 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:41.339041 systemd[1]: Started session-21.scope. May 17 00:44:41.339974 systemd-logind[1721]: New session 21 of user core. May 17 00:44:41.535386 sshd[4248]: pam_unix(sshd:session): session closed for user core May 17 00:44:41.539557 systemd[1]: sshd@20-172.31.31.72:22-139.178.68.195:58814.service: Deactivated successfully. May 17 00:44:41.541007 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:44:41.541336 systemd-logind[1721]: Session 21 logged out. Waiting for processes to exit. May 17 00:44:41.543355 systemd-logind[1721]: Removed session 21. May 17 00:44:46.559922 systemd[1]: Started sshd@21-172.31.31.72:22-139.178.68.195:54270.service. 
May 17 00:44:46.722082 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 54270 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:46.723901 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:46.730524 systemd[1]: Started session-22.scope. May 17 00:44:46.731812 systemd-logind[1721]: New session 22 of user core. May 17 00:44:46.917286 sshd[4261]: pam_unix(sshd:session): session closed for user core May 17 00:44:46.921287 systemd[1]: sshd@21-172.31.31.72:22-139.178.68.195:54270.service: Deactivated successfully. May 17 00:44:46.923021 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:44:46.923636 systemd-logind[1721]: Session 22 logged out. Waiting for processes to exit. May 17 00:44:46.925062 systemd-logind[1721]: Removed session 22. May 17 00:44:46.943346 systemd[1]: Started sshd@22-172.31.31.72:22-139.178.68.195:54284.service. May 17 00:44:47.105721 sshd[4274]: Accepted publickey for core from 139.178.68.195 port 54284 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:47.107296 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:47.113214 systemd[1]: Started session-23.scope. May 17 00:44:47.114053 systemd-logind[1721]: New session 23 of user core. 
May 17 00:44:48.871462 env[1729]: time="2025-05-17T00:44:48.870793685Z" level=info msg="StopContainer for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" with timeout 30 (s)" May 17 00:44:48.871462 env[1729]: time="2025-05-17T00:44:48.871313422Z" level=info msg="Stop container \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" with signal terminated" May 17 00:44:48.876453 env[1729]: time="2025-05-17T00:44:48.876169693Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:44:48.897189 env[1729]: time="2025-05-17T00:44:48.897150533Z" level=info msg="StopContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" with timeout 2 (s)" May 17 00:44:48.897902 env[1729]: time="2025-05-17T00:44:48.897872396Z" level=info msg="Stop container \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" with signal terminated" May 17 00:44:48.909586 systemd-networkd[1409]: lxc_health: Link DOWN May 17 00:44:48.909595 systemd-networkd[1409]: lxc_health: Lost carrier May 17 00:44:48.922053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171-rootfs.mount: Deactivated successfully. 
May 17 00:44:48.954626 env[1729]: time="2025-05-17T00:44:48.954572101Z" level=info msg="shim disconnected" id=ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171 May 17 00:44:48.956009 env[1729]: time="2025-05-17T00:44:48.955974700Z" level=warning msg="cleaning up after shim disconnected" id=ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171 namespace=k8s.io May 17 00:44:48.956179 env[1729]: time="2025-05-17T00:44:48.956162833Z" level=info msg="cleaning up dead shim" May 17 00:44:48.964841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d-rootfs.mount: Deactivated successfully. May 17 00:44:48.976548 env[1729]: time="2025-05-17T00:44:48.976508230Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4344 runtime=io.containerd.runc.v2\n" May 17 00:44:48.983117 env[1729]: time="2025-05-17T00:44:48.983070386Z" level=info msg="StopContainer for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" returns successfully" May 17 00:44:48.984094 env[1729]: time="2025-05-17T00:44:48.983958448Z" level=info msg="shim disconnected" id=72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d May 17 00:44:48.984232 env[1729]: time="2025-05-17T00:44:48.984100866Z" level=warning msg="cleaning up after shim disconnected" id=72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d namespace=k8s.io May 17 00:44:48.984232 env[1729]: time="2025-05-17T00:44:48.984113884Z" level=info msg="cleaning up dead shim" May 17 00:44:48.985345 env[1729]: time="2025-05-17T00:44:48.985304644Z" level=info msg="StopPodSandbox for \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\"" May 17 00:44:48.985599 env[1729]: time="2025-05-17T00:44:48.985571052Z" level=info msg="Container to stop \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:48.988915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d-shm.mount: Deactivated successfully. May 17 00:44:49.010131 env[1729]: time="2025-05-17T00:44:49.010089819Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4356 runtime=io.containerd.runc.v2\n" May 17 00:44:49.014165 env[1729]: time="2025-05-17T00:44:49.014122776Z" level=info msg="StopContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" returns successfully" May 17 00:44:49.014943 env[1729]: time="2025-05-17T00:44:49.014910107Z" level=info msg="StopPodSandbox for \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\"" May 17 00:44:49.015213 env[1729]: time="2025-05-17T00:44:49.015180878Z" level=info msg="Container to stop \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:49.015334 env[1729]: time="2025-05-17T00:44:49.015311496Z" level=info msg="Container to stop \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:49.015531 env[1729]: time="2025-05-17T00:44:49.015507201Z" level=info msg="Container to stop \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:49.015772 env[1729]: time="2025-05-17T00:44:49.015746568Z" level=info msg="Container to stop \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:49.015896 env[1729]: time="2025-05-17T00:44:49.015874983Z" level=info msg="Container to stop 
\"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:49.019108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495-shm.mount: Deactivated successfully. May 17 00:44:49.051052 env[1729]: time="2025-05-17T00:44:49.050991331Z" level=info msg="shim disconnected" id=fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d May 17 00:44:49.051290 env[1729]: time="2025-05-17T00:44:49.051058834Z" level=warning msg="cleaning up after shim disconnected" id=fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d namespace=k8s.io May 17 00:44:49.051290 env[1729]: time="2025-05-17T00:44:49.051072223Z" level=info msg="cleaning up dead shim" May 17 00:44:49.069458 env[1729]: time="2025-05-17T00:44:49.069361623Z" level=info msg="shim disconnected" id=ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495 May 17 00:44:49.069458 env[1729]: time="2025-05-17T00:44:49.069440374Z" level=warning msg="cleaning up after shim disconnected" id=ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495 namespace=k8s.io May 17 00:44:49.069458 env[1729]: time="2025-05-17T00:44:49.069453273Z" level=info msg="cleaning up dead shim" May 17 00:44:49.078455 env[1729]: time="2025-05-17T00:44:49.078369637Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4408 runtime=io.containerd.runc.v2\n" May 17 00:44:49.078833 env[1729]: time="2025-05-17T00:44:49.078795747Z" level=info msg="TearDown network for sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" successfully" May 17 00:44:49.078950 env[1729]: time="2025-05-17T00:44:49.078831136Z" level=info msg="StopPodSandbox for \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" returns successfully" May 17 00:44:49.084579 env[1729]: 
time="2025-05-17T00:44:49.084530585Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4420 runtime=io.containerd.runc.v2\n" May 17 00:44:49.085674 env[1729]: time="2025-05-17T00:44:49.085166269Z" level=info msg="TearDown network for sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" successfully" May 17 00:44:49.085674 env[1729]: time="2025-05-17T00:44:49.085196588Z" level=info msg="StopPodSandbox for \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" returns successfully" May 17 00:44:49.274804 kubelet[2618]: I0517 00:44:49.274754 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-etc-cni-netd\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.274804 kubelet[2618]: I0517 00:44:49.274807 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-config-path\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274829 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-xtables-lock\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274848 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p72x7\" (UniqueName: \"kubernetes.io/projected/46d8f109-c1d8-410d-8f43-2d79c5880283-kube-api-access-p72x7\") pod \"46d8f109-c1d8-410d-8f43-2d79c5880283\" (UID: 
\"46d8f109-c1d8-410d-8f43-2d79c5880283\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274861 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-lib-modules\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274876 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-bpf-maps\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274889 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cni-path\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275359 kubelet[2618]: I0517 00:44:49.274903 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hostproc\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274919 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a091ee-ec8c-46b0-aa4f-a70034ede92f-clustermesh-secrets\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274933 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hubble-tls\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274948 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d8f109-c1d8-410d-8f43-2d79c5880283-cilium-config-path\") pod \"46d8f109-c1d8-410d-8f43-2d79c5880283\" (UID: \"46d8f109-c1d8-410d-8f43-2d79c5880283\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274962 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-run\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274978 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-kernel\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275569 kubelet[2618]: I0517 00:44:49.274993 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-cgroup\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275744 kubelet[2618]: I0517 00:44:49.275009 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24w9z\" (UniqueName: \"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-kube-api-access-24w9z\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.275744 kubelet[2618]: I0517 
00:44:49.275025 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-net\") pod \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\" (UID: \"52a091ee-ec8c-46b0-aa4f-a70034ede92f\") " May 17 00:44:49.280121 kubelet[2618]: I0517 00:44:49.277314 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hostproc" (OuterVolumeSpecName: "hostproc") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.280297 kubelet[2618]: I0517 00:44:49.277315 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.280458 kubelet[2618]: I0517 00:44:49.280386 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.289900 kubelet[2618]: I0517 00:44:49.289857 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:44:49.290039 kubelet[2618]: I0517 00:44:49.289992 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.292688 kubelet[2618]: I0517 00:44:49.292643 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.292688 kubelet[2618]: I0517 00:44:49.292785 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.292688 kubelet[2618]: I0517 00:44:49.292806 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cni-path" (OuterVolumeSpecName: "cni-path") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.292688 kubelet[2618]: I0517 00:44:49.292829 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.294831 kubelet[2618]: I0517 00:44:49.294797 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d8f109-c1d8-410d-8f43-2d79c5880283-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46d8f109-c1d8-410d-8f43-2d79c5880283" (UID: "46d8f109-c1d8-410d-8f43-2d79c5880283"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:44:49.294961 kubelet[2618]: I0517 00:44:49.294844 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.295598 kubelet[2618]: I0517 00:44:49.295571 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:49.295806 kubelet[2618]: I0517 00:44:49.295779 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a091ee-ec8c-46b0-aa4f-a70034ede92f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:44:49.297930 kubelet[2618]: I0517 00:44:49.297903 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d8f109-c1d8-410d-8f43-2d79c5880283-kube-api-access-p72x7" (OuterVolumeSpecName: "kube-api-access-p72x7") pod "46d8f109-c1d8-410d-8f43-2d79c5880283" (UID: "46d8f109-c1d8-410d-8f43-2d79c5880283"). InnerVolumeSpecName "kube-api-access-p72x7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:44:49.298078 kubelet[2618]: I0517 00:44:49.297950 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-kube-api-access-24w9z" (OuterVolumeSpecName: "kube-api-access-24w9z") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "kube-api-access-24w9z". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:44:49.300451 kubelet[2618]: I0517 00:44:49.300359 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52a091ee-ec8c-46b0-aa4f-a70034ede92f" (UID: "52a091ee-ec8c-46b0-aa4f-a70034ede92f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375783 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-run\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375818 2618 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-kernel\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375830 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-cgroup\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375839 2618 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24w9z\" (UniqueName: \"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-kube-api-access-24w9z\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375848 2618 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-host-proc-sys-net\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.375844 kubelet[2618]: I0517 00:44:49.375859 2618 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-etc-cni-netd\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375867 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cilium-config-path\") on node 
\"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375901 2618 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-xtables-lock\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375910 2618 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p72x7\" (UniqueName: \"kubernetes.io/projected/46d8f109-c1d8-410d-8f43-2d79c5880283-kube-api-access-p72x7\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375919 2618 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-lib-modules\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375929 2618 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-bpf-maps\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375937 2618 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-cni-path\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375944 2618 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hostproc\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376158 kubelet[2618]: I0517 00:44:49.375952 2618 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a091ee-ec8c-46b0-aa4f-a70034ede92f-clustermesh-secrets\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376364 kubelet[2618]: I0517 
00:44:49.375961 2618 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a091ee-ec8c-46b0-aa4f-a70034ede92f-hubble-tls\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.376364 kubelet[2618]: I0517 00:44:49.375969 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d8f109-c1d8-410d-8f43-2d79c5880283-cilium-config-path\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:49.726986 kubelet[2618]: I0517 00:44:49.720523 2618 scope.go:117] "RemoveContainer" containerID="ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171" May 17 00:44:49.743131 env[1729]: time="2025-05-17T00:44:49.743065161Z" level=info msg="RemoveContainer for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\"" May 17 00:44:49.754649 env[1729]: time="2025-05-17T00:44:49.754504652Z" level=info msg="RemoveContainer for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" returns successfully" May 17 00:44:49.755047 kubelet[2618]: I0517 00:44:49.755026 2618 scope.go:117] "RemoveContainer" containerID="ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171" May 17 00:44:49.755552 env[1729]: time="2025-05-17T00:44:49.755463463Z" level=error msg="ContainerStatus for \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\": not found" May 17 00:44:49.759954 kubelet[2618]: E0517 00:44:49.759905 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\": not found" containerID="ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171" May 17 00:44:49.762037 kubelet[2618]: 
I0517 00:44:49.760192 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171"} err="failed to get container status \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba6dd25be385d3255a74e667fc757080940c4a3da29aa2d9daa746b743771171\": not found" May 17 00:44:49.762037 kubelet[2618]: I0517 00:44:49.762028 2618 scope.go:117] "RemoveContainer" containerID="72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d" May 17 00:44:49.763698 env[1729]: time="2025-05-17T00:44:49.763411468Z" level=info msg="RemoveContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\"" May 17 00:44:49.769568 env[1729]: time="2025-05-17T00:44:49.769520942Z" level=info msg="RemoveContainer for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" returns successfully" May 17 00:44:49.769873 kubelet[2618]: I0517 00:44:49.769833 2618 scope.go:117] "RemoveContainer" containerID="296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4" May 17 00:44:49.771163 env[1729]: time="2025-05-17T00:44:49.771118943Z" level=info msg="RemoveContainer for \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\"" May 17 00:44:49.776415 env[1729]: time="2025-05-17T00:44:49.776356026Z" level=info msg="RemoveContainer for \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\" returns successfully" May 17 00:44:49.776662 kubelet[2618]: I0517 00:44:49.776625 2618 scope.go:117] "RemoveContainer" containerID="beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6" May 17 00:44:49.778447 env[1729]: time="2025-05-17T00:44:49.778376043Z" level=info msg="RemoveContainer for \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\"" May 17 00:44:49.786179 env[1729]: time="2025-05-17T00:44:49.786126575Z" 
level=info msg="RemoveContainer for \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\" returns successfully" May 17 00:44:49.786459 kubelet[2618]: I0517 00:44:49.786433 2618 scope.go:117] "RemoveContainer" containerID="9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679" May 17 00:44:49.787626 env[1729]: time="2025-05-17T00:44:49.787586360Z" level=info msg="RemoveContainer for \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\"" May 17 00:44:49.793140 env[1729]: time="2025-05-17T00:44:49.793094529Z" level=info msg="RemoveContainer for \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\" returns successfully" May 17 00:44:49.793329 kubelet[2618]: I0517 00:44:49.793302 2618 scope.go:117] "RemoveContainer" containerID="9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5" May 17 00:44:49.794460 env[1729]: time="2025-05-17T00:44:49.794376656Z" level=info msg="RemoveContainer for \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\"" May 17 00:44:49.805906 env[1729]: time="2025-05-17T00:44:49.805856958Z" level=info msg="RemoveContainer for \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\" returns successfully" May 17 00:44:49.806154 kubelet[2618]: I0517 00:44:49.806118 2618 scope.go:117] "RemoveContainer" containerID="72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d" May 17 00:44:49.806489 env[1729]: time="2025-05-17T00:44:49.806424761Z" level=error msg="ContainerStatus for \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\": not found" May 17 00:44:49.806663 kubelet[2618]: E0517 00:44:49.806632 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\": not found" containerID="72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d" May 17 00:44:49.806746 kubelet[2618]: I0517 00:44:49.806671 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d"} err="failed to get container status \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\": rpc error: code = NotFound desc = an error occurred when try to find container \"72567df1b00e90ed05d85f9f3eb7632737f38019ce31cf0f738774adfc2b341d\": not found" May 17 00:44:49.806746 kubelet[2618]: I0517 00:44:49.806700 2618 scope.go:117] "RemoveContainer" containerID="296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4" May 17 00:44:49.806954 env[1729]: time="2025-05-17T00:44:49.806902915Z" level=error msg="ContainerStatus for \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\": not found" May 17 00:44:49.807078 kubelet[2618]: E0517 00:44:49.807051 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\": not found" containerID="296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4" May 17 00:44:49.807147 kubelet[2618]: I0517 00:44:49.807082 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4"} err="failed to get container status \"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"296f0b967b05912ddac623087eebde1e9453e160cc719d13a959ca1f6cd5adc4\": not found" May 17 00:44:49.807147 kubelet[2618]: I0517 00:44:49.807101 2618 scope.go:117] "RemoveContainer" containerID="beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6" May 17 00:44:49.807343 env[1729]: time="2025-05-17T00:44:49.807290318Z" level=error msg="ContainerStatus for \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\": not found" May 17 00:44:49.807487 kubelet[2618]: E0517 00:44:49.807463 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\": not found" containerID="beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6" May 17 00:44:49.807572 kubelet[2618]: I0517 00:44:49.807490 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6"} err="failed to get container status \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"beb2482a2abbce62b21e1edf1d55d10afe07f7f521ca8cc2d44204efea9519f6\": not found" May 17 00:44:49.807572 kubelet[2618]: I0517 00:44:49.807509 2618 scope.go:117] "RemoveContainer" containerID="9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679" May 17 00:44:49.807737 env[1729]: time="2025-05-17T00:44:49.807684771Z" level=error msg="ContainerStatus for \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\": not 
found" May 17 00:44:49.807856 kubelet[2618]: E0517 00:44:49.807830 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\": not found" containerID="9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679" May 17 00:44:49.807930 kubelet[2618]: I0517 00:44:49.807858 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679"} err="failed to get container status \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c641448c6f77f39d567f190104c7bdc9e4016e78a470aefd65c8acaca0ae679\": not found" May 17 00:44:49.807930 kubelet[2618]: I0517 00:44:49.807877 2618 scope.go:117] "RemoveContainer" containerID="9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5" May 17 00:44:49.808117 env[1729]: time="2025-05-17T00:44:49.808063091Z" level=error msg="ContainerStatus for \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\": not found" May 17 00:44:49.808228 kubelet[2618]: E0517 00:44:49.808202 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\": not found" containerID="9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5" May 17 00:44:49.808303 kubelet[2618]: I0517 00:44:49.808229 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5"} 
err="failed to get container status \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c356b95dcd39248ab09dfdf2eb89102c2f7a5267fed1dacc01c93de9ef6adb5\": not found" May 17 00:44:49.846240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d-rootfs.mount: Deactivated successfully. May 17 00:44:49.846403 systemd[1]: var-lib-kubelet-pods-46d8f109\x2dc1d8\x2d410d\x2d8f43\x2d2d79c5880283-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp72x7.mount: Deactivated successfully. May 17 00:44:49.846493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495-rootfs.mount: Deactivated successfully. May 17 00:44:49.846580 systemd[1]: var-lib-kubelet-pods-52a091ee\x2dec8c\x2d46b0\x2daa4f\x2da70034ede92f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24w9z.mount: Deactivated successfully. May 17 00:44:49.846669 systemd[1]: var-lib-kubelet-pods-52a091ee\x2dec8c\x2d46b0\x2daa4f\x2da70034ede92f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:44:49.846759 systemd[1]: var-lib-kubelet-pods-52a091ee\x2dec8c\x2d46b0\x2daa4f\x2da70034ede92f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:44:50.544974 kubelet[2618]: E0517 00:44:50.544902 2618 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:44:50.780996 sshd[4274]: pam_unix(sshd:session): session closed for user core May 17 00:44:50.783805 systemd[1]: sshd@22-172.31.31.72:22-139.178.68.195:54284.service: Deactivated successfully. May 17 00:44:50.784808 systemd[1]: session-23.scope: Deactivated successfully. 
May 17 00:44:50.784972 systemd-logind[1721]: Session 23 logged out. Waiting for processes to exit. May 17 00:44:50.786087 systemd-logind[1721]: Removed session 23. May 17 00:44:50.803762 systemd[1]: Started sshd@23-172.31.31.72:22-139.178.68.195:54288.service. May 17 00:44:50.984967 sshd[4444]: Accepted publickey for core from 139.178.68.195 port 54288 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:50.986430 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:50.992524 systemd-logind[1721]: New session 24 of user core. May 17 00:44:50.993474 systemd[1]: Started session-24.scope. May 17 00:44:51.405872 kubelet[2618]: I0517 00:44:51.405839 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d8f109-c1d8-410d-8f43-2d79c5880283" path="/var/lib/kubelet/pods/46d8f109-c1d8-410d-8f43-2d79c5880283/volumes" May 17 00:44:51.406596 kubelet[2618]: I0517 00:44:51.406578 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" path="/var/lib/kubelet/pods/52a091ee-ec8c-46b0-aa4f-a70034ede92f/volumes" May 17 00:44:51.821109 sshd[4444]: pam_unix(sshd:session): session closed for user core May 17 00:44:51.823895 systemd[1]: sshd@23-172.31.31.72:22-139.178.68.195:54288.service: Deactivated successfully. May 17 00:44:51.824959 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:44:51.824978 systemd-logind[1721]: Session 24 logged out. Waiting for processes to exit. May 17 00:44:51.826373 systemd-logind[1721]: Removed session 24. May 17 00:44:51.846588 systemd[1]: Started sshd@24-172.31.31.72:22-139.178.68.195:54290.service. 
May 17 00:44:51.855226 kubelet[2618]: E0517 00:44:51.855191 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="mount-bpf-fs" May 17 00:44:51.855226 kubelet[2618]: E0517 00:44:51.855223 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d8f109-c1d8-410d-8f43-2d79c5880283" containerName="cilium-operator" May 17 00:44:51.855226 kubelet[2618]: E0517 00:44:51.855230 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="cilium-agent" May 17 00:44:51.855644 kubelet[2618]: E0517 00:44:51.855236 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="mount-cgroup" May 17 00:44:51.855644 kubelet[2618]: E0517 00:44:51.855244 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="apply-sysctl-overwrites" May 17 00:44:51.855644 kubelet[2618]: E0517 00:44:51.855250 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="clean-cilium-state" May 17 00:44:51.855644 kubelet[2618]: I0517 00:44:51.855276 2618 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a091ee-ec8c-46b0-aa4f-a70034ede92f" containerName="cilium-agent" May 17 00:44:51.855644 kubelet[2618]: I0517 00:44:51.855282 2618 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d8f109-c1d8-410d-8f43-2d79c5880283" containerName="cilium-operator" May 17 00:44:52.005954 kubelet[2618]: I0517 00:44:52.005898 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-clustermesh-secrets\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 
00:44:52.005954 kubelet[2618]: I0517 00:44:52.005946 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zqj\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-kube-api-access-22zqj\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.005954 kubelet[2618]: I0517 00:44:52.005965 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-bpf-maps\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.005981 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-ipsec-secrets\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.005999 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-net\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.006013 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-hubble-tls\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.006027 2618 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cni-path\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.006041 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-etc-cni-netd\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006172 kubelet[2618]: I0517 00:44:52.006056 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-cgroup\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006072 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-lib-modules\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006086 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-config-path\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006103 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-kernel\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006117 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-hostproc\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006138 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-run\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.006383 kubelet[2618]: I0517 00:44:52.006157 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-xtables-lock\") pod \"cilium-72zsf\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " pod="kube-system/cilium-72zsf" May 17 00:44:52.021379 sshd[4455]: Accepted publickey for core from 139.178.68.195 port 54290 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:52.022847 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:52.028567 systemd[1]: Started session-25.scope. May 17 00:44:52.029279 systemd-logind[1721]: New session 25 of user core. 
May 17 00:44:52.205717 env[1729]: time="2025-05-17T00:44:52.205668126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72zsf,Uid:36a3bb78-cde5-4726-8498-69c08a353a51,Namespace:kube-system,Attempt:0,}" May 17 00:44:52.235008 env[1729]: time="2025-05-17T00:44:52.234755891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:52.235008 env[1729]: time="2025-05-17T00:44:52.234802184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:52.235008 env[1729]: time="2025-05-17T00:44:52.234816639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:52.235272 env[1729]: time="2025-05-17T00:44:52.235149091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed pid=4479 runtime=io.containerd.runc.v2 May 17 00:44:52.299067 env[1729]: time="2025-05-17T00:44:52.299026563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72zsf,Uid:36a3bb78-cde5-4726-8498-69c08a353a51,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\"" May 17 00:44:52.304370 env[1729]: time="2025-05-17T00:44:52.304323511Z" level=info msg="CreateContainer within sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:44:52.337302 sshd[4455]: pam_unix(sshd:session): session closed for user core May 17 00:44:52.342102 env[1729]: time="2025-05-17T00:44:52.342052970Z" level=info msg="CreateContainer within sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\"" May 17 00:44:52.342460 systemd-logind[1721]: Session 25 logged out. Waiting for processes to exit. May 17 00:44:52.343838 systemd[1]: sshd@24-172.31.31.72:22-139.178.68.195:54290.service: Deactivated successfully. May 17 00:44:52.345069 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:44:52.346574 systemd-logind[1721]: Removed session 25. May 17 00:44:52.352995 env[1729]: time="2025-05-17T00:44:52.352957103Z" level=info msg="StartContainer for \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\"" May 17 00:44:52.359237 systemd[1]: Started sshd@25-172.31.31.72:22-139.178.68.195:54292.service. May 17 00:44:52.533683 env[1729]: time="2025-05-17T00:44:52.532969742Z" level=info msg="StartContainer for \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\" returns successfully" May 17 00:44:52.562057 sshd[4515]: Accepted publickey for core from 139.178.68.195 port 54292 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:44:52.563369 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:52.574910 systemd[1]: Started session-26.scope. May 17 00:44:52.576109 systemd-logind[1721]: New session 26 of user core. 
May 17 00:44:52.635972 env[1729]: time="2025-05-17T00:44:52.635915215Z" level=info msg="shim disconnected" id=7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6 May 17 00:44:52.635972 env[1729]: time="2025-05-17T00:44:52.635971451Z" level=warning msg="cleaning up after shim disconnected" id=7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6 namespace=k8s.io May 17 00:44:52.636298 env[1729]: time="2025-05-17T00:44:52.635984539Z" level=info msg="cleaning up dead shim" May 17 00:44:52.648161 env[1729]: time="2025-05-17T00:44:52.648112286Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4567 runtime=io.containerd.runc.v2\n" May 17 00:44:52.737727 env[1729]: time="2025-05-17T00:44:52.737688766Z" level=info msg="StopPodSandbox for \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\"" May 17 00:44:52.737973 env[1729]: time="2025-05-17T00:44:52.737750998Z" level=info msg="Container to stop \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:52.801766 env[1729]: time="2025-05-17T00:44:52.801653261Z" level=info msg="shim disconnected" id=3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed May 17 00:44:52.802048 env[1729]: time="2025-05-17T00:44:52.802021660Z" level=warning msg="cleaning up after shim disconnected" id=3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed namespace=k8s.io May 17 00:44:52.802186 env[1729]: time="2025-05-17T00:44:52.802167980Z" level=info msg="cleaning up dead shim" May 17 00:44:52.813555 env[1729]: time="2025-05-17T00:44:52.813503842Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4605 runtime=io.containerd.runc.v2\n" May 17 00:44:52.814349 env[1729]: time="2025-05-17T00:44:52.814280699Z" level=info msg="TearDown 
network for sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" successfully" May 17 00:44:52.814581 env[1729]: time="2025-05-17T00:44:52.814534523Z" level=info msg="StopPodSandbox for \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" returns successfully" May 17 00:44:53.017654 kubelet[2618]: I0517 00:44:53.017604 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zqj\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-kube-api-access-22zqj\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.017654 kubelet[2618]: I0517 00:44:53.017645 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-net\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.017654 kubelet[2618]: I0517 00:44:53.017665 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-hostproc\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017696 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-config-path\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017716 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-ipsec-secrets\") pod 
\"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017730 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-etc-cni-netd\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017746 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cni-path\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017762 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-clustermesh-secrets\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018150 kubelet[2618]: I0517 00:44:53.017775 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-bpf-maps\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017792 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-lib-modules\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017805 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-run\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017821 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-xtables-lock\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017835 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-kernel\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017850 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-hubble-tls\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018338 kubelet[2618]: I0517 00:44:53.017862 2618 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-cgroup\") pod \"36a3bb78-cde5-4726-8498-69c08a353a51\" (UID: \"36a3bb78-cde5-4726-8498-69c08a353a51\") " May 17 00:44:53.018643 kubelet[2618]: I0517 00:44:53.017923 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.020908 kubelet[2618]: I0517 00:44:53.019698 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.020908 kubelet[2618]: I0517 00:44:53.019739 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-hostproc" (OuterVolumeSpecName: "hostproc") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.020908 kubelet[2618]: I0517 00:44:53.019845 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.020908 kubelet[2618]: I0517 00:44:53.019883 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.020908 kubelet[2618]: I0517 00:44:53.019898 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.021146 kubelet[2618]: I0517 00:44:53.019919 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.021146 kubelet[2618]: I0517 00:44:53.019932 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.021863 kubelet[2618]: I0517 00:44:53.021827 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:44:53.022928 kubelet[2618]: I0517 00:44:53.022895 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.023014 kubelet[2618]: I0517 00:44:53.022937 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cni-path" (OuterVolumeSpecName: "cni-path") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:44:53.023802 kubelet[2618]: I0517 00:44:53.023776 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-kube-api-access-22zqj" (OuterVolumeSpecName: "kube-api-access-22zqj") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "kube-api-access-22zqj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:44:53.025341 kubelet[2618]: I0517 00:44:53.025306 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:44:53.026574 kubelet[2618]: I0517 00:44:53.026548 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:44:53.026905 kubelet[2618]: I0517 00:44:53.026876 2618 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "36a3bb78-cde5-4726-8498-69c08a353a51" (UID: "36a3bb78-cde5-4726-8498-69c08a353a51"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:44:53.117752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed-shm.mount: Deactivated successfully. 
May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118059 2618 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-xtables-lock\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118089 2618 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-kernel\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118098 2618 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-hubble-tls\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118107 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-cgroup\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118117 2618 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zqj\" (UniqueName: \"kubernetes.io/projected/36a3bb78-cde5-4726-8498-69c08a353a51-kube-api-access-22zqj\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118125 2618 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-host-proc-sys-net\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 00:44:53.118134 2618 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-hostproc\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118643 kubelet[2618]: I0517 
00:44:53.118144 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-config-path\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.117906 systemd[1]: var-lib-kubelet-pods-36a3bb78\x2dcde5\x2d4726\x2d8498\x2d69c08a353a51-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22zqj.mount: Deactivated successfully. May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118152 2618 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-etc-cni-netd\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118160 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-ipsec-secrets\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118168 2618 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cni-path\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118175 2618 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-lib-modules\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118182 2618 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-cilium-run\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118190 2618 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/36a3bb78-cde5-4726-8498-69c08a353a51-clustermesh-secrets\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118941 kubelet[2618]: I0517 00:44:53.118198 2618 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36a3bb78-cde5-4726-8498-69c08a353a51-bpf-maps\") on node \"ip-172-31-31-72\" DevicePath \"\"" May 17 00:44:53.118011 systemd[1]: var-lib-kubelet-pods-36a3bb78\x2dcde5\x2d4726\x2d8498\x2d69c08a353a51-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:44:53.118115 systemd[1]: var-lib-kubelet-pods-36a3bb78\x2dcde5\x2d4726\x2d8498\x2d69c08a353a51-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:44:53.118201 systemd[1]: var-lib-kubelet-pods-36a3bb78\x2dcde5\x2d4726\x2d8498\x2d69c08a353a51-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:44:53.739639 kubelet[2618]: I0517 00:44:53.739603 2618 scope.go:117] "RemoveContainer" containerID="7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6" May 17 00:44:53.742079 env[1729]: time="2025-05-17T00:44:53.742035357Z" level=info msg="RemoveContainer for \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\"" May 17 00:44:53.747356 env[1729]: time="2025-05-17T00:44:53.747317599Z" level=info msg="RemoveContainer for \"7956b42198ec74a373b5619604de8687e7d324a5e944848b8d7e5dad8b26c9a6\" returns successfully" May 17 00:44:53.789687 kubelet[2618]: E0517 00:44:53.789653 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36a3bb78-cde5-4726-8498-69c08a353a51" containerName="mount-cgroup" May 17 00:44:53.789865 kubelet[2618]: I0517 00:44:53.789726 2618 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a3bb78-cde5-4726-8498-69c08a353a51" containerName="mount-cgroup" May 17 00:44:53.923642 kubelet[2618]: I0517 00:44:53.923599 2618 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-cilium-cgroup\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923642 kubelet[2618]: I0517 00:44:53.923640 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-lib-modules\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923659 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e95b36bc-725e-4149-a05f-15e085c720c8-cilium-ipsec-secrets\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923676 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e95b36bc-725e-4149-a05f-15e085c720c8-clustermesh-secrets\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923695 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-bpf-maps\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923711 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-hostproc\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923726 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-cni-path\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.923855 kubelet[2618]: I0517 00:44:53.923740 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-xtables-lock\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923754 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e95b36bc-725e-4149-a05f-15e085c720c8-cilium-config-path\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923768 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e95b36bc-725e-4149-a05f-15e085c720c8-hubble-tls\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923783 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-etc-cni-netd\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") 
" pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923798 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-host-proc-sys-kernel\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923813 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9g6\" (UniqueName: \"kubernetes.io/projected/e95b36bc-725e-4149-a05f-15e085c720c8-kube-api-access-zg9g6\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924029 kubelet[2618]: I0517 00:44:53.923828 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-cilium-run\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:53.924926 kubelet[2618]: I0517 00:44:53.923867 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e95b36bc-725e-4149-a05f-15e085c720c8-host-proc-sys-net\") pod \"cilium-7bb6j\" (UID: \"e95b36bc-725e-4149-a05f-15e085c720c8\") " pod="kube-system/cilium-7bb6j" May 17 00:44:54.097727 env[1729]: time="2025-05-17T00:44:54.097457182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bb6j,Uid:e95b36bc-725e-4149-a05f-15e085c720c8,Namespace:kube-system,Attempt:0,}" May 17 00:44:54.130007 env[1729]: time="2025-05-17T00:44:54.129924451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:54.130553 env[1729]: time="2025-05-17T00:44:54.129971486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:54.130553 env[1729]: time="2025-05-17T00:44:54.129987265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:54.130553 env[1729]: time="2025-05-17T00:44:54.130145325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6 pid=4633 runtime=io.containerd.runc.v2 May 17 00:44:54.164872 systemd[1]: run-containerd-runc-k8s.io-6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6-runc.mEEdiW.mount: Deactivated successfully. May 17 00:44:54.190980 env[1729]: time="2025-05-17T00:44:54.190933639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bb6j,Uid:e95b36bc-725e-4149-a05f-15e085c720c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\"" May 17 00:44:54.197603 env[1729]: time="2025-05-17T00:44:54.197545599Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:44:54.224437 env[1729]: time="2025-05-17T00:44:54.224354376Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25bf06df377cfb19a6fe651d1eedcf42026c37b99705ed265dc20136ae59ad23\"" May 17 00:44:54.226615 env[1729]: time="2025-05-17T00:44:54.226577610Z" level=info msg="StartContainer for \"25bf06df377cfb19a6fe651d1eedcf42026c37b99705ed265dc20136ae59ad23\"" May 17 
00:44:54.285035 env[1729]: time="2025-05-17T00:44:54.284096079Z" level=info msg="StartContainer for \"25bf06df377cfb19a6fe651d1eedcf42026c37b99705ed265dc20136ae59ad23\" returns successfully"
May 17 00:44:54.324258 env[1729]: time="2025-05-17T00:44:54.324217095Z" level=info msg="shim disconnected" id=25bf06df377cfb19a6fe651d1eedcf42026c37b99705ed265dc20136ae59ad23
May 17 00:44:54.324580 env[1729]: time="2025-05-17T00:44:54.324517652Z" level=warning msg="cleaning up after shim disconnected" id=25bf06df377cfb19a6fe651d1eedcf42026c37b99705ed265dc20136ae59ad23 namespace=k8s.io
May 17 00:44:54.324580 env[1729]: time="2025-05-17T00:44:54.324541758Z" level=info msg="cleaning up dead shim"
May 17 00:44:54.333826 env[1729]: time="2025-05-17T00:44:54.333779680Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4716 runtime=io.containerd.runc.v2\n"
May 17 00:44:54.748567 env[1729]: time="2025-05-17T00:44:54.748514261Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:44:54.769871 env[1729]: time="2025-05-17T00:44:54.769813869Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65b796c3e26ebbac6299dd100bfdce51dc5f9b1d4e4bc4697695e42dd75f9f35\""
May 17 00:44:54.770667 env[1729]: time="2025-05-17T00:44:54.770638982Z" level=info msg="StartContainer for \"65b796c3e26ebbac6299dd100bfdce51dc5f9b1d4e4bc4697695e42dd75f9f35\""
May 17 00:44:54.865033 env[1729]: time="2025-05-17T00:44:54.864987666Z" level=info msg="StartContainer for \"65b796c3e26ebbac6299dd100bfdce51dc5f9b1d4e4bc4697695e42dd75f9f35\" returns successfully"
May 17 00:44:54.903213 env[1729]: time="2025-05-17T00:44:54.903161211Z" level=info msg="shim disconnected" id=65b796c3e26ebbac6299dd100bfdce51dc5f9b1d4e4bc4697695e42dd75f9f35
May 17 00:44:54.903213 env[1729]: time="2025-05-17T00:44:54.903208863Z" level=warning msg="cleaning up after shim disconnected" id=65b796c3e26ebbac6299dd100bfdce51dc5f9b1d4e4bc4697695e42dd75f9f35 namespace=k8s.io
May 17 00:44:54.903213 env[1729]: time="2025-05-17T00:44:54.903218514Z" level=info msg="cleaning up dead shim"
May 17 00:44:54.911712 env[1729]: time="2025-05-17T00:44:54.911656140Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4781 runtime=io.containerd.runc.v2\n"
May 17 00:44:55.117803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646816724.mount: Deactivated successfully.
May 17 00:44:55.404939 kubelet[2618]: I0517 00:44:55.404767    2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a3bb78-cde5-4726-8498-69c08a353a51" path="/var/lib/kubelet/pods/36a3bb78-cde5-4726-8498-69c08a353a51/volumes"
May 17 00:44:55.545788 kubelet[2618]: E0517 00:44:55.545743    2618 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:44:55.749461 env[1729]: time="2025-05-17T00:44:55.749423627Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:44:55.806843 env[1729]: time="2025-05-17T00:44:55.806787477Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc\""
May 17 00:44:55.812974 env[1729]: time="2025-05-17T00:44:55.812931182Z" level=info msg="StartContainer for \"24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc\""
May 17 00:44:55.813818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283039509.mount: Deactivated successfully.
May 17 00:44:55.901636 env[1729]: time="2025-05-17T00:44:55.901581258Z" level=info msg="StartContainer for \"24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc\" returns successfully"
May 17 00:44:55.944354 env[1729]: time="2025-05-17T00:44:55.944294492Z" level=info msg="shim disconnected" id=24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc
May 17 00:44:55.944354 env[1729]: time="2025-05-17T00:44:55.944365205Z" level=warning msg="cleaning up after shim disconnected" id=24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc namespace=k8s.io
May 17 00:44:55.944791 env[1729]: time="2025-05-17T00:44:55.944379652Z" level=info msg="cleaning up dead shim"
May 17 00:44:55.953615 env[1729]: time="2025-05-17T00:44:55.953570785Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4842 runtime=io.containerd.runc.v2\n"
May 17 00:44:56.117846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24ecfac85a78ff5610ec17995916c335070087343cd5f9e3216ececbd9196fbc-rootfs.mount: Deactivated successfully.
May 17 00:44:56.760800 env[1729]: time="2025-05-17T00:44:56.760740455Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:44:56.793310 env[1729]: time="2025-05-17T00:44:56.793231604Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131\""
May 17 00:44:56.794130 env[1729]: time="2025-05-17T00:44:56.794089257Z" level=info msg="StartContainer for \"14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131\""
May 17 00:44:56.866596 env[1729]: time="2025-05-17T00:44:56.862868733Z" level=info msg="StartContainer for \"14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131\" returns successfully"
May 17 00:44:56.900793 env[1729]: time="2025-05-17T00:44:56.900557487Z" level=info msg="shim disconnected" id=14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131
May 17 00:44:56.900793 env[1729]: time="2025-05-17T00:44:56.900708974Z" level=warning msg="cleaning up after shim disconnected" id=14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131 namespace=k8s.io
May 17 00:44:56.900793 env[1729]: time="2025-05-17T00:44:56.900728139Z" level=info msg="cleaning up dead shim"
May 17 00:44:56.910327 env[1729]: time="2025-05-17T00:44:56.910281080Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4899 runtime=io.containerd.runc.v2\n"
May 17 00:44:57.118318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14c58e1245cf22f3480904e67ab4848f4596974fba1623c5c5f001372da13131-rootfs.mount: Deactivated successfully.
May 17 00:44:57.378921 kubelet[2618]: I0517 00:44:57.378776    2618 setters.go:600] "Node became not ready" node="ip-172-31-31-72" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:44:57Z","lastTransitionTime":"2025-05-17T00:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:44:57.769181 env[1729]: time="2025-05-17T00:44:57.767796153Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:44:57.799161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005142176.mount: Deactivated successfully.
May 17 00:44:57.809636 env[1729]: time="2025-05-17T00:44:57.809581532Z" level=info msg="CreateContainer within sandbox \"6ede9421118cd6e8a95e7f7e69bd11a1d9f609db3fb2303ef154102e5c46b8b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655\""
May 17 00:44:57.810329 env[1729]: time="2025-05-17T00:44:57.810300226Z" level=info msg="StartContainer for \"72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655\""
May 17 00:44:57.877300 env[1729]: time="2025-05-17T00:44:57.874687992Z" level=info msg="StartContainer for \"72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655\" returns successfully"
May 17 00:44:58.606441 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:44:58.797088 kubelet[2618]: I0517 00:44:58.797024    2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7bb6j" podStartSLOduration=5.797005162 podStartE2EDuration="5.797005162s" podCreationTimestamp="2025-05-17 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:58.796877332 +0000 UTC m=+103.698174363" watchObservedRunningTime="2025-05-17 00:44:58.797005162 +0000 UTC m=+103.698302189"
May 17 00:45:01.319786 systemd[1]: run-containerd-runc-k8s.io-72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655-runc.uz4iuv.mount: Deactivated successfully.
May 17 00:45:02.571572 systemd-networkd[1409]: lxc_health: Link UP
May 17 00:45:02.581730 (udev-worker)[5486]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:45:02.597244 systemd-networkd[1409]: lxc_health: Gained carrier
May 17 00:45:02.597553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:45:03.616717 systemd[1]: run-containerd-runc-k8s.io-72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655-runc.TKPHaL.mount: Deactivated successfully.
May 17 00:45:04.190308 systemd-networkd[1409]: lxc_health: Gained IPv6LL
May 17 00:45:05.923349 systemd[1]: run-containerd-runc-k8s.io-72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655-runc.2s647w.mount: Deactivated successfully.
May 17 00:45:08.123756 systemd[1]: run-containerd-runc-k8s.io-72b49aa76cf3827e990a4ed3b48fd4203aad4b70dd992dc54aef246679007655-runc.3lsfpu.mount: Deactivated successfully.
May 17 00:45:08.338796 sshd[4515]: pam_unix(sshd:session): session closed for user core
May 17 00:45:08.346863 systemd[1]: sshd@25-172.31.31.72:22-139.178.68.195:54292.service: Deactivated successfully.
May 17 00:45:08.348857 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:45:08.350528 systemd-logind[1721]: Session 26 logged out. Waiting for processes to exit.
May 17 00:45:08.351818 systemd-logind[1721]: Removed session 26.
May 17 00:45:15.382510 env[1729]: time="2025-05-17T00:45:15.382453814Z" level=info msg="StopPodSandbox for \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\""
May 17 00:45:15.382953 env[1729]: time="2025-05-17T00:45:15.382552433Z" level=info msg="TearDown network for sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" successfully"
May 17 00:45:15.382953 env[1729]: time="2025-05-17T00:45:15.382585107Z" level=info msg="StopPodSandbox for \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" returns successfully"
May 17 00:45:15.383355 env[1729]: time="2025-05-17T00:45:15.383324495Z" level=info msg="RemovePodSandbox for \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\""
May 17 00:45:15.383471 env[1729]: time="2025-05-17T00:45:15.383356810Z" level=info msg="Forcibly stopping sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\""
May 17 00:45:15.383471 env[1729]: time="2025-05-17T00:45:15.383437867Z" level=info msg="TearDown network for sandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" successfully"
May 17 00:45:15.392308 env[1729]: time="2025-05-17T00:45:15.392236784Z" level=info msg="RemovePodSandbox \"3aace6843bb43af417602ad8d76d625b24c127296fb1d8deddf2a7fe4d5c94ed\" returns successfully"
May 17 00:45:15.393058 env[1729]: time="2025-05-17T00:45:15.393026120Z" level=info msg="StopPodSandbox for \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\""
May 17 00:45:15.393165 env[1729]: time="2025-05-17T00:45:15.393128596Z" level=info msg="TearDown network for sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" successfully"
May 17 00:45:15.393213 env[1729]: time="2025-05-17T00:45:15.393166151Z" level=info msg="StopPodSandbox for \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" returns successfully"
May 17 00:45:15.393526 env[1729]: time="2025-05-17T00:45:15.393501797Z" level=info msg="RemovePodSandbox for \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\""
May 17 00:45:15.393590 env[1729]: time="2025-05-17T00:45:15.393530366Z" level=info msg="Forcibly stopping sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\""
May 17 00:45:15.393621 env[1729]: time="2025-05-17T00:45:15.393593079Z" level=info msg="TearDown network for sandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" successfully"
May 17 00:45:15.399364 env[1729]: time="2025-05-17T00:45:15.399311346Z" level=info msg="RemovePodSandbox \"ee59bcb7d5b28ecd9f512f406d0d0511a8ed5eb190d9ed98ceaec6626bcee495\" returns successfully"
May 17 00:45:15.399940 env[1729]: time="2025-05-17T00:45:15.399906856Z" level=info msg="StopPodSandbox for \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\""
May 17 00:45:15.400057 env[1729]: time="2025-05-17T00:45:15.400002619Z" level=info msg="TearDown network for sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" successfully"
May 17 00:45:15.400057 env[1729]: time="2025-05-17T00:45:15.400048685Z" level=info msg="StopPodSandbox for \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" returns successfully"
May 17 00:45:15.400593 env[1729]: time="2025-05-17T00:45:15.400561799Z" level=info msg="RemovePodSandbox for \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\""
May 17 00:45:15.400819 env[1729]: time="2025-05-17T00:45:15.400765156Z" level=info msg="Forcibly stopping sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\""
May 17 00:45:15.400914 env[1729]: time="2025-05-17T00:45:15.400870060Z" level=info msg="TearDown network for sandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" successfully"
May 17 00:45:15.408185 env[1729]: time="2025-05-17T00:45:15.408128469Z" level=info msg="RemovePodSandbox \"fa821ece0992f0604be3c7cba273a75593dc251b7a3f151a5da0ed46b522c43d\" returns successfully"
May 17 00:45:18.446925 amazon-ssm-agent[1805]: 2025-05-17 00:45:18 INFO [HealthCheck] HealthCheck reporting agent health.
May 17 00:45:23.555385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff-rootfs.mount: Deactivated successfully.
May 17 00:45:23.591548 env[1729]: time="2025-05-17T00:45:23.591498143Z" level=info msg="shim disconnected" id=621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff
May 17 00:45:23.591987 env[1729]: time="2025-05-17T00:45:23.591547393Z" level=warning msg="cleaning up after shim disconnected" id=621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff namespace=k8s.io
May 17 00:45:23.591987 env[1729]: time="2025-05-17T00:45:23.591586140Z" level=info msg="cleaning up dead shim"
May 17 00:45:23.600132 env[1729]: time="2025-05-17T00:45:23.600075701Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5606 runtime=io.containerd.runc.v2\n"
May 17 00:45:23.841536 kubelet[2618]: I0517 00:45:23.841040    2618 scope.go:117] "RemoveContainer" containerID="621d0ef4b3dca0b653ce2fcf394ba10ad9eb212eba0302fd9a53d8b7c0faf6ff"
May 17 00:45:23.844334 env[1729]: time="2025-05-17T00:45:23.844290971Z" level=info msg="CreateContainer within sandbox \"c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:45:23.869693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572381989.mount: Deactivated successfully.
May 17 00:45:23.886540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008596411.mount: Deactivated successfully.
May 17 00:45:23.898515 env[1729]: time="2025-05-17T00:45:23.898442499Z" level=info msg="CreateContainer within sandbox \"c31fff339df08696bd1baf46fec4249ec5b4372614afb5c187e791dab9f277f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b6ba0ba66cda5675db65aa80245ed4387483f652b61386b18befd25f0ce4f86e\""
May 17 00:45:23.899022 env[1729]: time="2025-05-17T00:45:23.898992154Z" level=info msg="StartContainer for \"b6ba0ba66cda5675db65aa80245ed4387483f652b61386b18befd25f0ce4f86e\""
May 17 00:45:23.981623 env[1729]: time="2025-05-17T00:45:23.981567300Z" level=info msg="StartContainer for \"b6ba0ba66cda5675db65aa80245ed4387483f652b61386b18befd25f0ce4f86e\" returns successfully"
May 17 00:45:27.607615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794-rootfs.mount: Deactivated successfully.
May 17 00:45:27.630536 env[1729]: time="2025-05-17T00:45:27.630489896Z" level=info msg="shim disconnected" id=500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794
May 17 00:45:27.630536 env[1729]: time="2025-05-17T00:45:27.630535551Z" level=warning msg="cleaning up after shim disconnected" id=500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794 namespace=k8s.io
May 17 00:45:27.630536 env[1729]: time="2025-05-17T00:45:27.630544513Z" level=info msg="cleaning up dead shim"
May 17 00:45:27.639873 env[1729]: time="2025-05-17T00:45:27.639828376Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5663 runtime=io.containerd.runc.v2\n"
May 17 00:45:27.738167 kubelet[2618]: E0517 00:45:27.738107    2618 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 17 00:45:27.851235 kubelet[2618]: I0517 00:45:27.851191    2618 scope.go:117] "RemoveContainer" containerID="500d404091d652823adfb700df0edb5afc13b63f380f4107c7f5e70d36cde794"
May 17 00:45:27.853785 env[1729]: time="2025-05-17T00:45:27.853711334Z" level=info msg="CreateContainer within sandbox \"0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:45:27.878408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755398522.mount: Deactivated successfully.
May 17 00:45:27.891617 env[1729]: time="2025-05-17T00:45:27.891479287Z" level=info msg="CreateContainer within sandbox \"0a3bb49ae3591298f5d3dd97c21df72ae0cbce650f0711673ab96b935d4c4967\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"23f4c05e3df3ca39d6485a0cec770c8825f65b5eafb7aa0c71947a107b7a78ce\""
May 17 00:45:27.892301 env[1729]: time="2025-05-17T00:45:27.891999043Z" level=info msg="StartContainer for \"23f4c05e3df3ca39d6485a0cec770c8825f65b5eafb7aa0c71947a107b7a78ce\""
May 17 00:45:27.972748 env[1729]: time="2025-05-17T00:45:27.972692402Z" level=info msg="StartContainer for \"23f4c05e3df3ca39d6485a0cec770c8825f65b5eafb7aa0c71947a107b7a78ce\" returns successfully"
May 17 00:45:37.739078 kubelet[2618]: E0517 00:45:37.738985    2618 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-72?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"