Jul 12 00:23:00.001674 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 12 00:23:00.001729 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025 Jul 12 00:23:00.001755 kernel: efi: EFI v2.70 by EDK II Jul 12 00:23:00.001770 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98 Jul 12 00:23:00.001784 kernel: ACPI: Early table checksum verification disabled Jul 12 00:23:00.001797 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 12 00:23:00.001813 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 12 00:23:00.001827 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 12 00:23:00.001841 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 12 00:23:00.001854 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 12 00:23:00.001873 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 12 00:23:00.001887 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 12 00:23:00.001900 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 12 00:23:00.001914 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 12 00:23:00.001930 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 12 00:23:00.001949 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 12 00:23:00.001963 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 12 00:23:00.001977 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 12 00:23:00.001991 kernel: printk: bootconsole [uart0] enabled Jul 12 00:23:00.002005 kernel: NUMA: Failed to initialise from firmware Jul 12 00:23:00.002019 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 12 00:23:00.002034 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff] Jul 12 00:23:00.002048 kernel: Zone ranges: Jul 12 00:23:00.002063 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 12 00:23:00.002077 kernel: DMA32 empty Jul 12 00:23:00.002091 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 12 00:23:00.002109 kernel: Movable zone start for each node Jul 12 00:23:00.002123 kernel: Early memory node ranges Jul 12 00:23:00.002138 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 12 00:23:00.002152 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 12 00:23:00.002167 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 12 00:23:00.002181 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 12 00:23:00.002196 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 12 00:23:00.002210 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 12 00:23:00.002224 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 12 00:23:00.002239 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 12 00:23:00.002253 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 12 00:23:00.002267 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 12 
00:23:00.002285 kernel: psci: probing for conduit method from ACPI. Jul 12 00:23:00.002300 kernel: psci: PSCIv1.0 detected in firmware. Jul 12 00:23:00.002321 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:23:00.002336 kernel: psci: Trusted OS migration not required Jul 12 00:23:00.002351 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:23:00.002371 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 12 00:23:00.002386 kernel: ACPI: SRAT not present Jul 12 00:23:00.002401 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 12 00:23:00.002417 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 12 00:23:00.002432 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:23:00.002447 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:23:00.002462 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:23:00.002477 kernel: CPU features: detected: Spectre-v2 Jul 12 00:23:00.002492 kernel: CPU features: detected: Spectre-v3a Jul 12 00:23:00.002507 kernel: CPU features: detected: Spectre-BHB Jul 12 00:23:00.002522 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:23:00.003485 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:23:00.003508 kernel: CPU features: detected: ARM erratum 1742098 Jul 12 00:23:00.003524 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 12 00:23:00.003568 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 12 00:23:00.003584 kernel: Policy zone: Normal Jul 12 00:23:00.003602 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:23:00.003619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:23:00.003635 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:23:00.003650 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:23:00.003665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:23:00.003686 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 12 00:23:00.003703 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved) Jul 12 00:23:00.003719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:23:00.003734 kernel: trace event string verifier disabled Jul 12 00:23:00.003749 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:23:00.003764 kernel: rcu: RCU event tracing is enabled. Jul 12 00:23:00.003780 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:23:00.003795 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:23:00.003811 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:23:00.003826 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
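An aside on the kernel command line recorded above: it is a flat string of bare switches and key=value options, and a key may appear more than once (console= does here). A minimal Python sketch of how such a string splits, using an excerpt of the cmdline from this boot:

    # Split a kernel command line (e.g. the contents of /proc/cmdline) into
    # bare flags and key=value options. Excerpt of the cmdline logged above.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
        "mount.usrflags=ro root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 "
        "earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 "
        "net.ifnames=0"
    )

    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            options[key] = value          # last occurrence wins in this sketch
        else:
            flags.append(token)           # bare switches such as 'earlycon'

    print(flags)                          # ['earlycon']
    print(options["root"])                # LABEL=ROOT
    print(options["console"])             # ttyS0,115200n8 (tty1 was overridden)

The real kernel keeps every console= entry (both tty1 and ttyS0 become consoles later in this log); collapsing duplicates into a dict is a simplification of this sketch.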
Jul 12 00:23:00.003842 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:23:00.003857 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:23:00.003877 kernel: GICv3: 96 SPIs implemented Jul 12 00:23:00.003892 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:23:00.003907 kernel: GICv3: Distributor has no Range Selector support Jul 12 00:23:00.003922 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:23:00.003937 kernel: GICv3: 16 PPIs implemented Jul 12 00:23:00.003952 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 12 00:23:00.003967 kernel: ACPI: SRAT not present Jul 12 00:23:00.003982 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 12 00:23:00.003997 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:23:00.004013 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:23:00.004028 kernel: GICv3: using LPI property table @0x00000004000b0000 Jul 12 00:23:00.004048 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 12 00:23:00.004063 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Jul 12 00:23:00.004078 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 12 00:23:00.004094 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 12 00:23:00.004109 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 12 00:23:00.004124 kernel: Console: colour dummy device 80x25 Jul 12 00:23:00.004140 kernel: printk: console [tty1] enabled Jul 12 00:23:00.004155 kernel: ACPI: Core revision 20210730 Jul 12 00:23:00.004171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 12 00:23:00.004187 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:23:00.004207 kernel: LSM: Security Framework initializing Jul 12 00:23:00.004223 kernel: SELinux: Initializing. Jul 12 00:23:00.004238 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:23:00.004254 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:23:00.004270 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:23:00.004285 kernel: Platform MSI: ITS@0x10080000 domain created Jul 12 00:23:00.004301 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 12 00:23:00.004316 kernel: Remapping and enabling EFI services. Jul 12 00:23:00.004331 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:23:00.004347 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:23:00.004366 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 12 00:23:00.004382 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Jul 12 00:23:00.004397 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 12 00:23:00.004413 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:23:00.004428 kernel: SMP: Total of 2 processors activated. 
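The BogoMIPS value above is not measured on this machine; with an architected timer the delay-loop calibration is skipped and the figure is derived from the timer rate. A quick check of the numbers in the log, assuming the usual CONFIG_HZ=1000 tick rate and the conventional lpj-to-BogoMIPS scaling of lpj / (500000 / HZ):

    # Worked check of "166.66 BogoMIPS (lpj=83333)" against the 83.33 MHz timer.
    timer_hz = 83_333_333      # assumed exact rate behind "83.33MHz" in the log
    HZ = 1000                  # assumed kernel tick rate (CONFIG_HZ=1000)

    lpj = timer_hz // HZ       # loops-per-jiffy derived from the timer frequency
    bogomips = lpj / (500_000 / HZ)

    print(lpj)                 # 83333
    print(f"{bogomips:.2f}")   # 166.67 (the kernel prints 166.66, truncating)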
Jul 12 00:23:00.004444 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:23:00.004459 kernel: CPU features: detected: 32-bit EL1 Support Jul 12 00:23:00.004475 kernel: CPU features: detected: CRC32 instructions Jul 12 00:23:00.007276 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:23:00.007326 kernel: alternatives: patching kernel code Jul 12 00:23:00.007344 kernel: devtmpfs: initialized Jul 12 00:23:00.007372 kernel: KASLR disabled due to lack of seed Jul 12 00:23:00.007393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:23:00.007410 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:23:00.007426 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:23:00.007442 kernel: SMBIOS 3.0.0 present. Jul 12 00:23:00.007458 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 12 00:23:00.007474 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:23:00.007491 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:23:00.007508 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:23:00.014990 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:23:00.015036 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:23:00.015054 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Jul 12 00:23:00.015072 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:23:00.015089 kernel: cpuidle: using governor menu Jul 12 00:23:00.015115 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:23:00.015132 kernel: ASID allocator initialised with 32768 entries Jul 12 00:23:00.015149 kernel: ACPI: bus type PCI registered Jul 12 00:23:00.015165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:23:00.015181 kernel: Serial: AMBA PL011 UART driver Jul 12 00:23:00.015198 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:23:00.015215 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:23:00.015231 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:23:00.015247 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:23:00.015268 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 00:23:00.015286 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:23:00.015302 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:23:00.015318 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:23:00.015334 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:23:00.015351 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 12 00:23:00.015367 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 12 00:23:00.015384 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 12 00:23:00.015400 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:23:00.015417 kernel: ACPI: Interpreter enabled Jul 12 00:23:00.015437 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:23:00.015454 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:23:00.015470 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 12 00:23:00.015777 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:23:00.015973 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:23:00.016164 kernel: acpi PNP0A08:00: _OSC: 
OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:23:00.016355 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 12 00:23:00.021679 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 12 00:23:00.021725 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 12 00:23:00.021743 kernel: acpiphp: Slot [1] registered Jul 12 00:23:00.021760 kernel: acpiphp: Slot [2] registered Jul 12 00:23:00.021777 kernel: acpiphp: Slot [3] registered Jul 12 00:23:00.021793 kernel: acpiphp: Slot [4] registered Jul 12 00:23:00.021809 kernel: acpiphp: Slot [5] registered Jul 12 00:23:00.021825 kernel: acpiphp: Slot [6] registered Jul 12 00:23:00.021841 kernel: acpiphp: Slot [7] registered Jul 12 00:23:00.021865 kernel: acpiphp: Slot [8] registered Jul 12 00:23:00.021881 kernel: acpiphp: Slot [9] registered Jul 12 00:23:00.021897 kernel: acpiphp: Slot [10] registered Jul 12 00:23:00.021913 kernel: acpiphp: Slot [11] registered Jul 12 00:23:00.021929 kernel: acpiphp: Slot [12] registered Jul 12 00:23:00.021945 kernel: acpiphp: Slot [13] registered Jul 12 00:23:00.021961 kernel: acpiphp: Slot [14] registered Jul 12 00:23:00.021977 kernel: acpiphp: Slot [15] registered Jul 12 00:23:00.021992 kernel: acpiphp: Slot [16] registered Jul 12 00:23:00.022013 kernel: acpiphp: Slot [17] registered Jul 12 00:23:00.022029 kernel: acpiphp: Slot [18] registered Jul 12 00:23:00.022045 kernel: acpiphp: Slot [19] registered Jul 12 00:23:00.022061 kernel: acpiphp: Slot [20] registered Jul 12 00:23:00.022077 kernel: acpiphp: Slot [21] registered Jul 12 00:23:00.022092 kernel: acpiphp: Slot [22] registered Jul 12 00:23:00.022109 kernel: acpiphp: Slot [23] registered Jul 12 00:23:00.022124 kernel: acpiphp: Slot [24] registered Jul 12 00:23:00.022140 kernel: acpiphp: Slot [25] registered Jul 12 00:23:00.022156 kernel: acpiphp: Slot [26] registered Jul 12 00:23:00.022176 kernel: acpiphp: Slot [27] registered Jul 12 00:23:00.022192 kernel: acpiphp: Slot [28] registered Jul 12 00:23:00.022208 kernel: acpiphp: Slot [29] registered Jul 12 00:23:00.022223 kernel: acpiphp: Slot [30] registered Jul 12 00:23:00.022239 kernel: acpiphp: Slot [31] registered Jul 12 00:23:00.022255 kernel: PCI host bridge to bus 0000:00 Jul 12 00:23:00.022467 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 12 00:23:00.022681 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:23:00.022863 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 12 00:23:00.023788 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 12 00:23:00.024023 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 12 00:23:00.024234 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 12 00:23:00.027635 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 12 00:23:00.027903 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 12 00:23:00.028105 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 12 00:23:00.028295 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:23:00.028500 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 12 00:23:00.028730 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 12 00:23:00.028924 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 12 00:23:00.029113 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jul 
12 00:23:00.029302 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:23:00.029497 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 12 00:23:00.029751 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 12 00:23:00.029954 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 12 00:23:00.030146 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 12 00:23:00.030345 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 12 00:23:00.030519 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 12 00:23:00.036711 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:23:00.047776 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 12 00:23:00.047820 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:23:00.047838 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:23:00.047855 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:23:00.047872 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:23:00.047888 kernel: iommu: Default domain type: Translated Jul 12 00:23:00.047904 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:23:00.047921 kernel: vgaarb: loaded Jul 12 00:23:00.047938 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 12 00:23:00.047963 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 12 00:23:00.047979 kernel: PTP clock support registered Jul 12 00:23:00.047995 kernel: Registered efivars operations Jul 12 00:23:00.048011 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:23:00.048027 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:23:00.048043 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:23:00.048059 kernel: pnp: PnP ACPI init Jul 12 00:23:00.048256 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 12 00:23:00.048285 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:23:00.048303 kernel: NET: Registered PF_INET protocol family Jul 12 00:23:00.048319 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:23:00.048336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:23:00.048352 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:23:00.048369 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:23:00.048385 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 12 00:23:00.048402 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:23:00.048418 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:23:00.048438 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:23:00.048454 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:23:00.048470 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:23:00.048487 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 12 00:23:00.048503 kernel: kvm [1]: HYP mode not available Jul 12 00:23:00.048519 kernel: Initialise system trusted keyrings Jul 12 00:23:00.048557 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:23:00.048577 kernel: Key type asymmetric registered Jul 12 00:23:00.050604 kernel: Asymmetric 
key parser 'x509' registered Jul 12 00:23:00.050636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 12 00:23:00.050654 kernel: io scheduler mq-deadline registered Jul 12 00:23:00.050670 kernel: io scheduler kyber registered Jul 12 00:23:00.050687 kernel: io scheduler bfq registered Jul 12 00:23:00.050916 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 12 00:23:00.050942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:23:00.050959 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:23:00.050976 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 12 00:23:00.050992 kernel: ACPI: button: Sleep Button [SLPB] Jul 12 00:23:00.051014 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:23:00.051031 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:23:00.051227 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 12 00:23:00.051251 kernel: printk: console [ttyS0] disabled Jul 12 00:23:00.051268 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 12 00:23:00.051284 kernel: printk: console [ttyS0] enabled Jul 12 00:23:00.051300 kernel: printk: bootconsole [uart0] disabled Jul 12 00:23:00.051316 kernel: thunder_xcv, ver 1.0 Jul 12 00:23:00.051332 kernel: thunder_bgx, ver 1.0 Jul 12 00:23:00.051354 kernel: nicpf, ver 1.0 Jul 12 00:23:00.051370 kernel: nicvf, ver 1.0 Jul 12 00:23:00.051599 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:23:00.051785 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:22:59 UTC (1752279779) Jul 12 00:23:00.051808 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:23:00.051825 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:23:00.051842 kernel: Segment Routing with IPv6 Jul 12 00:23:00.051858 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:23:00.051880 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:23:00.051896 kernel: Key type dns_resolver registered Jul 12 00:23:00.051912 kernel: registered taskstats version 1 Jul 12 00:23:00.051928 kernel: Loading compiled-in X.509 certificates Jul 12 00:23:00.051945 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a' Jul 12 00:23:00.051961 kernel: Key type .fscrypt registered Jul 12 00:23:00.051978 kernel: Key type fscrypt-provisioning registered Jul 12 00:23:00.051993 kernel: ima: No TPM chip found, activating TPM-bypass! 
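The rtc-efi line above states the same instant twice, as a calendar date and as a Unix timestamp (1752279779); the two are easy to cross-check:

    # Cross-check the rtc-efi line: 1752279779 seconds since the epoch
    # should be 2025-07-12T00:22:59 UTC.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1752279779, tz=timezone.utc).isoformat())
    # -> 2025-07-12T00:22:59+00:00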
Jul 12 00:23:00.052010 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:23:00.052783 kernel: ima: No architecture policies found Jul 12 00:23:00.052802 kernel: clk: Disabling unused clocks Jul 12 00:23:00.052819 kernel: Freeing unused kernel memory: 36416K Jul 12 00:23:00.052835 kernel: Run /init as init process Jul 12 00:23:00.052851 kernel: with arguments: Jul 12 00:23:00.052867 kernel: /init Jul 12 00:23:00.052883 kernel: with environment: Jul 12 00:23:00.052899 kernel: HOME=/ Jul 12 00:23:00.052914 kernel: TERM=linux Jul 12 00:23:00.052938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:23:00.052961 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:23:00.052982 systemd[1]: Detected virtualization amazon. Jul 12 00:23:00.053001 systemd[1]: Detected architecture arm64. Jul 12 00:23:00.053018 systemd[1]: Running in initrd. Jul 12 00:23:00.053035 systemd[1]: No hostname configured, using default hostname. Jul 12 00:23:00.053052 systemd[1]: Hostname set to . Jul 12 00:23:00.053075 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:23:00.053093 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:23:00.053110 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:23:00.053127 systemd[1]: Reached target cryptsetup.target. Jul 12 00:23:00.053145 systemd[1]: Reached target paths.target. Jul 12 00:23:00.053162 systemd[1]: Reached target slices.target. Jul 12 00:23:00.053180 systemd[1]: Reached target swap.target. Jul 12 00:23:00.053197 systemd[1]: Reached target timers.target. Jul 12 00:23:00.053219 systemd[1]: Listening on iscsid.socket. Jul 12 00:23:00.053237 systemd[1]: Listening on iscsiuio.socket. Jul 12 00:23:00.053255 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:23:00.053272 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:23:00.053290 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:23:00.053308 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:23:00.053326 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:23:00.053343 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:23:00.053365 systemd[1]: Reached target sockets.target. Jul 12 00:23:00.053383 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:23:00.053400 systemd[1]: Finished network-cleanup.service. Jul 12 00:23:00.053418 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:23:00.053435 systemd[1]: Starting systemd-journald.service... Jul 12 00:23:00.053453 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:23:00.053471 systemd[1]: Starting systemd-resolved.service... Jul 12 00:23:00.053489 systemd[1]: Starting systemd-vconsole-setup.service... Jul 12 00:23:00.053506 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:23:00.053581 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:23:00.053605 systemd[1]: Finished systemd-vconsole-setup.service. Jul 12 00:23:00.053645 systemd[1]: Starting dracut-cmdline-ask.service... Jul 12 00:23:00.053663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:23:00.053681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 12 00:23:00.053699 kernel: audit: type=1130 audit(1752279780.036:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.053721 systemd-journald[310]: Journal started Jul 12 00:23:00.053830 systemd-journald[310]: Runtime Journal (/run/log/journal/ec212b89020011193a90a4d620c8649a) is 8.0M, max 75.4M, 67.4M free. Jul 12 00:23:00.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.007258 systemd-modules-load[311]: Inserted module 'overlay' Jul 12 00:23:00.057854 systemd[1]: Started systemd-journald.service. Jul 12 00:23:00.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.077565 kernel: audit: type=1130 audit(1752279780.064:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.084509 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:23:00.090920 systemd[1]: Finished dracut-cmdline-ask.service. Jul 12 00:23:00.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.103495 systemd[1]: Starting dracut-cmdline.service... Jul 12 00:23:00.110569 kernel: audit: type=1130 audit(1752279780.091:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.115297 kernel: Bridge firewalling registered Jul 12 00:23:00.114448 systemd-modules-load[311]: Inserted module 'br_netfilter' Jul 12 00:23:00.118788 systemd-resolved[312]: Positive Trust Anchors: Jul 12 00:23:00.120903 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:23:00.121084 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:23:00.153569 kernel: SCSI subsystem initialized Jul 12 00:23:00.170864 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 12 00:23:00.170960 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:23:00.176155 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 12 00:23:00.176255 dracut-cmdline[327]: dracut-dracut-053 Jul 12 00:23:00.182845 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:23:00.199448 systemd-modules-load[311]: Inserted module 'dm_multipath' Jul 12 00:23:00.202974 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:23:00.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.212919 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:23:00.232568 kernel: audit: type=1130 audit(1752279780.208:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.240452 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:23:00.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.255588 kernel: audit: type=1130 audit(1752279780.243:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.355563 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:23:00.377574 kernel: iscsi: registered transport (tcp) Jul 12 00:23:00.405700 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:23:00.405771 kernel: QLogic iSCSI HBA Driver Jul 12 00:23:00.576352 systemd-resolved[312]: Defaulting to hostname 'linux'. Jul 12 00:23:00.579154 kernel: random: crng init done Jul 12 00:23:00.580171 systemd[1]: Started systemd-resolved.service. Jul 12 00:23:00.593247 kernel: audit: type=1130 audit(1752279780.581:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.582274 systemd[1]: Reached target nss-lookup.target. Jul 12 00:23:00.612683 systemd[1]: Finished dracut-cmdline.service. Jul 12 00:23:00.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:00.625837 kernel: audit: type=1130 audit(1752279780.611:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:00.624197 systemd[1]: Starting dracut-pre-udev.service... Jul 12 00:23:00.691590 kernel: raid6: neonx8 gen() 6402 MB/s Jul 12 00:23:00.709565 kernel: raid6: neonx8 xor() 4590 MB/s Jul 12 00:23:00.727565 kernel: raid6: neonx4 gen() 6587 MB/s Jul 12 00:23:00.745565 kernel: raid6: neonx4 xor() 4662 MB/s Jul 12 00:23:00.763564 kernel: raid6: neonx2 gen() 5805 MB/s Jul 12 00:23:00.781564 kernel: raid6: neonx2 xor() 4392 MB/s Jul 12 00:23:00.799564 kernel: raid6: neonx1 gen() 4505 MB/s Jul 12 00:23:00.817568 kernel: raid6: neonx1 xor() 3555 MB/s Jul 12 00:23:00.835566 kernel: raid6: int64x8 gen() 3418 MB/s Jul 12 00:23:00.853571 kernel: raid6: int64x8 xor() 2052 MB/s Jul 12 00:23:00.871570 kernel: raid6: int64x4 gen() 3842 MB/s Jul 12 00:23:00.889567 kernel: raid6: int64x4 xor() 2163 MB/s Jul 12 00:23:00.907569 kernel: raid6: int64x2 gen() 3612 MB/s Jul 12 00:23:00.925575 kernel: raid6: int64x2 xor() 1922 MB/s Jul 12 00:23:00.943575 kernel: raid6: int64x1 gen() 2742 MB/s Jul 12 00:23:00.963146 kernel: raid6: int64x1 xor() 1437 MB/s Jul 12 00:23:00.963186 kernel: raid6: using algorithm neonx4 gen() 6587 MB/s Jul 12 00:23:00.963210 kernel: raid6: .... xor() 4662 MB/s, rmw enabled Jul 12 00:23:00.965006 kernel: raid6: using neon recovery algorithm Jul 12 00:23:00.985764 kernel: xor: measuring software checksum speed Jul 12 00:23:00.985830 kernel: 8regs : 9307 MB/sec Jul 12 00:23:00.987691 kernel: 32regs : 11098 MB/sec Jul 12 00:23:00.989862 kernel: arm64_neon : 9432 MB/sec Jul 12 00:23:00.989892 kernel: xor: using function: 32regs (11098 MB/sec) Jul 12 00:23:01.089578 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 12 00:23:01.107310 systemd[1]: Finished dracut-pre-udev.service. Jul 12 00:23:01.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:01.118568 kernel: audit: type=1130 audit(1752279781.108:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:01.117000 audit: BPF prog-id=7 op=LOAD Jul 12 00:23:01.119768 systemd[1]: Starting systemd-udevd.service... Jul 12 00:23:01.125941 kernel: audit: type=1334 audit(1752279781.117:10): prog-id=7 op=LOAD Jul 12 00:23:01.117000 audit: BPF prog-id=8 op=LOAD Jul 12 00:23:01.151143 systemd-udevd[509]: Using default interface naming scheme 'v252'. Jul 12 00:23:01.162294 systemd[1]: Started systemd-udevd.service. Jul 12 00:23:01.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:01.186237 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:23:01.198956 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation Jul 12 00:23:01.260349 systemd[1]: Finished dracut-pre-trigger.service. Jul 12 00:23:01.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:01.263636 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:23:01.361562 systemd[1]: Finished systemd-udev-trigger.service. 
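The raid6 and xor lines above are the kernel benchmarking every available implementation and keeping the fastest, which is why the summary settles on neonx4 for gen() and 32regs for checksumming. A tiny sketch of that selection using the gen() rates measured during this boot:

    # gen() throughput measured during this boot (MB/s), copied from the log above.
    gen_results = {
        "neonx8": 6402, "neonx4": 6587, "neonx2": 5805, "neonx1": 4505,
        "int64x8": 3418, "int64x4": 3842, "int64x2": 3612, "int64x1": 2742,
    }
    best = max(gen_results, key=gen_results.get)
    print(best, gen_results[best])   # neonx4 6587 -> "using algorithm neonx4 gen() 6587 MB/s"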
Jul 12 00:23:01.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:01.509115 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:23:01.509204 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:23:01.526696 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:23:01.526742 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:23:01.526986 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:23:01.527261 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:23:01.527482 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:55:6b:4b:d8:e9 Jul 12 00:23:01.530573 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:23:01.539054 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:23:01.539139 kernel: GPT:9289727 != 16777215 Jul 12 00:23:01.541397 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:23:01.542812 kernel: GPT:9289727 != 16777215 Jul 12 00:23:01.544806 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:23:01.546467 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:23:01.550779 (udev-worker)[577]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:23:01.623576 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (574) Jul 12 00:23:01.732946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:23:01.733164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:23:01.757941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:23:01.772931 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:23:01.796954 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:23:01.802823 systemd[1]: Starting disk-uuid.service... Jul 12 00:23:01.821584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:23:01.824688 disk-uuid[675]: Primary Header is updated. Jul 12 00:23:01.824688 disk-uuid[675]: Secondary Entries is updated. Jul 12 00:23:01.824688 disk-uuid[675]: Secondary Header is updated. Jul 12 00:23:02.856352 disk-uuid[676]: The operation has completed successfully. Jul 12 00:23:02.858751 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:23:03.029830 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:23:03.032318 systemd[1]: Finished disk-uuid.service. Jul 12 00:23:03.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.055687 systemd[1]: Starting verity-setup.service... Jul 12 00:23:03.092576 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:23:03.196181 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:23:03.201103 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:23:03.208905 systemd[1]: Finished verity-setup.service. 
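The GPT complaints above ("9289727 != 16777215") are the usual first-boot symptom of a disk image written to a larger block device: the backup GPT header still sits at the end of the original image rather than at the last LBA of the 8 GiB EBS volume, and the disk-uuid lines that follow are those headers being rewritten in place. The arithmetic, assuming 512-byte sectors:

    # Why GPT reports "9289727 != 16777215" on first boot (512-byte sectors assumed).
    SECTOR = 512
    image_last_lba = 9_289_727    # where the image put its backup GPT header
    disk_last_lba = 16_777_215    # actual last LBA of the attached volume

    print((image_last_lba + 1) * SECTOR / 2**30)   # ~4.43 GiB original image size
    print((disk_last_lba + 1) * SECTOR / 2**30)    # 8.0 GiB volume size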
Jul 12 00:23:03.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.299580 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:23:03.300412 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:23:03.303869 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:23:03.308294 systemd[1]: Starting ignition-setup.service... Jul 12 00:23:03.315463 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:23:03.351611 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:23:03.351682 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:23:03.354359 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:23:03.366824 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:23:03.384748 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:23:03.401978 systemd[1]: Finished ignition-setup.service. Jul 12 00:23:03.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.407292 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:23:03.471363 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:23:03.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.475000 audit: BPF prog-id=9 op=LOAD Jul 12 00:23:03.478035 systemd[1]: Starting systemd-networkd.service... Jul 12 00:23:03.531214 systemd-networkd[1021]: lo: Link UP Jul 12 00:23:03.531239 systemd-networkd[1021]: lo: Gained carrier Jul 12 00:23:03.535224 systemd-networkd[1021]: Enumeration completed Jul 12 00:23:03.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.535790 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:23:03.536004 systemd[1]: Started systemd-networkd.service. Jul 12 00:23:03.538246 systemd[1]: Reached target network.target. Jul 12 00:23:03.542772 systemd[1]: Starting iscsiuio.service... Jul 12 00:23:03.562128 systemd-networkd[1021]: eth0: Link UP Jul 12 00:23:03.564698 systemd-networkd[1021]: eth0: Gained carrier Jul 12 00:23:03.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.572448 systemd[1]: Started iscsiuio.service. Jul 12 00:23:03.578712 systemd[1]: Starting iscsid.service... Jul 12 00:23:03.590227 iscsid[1026]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:23:03.590227 iscsid[1026]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 12 00:23:03.590227 iscsid[1026]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:23:03.590227 iscsid[1026]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:23:03.590227 iscsid[1026]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:23:03.590227 iscsid[1026]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:23:03.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.600688 systemd[1]: Started iscsid.service. Jul 12 00:23:03.600926 systemd-networkd[1021]: eth0: DHCPv4 address 172.31.16.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:23:03.614065 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:23:03.658738 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:23:03.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.662563 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:23:03.664612 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:23:03.670467 systemd[1]: Reached target remote-fs.target. Jul 12 00:23:03.679786 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:23:03.698986 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:23:03.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:03.704732 kernel: kauditd_printk_skb: 14 callbacks suppressed Jul 12 00:23:03.704781 kernel: audit: type=1130 audit(1752279783.701:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.314345 ignition[972]: Ignition 2.14.0 Jul 12 00:23:04.315022 ignition[972]: Stage: fetch-offline Jul 12 00:23:04.316063 ignition[972]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:04.316771 ignition[972]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:04.344062 ignition[972]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:04.347117 ignition[972]: Ignition finished successfully Jul 12 00:23:04.350562 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:23:04.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.363991 kernel: audit: type=1130 audit(1752279784.351:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.354499 systemd[1]: Starting ignition-fetch.service... 
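Ignition logs a SHA512 digest for every config it parses, which is why the same 6629d8e8... value for base.ign reappears at each stage. Reproducing such a digest is a one-liner, assuming (as a sketch, not a statement about Ignition internals) that the digest is taken over the raw file contents:

    # Recompute the digest Ignition logs for a config file (path taken from the log).
    import hashlib

    with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
    # Expected to match the "parsing config with SHA512: 6629d8e8..." lines above.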
Jul 12 00:23:04.374931 ignition[1045]: Ignition 2.14.0 Jul 12 00:23:04.374959 ignition[1045]: Stage: fetch Jul 12 00:23:04.375283 ignition[1045]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:04.375347 ignition[1045]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:04.392448 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:04.395246 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:04.413314 ignition[1045]: INFO : PUT result: OK Jul 12 00:23:04.417200 ignition[1045]: DEBUG : parsed url from cmdline: "" Jul 12 00:23:04.417200 ignition[1045]: INFO : no config URL provided Jul 12 00:23:04.417200 ignition[1045]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:23:04.417200 ignition[1045]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 12 00:23:04.417200 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:04.432864 ignition[1045]: INFO : PUT result: OK Jul 12 00:23:04.432864 ignition[1045]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:23:04.432864 ignition[1045]: INFO : GET result: OK Jul 12 00:23:04.432864 ignition[1045]: DEBUG : parsing config with SHA512: 25c5b26accf1956d4c53499fab75b4d945599263abd9f603f42ef3e13ed7316f91f82973533249270784e4295fd9e56be8106806ee71cb1264cda93495a24fab Jul 12 00:23:04.446627 unknown[1045]: fetched base config from "system" Jul 12 00:23:04.448861 unknown[1045]: fetched base config from "system" Jul 12 00:23:04.451089 unknown[1045]: fetched user config from "aws" Jul 12 00:23:04.454426 ignition[1045]: fetch: fetch complete Jul 12 00:23:04.454617 ignition[1045]: fetch: fetch passed Jul 12 00:23:04.454756 ignition[1045]: Ignition finished successfully Jul 12 00:23:04.462490 systemd[1]: Finished ignition-fetch.service. Jul 12 00:23:04.478584 kernel: audit: type=1130 audit(1752279784.463:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.466602 systemd[1]: Starting ignition-kargs.service... Jul 12 00:23:04.493749 ignition[1051]: Ignition 2.14.0 Jul 12 00:23:04.493781 ignition[1051]: Stage: kargs Jul 12 00:23:04.494144 ignition[1051]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:04.494212 ignition[1051]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:04.511520 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:04.514707 ignition[1051]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:04.518019 ignition[1051]: INFO : PUT result: OK Jul 12 00:23:04.523232 ignition[1051]: kargs: kargs passed Jul 12 00:23:04.524419 ignition[1051]: Ignition finished successfully Jul 12 00:23:04.528160 systemd[1]: Finished ignition-kargs.service. 
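The PUT/GET pairs in the fetch stage above are the IMDSv2 flow: a session token is requested first, then presented on the user-data request. A minimal sketch of those two calls with urllib (the 2019-10-01 path matches the GET in the log; the token TTL here is an arbitrary choice):

    # IMDSv2: PUT for a session token, then GET the user data with that token.
    import urllib.request

    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    data_req = urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req, timeout=5).read().decode(errors="replace"))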
Jul 12 00:23:04.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.546458 kernel: audit: type=1130 audit(1752279784.532:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.544266 systemd[1]: Starting ignition-disks.service... Jul 12 00:23:04.562919 ignition[1057]: Ignition 2.14.0 Jul 12 00:23:04.564947 ignition[1057]: Stage: disks Jul 12 00:23:04.566907 ignition[1057]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:04.569833 ignition[1057]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:04.582065 ignition[1057]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:04.585120 ignition[1057]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:04.589082 ignition[1057]: INFO : PUT result: OK Jul 12 00:23:04.594785 ignition[1057]: disks: disks passed Jul 12 00:23:04.594938 ignition[1057]: Ignition finished successfully Jul 12 00:23:04.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.611406 kernel: audit: type=1130 audit(1752279784.598:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.598112 systemd[1]: Finished ignition-disks.service. Jul 12 00:23:04.600594 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:23:04.611640 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:23:04.615280 systemd[1]: Reached target local-fs.target. Jul 12 00:23:04.621947 systemd[1]: Reached target sysinit.target. Jul 12 00:23:04.625515 systemd[1]: Reached target basic.target. Jul 12 00:23:04.630895 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:23:04.676358 systemd-fsck[1065]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:23:04.682409 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:23:04.695664 kernel: audit: type=1130 audit(1752279784.681:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:04.683940 systemd[1]: Mounting sysroot.mount... Jul 12 00:23:04.715550 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:23:04.716552 systemd[1]: Mounted sysroot.mount. Jul 12 00:23:04.719540 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:23:04.726344 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:23:04.727330 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Jul 12 00:23:04.727427 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:23:04.727493 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:23:04.748389 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:23:04.755306 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:23:04.764390 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:23:04.779729 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:23:04.786132 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1082) Jul 12 00:23:04.793581 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:23:04.793642 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:23:04.793666 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:23:04.800764 initrd-setup-root[1109]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:23:04.809701 initrd-setup-root[1119]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:23:04.819574 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:23:04.821498 initrd-setup-root[1129]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:23:04.827844 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:23:05.066213 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:23:05.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:05.080582 kernel: audit: type=1130 audit(1752279785.070:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:05.079148 systemd[1]: Starting ignition-mount.service... Jul 12 00:23:05.085446 systemd[1]: Starting sysroot-boot.service... Jul 12 00:23:05.100220 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:23:05.100399 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 12 00:23:05.123363 ignition[1147]: INFO : Ignition 2.14.0 Jul 12 00:23:05.123363 ignition[1147]: INFO : Stage: mount Jul 12 00:23:05.126980 ignition[1147]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:05.126980 ignition[1147]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:05.147715 systemd[1]: Finished sysroot-boot.service. Jul 12 00:23:05.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:05.159177 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:05.159177 ignition[1147]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:05.164671 kernel: audit: type=1130 audit(1752279785.149:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:05.167702 ignition[1147]: INFO : PUT result: OK Jul 12 00:23:05.173256 ignition[1147]: INFO : mount: mount passed Jul 12 00:23:05.175052 ignition[1147]: INFO : Ignition finished successfully Jul 12 00:23:05.179139 systemd[1]: Finished ignition-mount.service. Jul 12 00:23:05.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:05.183059 systemd[1]: Starting ignition-files.service... Jul 12 00:23:05.195600 kernel: audit: type=1130 audit(1752279785.180:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:05.203658 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:23:05.229592 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1157) Jul 12 00:23:05.236276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:23:05.236362 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:23:05.236387 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:23:05.246585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:23:05.252028 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:23:05.275549 ignition[1176]: INFO : Ignition 2.14.0 Jul 12 00:23:05.275549 ignition[1176]: INFO : Stage: files Jul 12 00:23:05.285523 ignition[1176]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:05.285523 ignition[1176]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:05.307861 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:05.316326 ignition[1176]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:05.316326 ignition[1176]: INFO : PUT result: OK Jul 12 00:23:05.331175 ignition[1176]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:23:05.340496 ignition[1176]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:23:05.340496 ignition[1176]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:23:05.370283 ignition[1176]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:23:05.376917 ignition[1176]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:23:05.383399 unknown[1176]: wrote ssh authorized keys file for user: core Jul 12 00:23:05.388079 ignition[1176]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:23:05.395081 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:23:05.403152 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:23:05.403152 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:23:05.416624 ignition[1176]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:23:05.464714 systemd-networkd[1021]: eth0: Gained 
IPv6LL Jul 12 00:23:05.510970 ignition[1176]: INFO : GET result: OK Jul 12 00:23:05.660319 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:23:05.660319 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:23:05.669903 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:23:05.669903 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:23:05.669903 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:23:05.669903 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:23:05.669903 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:23:05.700512 ignition[1176]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1524682989" Jul 12 00:23:05.700512 ignition[1176]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1524682989": device or resource busy Jul 12 00:23:05.700512 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1524682989", trying btrfs: device or resource busy Jul 12 00:23:05.700512 ignition[1176]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1524682989" Jul 12 00:23:05.700512 ignition[1176]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1524682989" Jul 12 00:23:05.718600 ignition[1176]: INFO : op(3): [started] unmounting "/mnt/oem1524682989" Jul 12 00:23:05.718600 ignition[1176]: INFO : op(3): [finished] unmounting "/mnt/oem1524682989" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:23:05.718600 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:23:05.718600 ignition[1176]: INFO : 
files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:23:05.718600 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:23:05.778883 ignition[1176]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513883933" Jul 12 00:23:05.778883 ignition[1176]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513883933": device or resource busy Jul 12 00:23:05.778883 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem513883933", trying btrfs: device or resource busy Jul 12 00:23:05.778883 ignition[1176]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513883933" Jul 12 00:23:05.778883 ignition[1176]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513883933" Jul 12 00:23:05.778883 ignition[1176]: INFO : op(6): [started] unmounting "/mnt/oem513883933" Jul 12 00:23:05.778883 ignition[1176]: INFO : op(6): [finished] unmounting "/mnt/oem513883933" Jul 12 00:23:05.778883 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:23:05.778883 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:23:05.778883 ignition[1176]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:23:05.808130 systemd[1]: mnt-oem513883933.mount: Deactivated successfully. Jul 12 00:23:06.305377 ignition[1176]: INFO : GET result: OK Jul 12 00:23:06.755613 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:23:06.760730 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:23:06.760730 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:23:06.775614 ignition[1176]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602395263" Jul 12 00:23:06.778917 ignition[1176]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602395263": device or resource busy Jul 12 00:23:06.778917 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3602395263", trying btrfs: device or resource busy Jul 12 00:23:06.786944 ignition[1176]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602395263" Jul 12 00:23:06.791568 ignition[1176]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602395263" Jul 12 00:23:06.791568 ignition[1176]: INFO : op(9): [started] unmounting "/mnt/oem3602395263" Jul 12 00:23:06.791568 ignition[1176]: INFO : op(9): [finished] unmounting "/mnt/oem3602395263" Jul 12 00:23:06.791568 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:23:06.791568 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:23:06.791568 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:23:06.810410 systemd[1]: 
mnt-oem3602395263.mount: Deactivated successfully. Jul 12 00:23:06.843355 ignition[1176]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4258159544" Jul 12 00:23:06.846640 ignition[1176]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4258159544": device or resource busy Jul 12 00:23:06.846640 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4258159544", trying btrfs: device or resource busy Jul 12 00:23:06.846640 ignition[1176]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4258159544" Jul 12 00:23:06.870212 ignition[1176]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4258159544" Jul 12 00:23:06.870212 ignition[1176]: INFO : op(c): [started] unmounting "/mnt/oem4258159544" Jul 12 00:23:06.870212 ignition[1176]: INFO : op(c): [finished] unmounting "/mnt/oem4258159544" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(13): [started] processing unit "nvidia.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(13): [finished] processing unit "nvidia.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(14): [started] processing unit "containerd.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(14): [finished] processing unit "containerd.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:23:06.870212 ignition[1176]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Jul 12 00:23:06.997736 kernel: audit: type=1130 audit(1752279786.906:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:06.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:06.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:06.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:06.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:06.857386 systemd[1]: mnt-oem4258159544.mount: Deactivated successfully. Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(1a): [started] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(1a): [finished] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:23:07.000249 ignition[1176]: INFO : files: files passed Jul 12 00:23:07.000249 ignition[1176]: INFO : Ignition finished successfully Jul 12 00:23:07.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:06.901337 systemd[1]: Finished ignition-files.service. Jul 12 00:23:06.915711 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:23:07.062331 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:23:06.931506 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:23:06.944829 systemd[1]: Starting ignition-quench.service... Jul 12 00:23:06.955308 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 12 00:23:06.955565 systemd[1]: Finished ignition-quench.service. Jul 12 00:23:06.957897 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:23:06.964986 systemd[1]: Reached target ignition-complete.target. Jul 12 00:23:06.973611 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:23:07.029630 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:23:07.031097 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:23:07.033280 systemd[1]: Reached target initrd-fs.target. Jul 12 00:23:07.038829 systemd[1]: Reached target initrd.target. Jul 12 00:23:07.044349 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:23:07.045951 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:23:07.109213 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:23:07.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.116747 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:23:07.138484 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:23:07.142291 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:23:07.146354 systemd[1]: Stopped target timers.target. Jul 12 00:23:07.149785 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:23:07.152219 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:23:07.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.156346 systemd[1]: Stopped target initrd.target. Jul 12 00:23:07.159761 systemd[1]: Stopped target basic.target. Jul 12 00:23:07.162969 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:23:07.166782 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:23:07.170494 systemd[1]: Stopped target initrd-root-device.target. Jul 12 00:23:07.175621 systemd[1]: Stopped target remote-fs.target. Jul 12 00:23:07.179271 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:23:07.182856 systemd[1]: Stopped target sysinit.target. Jul 12 00:23:07.186108 systemd[1]: Stopped target local-fs.target. Jul 12 00:23:07.189457 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:23:07.193037 systemd[1]: Stopped target swap.target. Jul 12 00:23:07.196143 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:23:07.198340 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:23:07.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.202057 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:23:07.205494 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:23:07.207700 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:23:07.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.211267 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:23:07.213953 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 12 00:23:07.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.218596 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:23:07.220860 systemd[1]: Stopped ignition-files.service. Jul 12 00:23:07.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.225955 systemd[1]: Stopping ignition-mount.service... Jul 12 00:23:07.240419 ignition[1214]: INFO : Ignition 2.14.0 Jul 12 00:23:07.240419 ignition[1214]: INFO : Stage: umount Jul 12 00:23:07.240419 ignition[1214]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:23:07.240419 ignition[1214]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:23:07.247578 systemd[1]: Stopping iscsiuio.service... Jul 12 00:23:07.273444 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:23:07.273444 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:23:07.273444 ignition[1214]: INFO : PUT result: OK Jul 12 00:23:07.264188 systemd[1]: Stopping sysroot-boot.service... Jul 12 00:23:07.287198 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:23:07.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.287512 systemd[1]: Stopped systemd-udev-trigger.service. Jul 12 00:23:07.302307 ignition[1214]: INFO : umount: umount passed Jul 12 00:23:07.302307 ignition[1214]: INFO : Ignition finished successfully Jul 12 00:23:07.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.290899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:23:07.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.291117 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:23:07.300307 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 12 00:23:07.302333 systemd[1]: Stopped iscsiuio.service. Jul 12 00:23:07.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.307076 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:23:07.307279 systemd[1]: Stopped ignition-mount.service. Jul 12 00:23:07.311503 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:23:07.311800 systemd[1]: Stopped ignition-disks.service. 
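Each Ignition stage above (disks, mount, files, and now umount) logs the same SHA512 digest for the base config it parsed, 6629d8e8.... A short sketch of how that digest could be reproduced locally for verification; the path is the one named in the log, while hashing it yourself is an assumption about how you would check it, not something the log performs:

    import hashlib

    # Recompute the digest that each Ignition stage logs for the base config.
    with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())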
Jul 12 00:23:07.325787 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:23:07.325903 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:23:07.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.340678 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:23:07.340788 systemd[1]: Stopped ignition-fetch.service. Jul 12 00:23:07.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.346567 systemd[1]: Stopped target network.target. Jul 12 00:23:07.350057 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:23:07.350191 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:23:07.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.356332 systemd[1]: Stopped target paths.target. Jul 12 00:23:07.358169 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:23:07.362403 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:23:07.366268 systemd[1]: Stopped target slices.target. Jul 12 00:23:07.369464 systemd[1]: Stopped target sockets.target. Jul 12 00:23:07.371489 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:23:07.373068 systemd[1]: Closed iscsid.socket. Jul 12 00:23:07.378127 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:23:07.378234 systemd[1]: Closed iscsiuio.socket. Jul 12 00:23:07.381644 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:23:07.385084 systemd[1]: Stopped ignition-setup.service. Jul 12 00:23:07.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.388815 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:23:07.392636 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:23:07.393689 systemd-networkd[1021]: eth0: DHCPv6 lease lost Jul 12 00:23:07.402216 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:23:07.403669 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:23:07.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.403965 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:23:07.414491 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:23:07.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:07.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.437000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:23:07.414813 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:23:07.440000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:23:07.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.423611 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:23:07.423840 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:23:07.434771 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:23:07.434994 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:23:07.442400 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:23:07.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.442478 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:23:07.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.448848 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:23:07.448953 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:23:07.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.452240 systemd[1]: Stopping network-cleanup.service... Jul 12 00:23:07.454505 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:23:07.454644 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:23:07.468633 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:23:07.470646 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:23:07.474118 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:23:07.474217 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:23:07.490510 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:23:07.505724 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:23:07.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.506131 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:23:07.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:07.512335 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:23:07.512601 systemd[1]: Stopped network-cleanup.service. Jul 12 00:23:07.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.516659 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:23:07.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.516738 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:23:07.518860 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:23:07.518947 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:23:07.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.522391 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:23:07.522499 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:23:07.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.524592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:23:07.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.524704 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:23:07.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:07.528761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:23:07.528863 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:23:07.534166 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:23:07.549378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:23:07.549509 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 12 00:23:07.553235 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:23:07.553338 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:23:07.557738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:23:07.557838 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:23:07.561714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:23:07.561912 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:23:07.566975 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:23:07.571237 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:23:07.621658 systemd[1]: Switching root. 
Jul 12 00:23:07.624000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:23:07.624000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:23:07.624000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:23:07.625000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:23:07.625000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:23:07.654596 iscsid[1026]: iscsid shutting down. Jul 12 00:23:07.656251 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Jul 12 00:23:07.656348 systemd-journald[310]: Journal stopped Jul 12 00:23:13.104843 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:23:13.104974 kernel: SELinux: Class anon_inode not defined in policy. Jul 12 00:23:13.105010 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:23:13.105048 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:23:13.105080 kernel: SELinux: policy capability open_perms=1 Jul 12 00:23:13.105111 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:23:13.105149 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:23:13.105181 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:23:13.105215 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:23:13.105248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:23:13.105278 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:23:13.105309 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 12 00:23:13.105342 kernel: audit: type=1403 audit(1752279788.702:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:23:13.105376 systemd[1]: Successfully loaded SELinux policy in 121.680ms. Jul 12 00:23:13.105432 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.507ms. Jul 12 00:23:13.105495 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:23:13.105567 systemd[1]: Detected virtualization amazon. Jul 12 00:23:13.105603 systemd[1]: Detected architecture arm64. Jul 12 00:23:13.105636 systemd[1]: Detected first boot. Jul 12 00:23:13.105670 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:23:13.105703 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 12 00:23:13.105739 kernel: audit: type=1400 audit(1752279789.025:83): avc: denied { associate } for pid=1265 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:23:13.105779 kernel: audit: type=1300 audit(1752279789.025:83): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8ae0 a2=40000cea00 a3=32 items=0 ppid=1248 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:23:13.105811 kernel: audit: type=1327 audit(1752279789.025:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:23:13.105846 kernel: audit: type=1400 audit(1752279789.029:84): avc: denied { associate } for pid=1265 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:23:13.105879 kernel: audit: type=1300 audit(1752279789.029:84): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1248 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:23:13.105923 kernel: audit: type=1307 audit(1752279789.029:84): cwd="/" Jul 12 00:23:13.105961 kernel: audit: type=1302 audit(1752279789.029:84): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:23:13.105992 kernel: audit: type=1302 audit(1752279789.029:84): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:23:13.106028 kernel: audit: type=1327 audit(1752279789.029:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:23:13.106061 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:23:13.106094 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:23:13.106128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:23:13.106161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:23:13.106196 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:23:13.106228 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. 
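The kernel audit records above carry their own epoch timestamps, e.g. audit(1752279789.025:83), which are earlier than the Jul 12 00:23:13 journal lines that print them (the journal was stopped across the switch-root above and catches up on buffered kernel messages afterwards). A small sketch of converting one of those epochs back to the original event time:

    from datetime import datetime, timezone

    # Epoch seconds from an audit record id such as audit(1752279789.025:83).
    print(datetime.fromtimestamp(1752279789.025, tz=timezone.utc))
    # 2025-07-12 00:23:09.025000+00:00 -- roughly four seconds before the
    # journal lines above that carry the record.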
Jul 12 00:23:13.106260 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:23:13.106290 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:23:13.106320 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 12 00:23:13.106352 systemd[1]: Created slice system-getty.slice. Jul 12 00:23:13.106385 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:23:13.106417 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:23:13.106451 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:23:13.106483 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:23:13.106516 systemd[1]: Created slice user.slice. Jul 12 00:23:13.119628 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:23:13.119671 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:23:13.119703 systemd[1]: Set up automount boot.automount. Jul 12 00:23:13.119741 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:23:13.119774 systemd[1]: Reached target integritysetup.target. Jul 12 00:23:13.119805 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:23:13.119843 systemd[1]: Reached target remote-fs.target. Jul 12 00:23:13.119875 systemd[1]: Reached target slices.target. Jul 12 00:23:13.119904 systemd[1]: Reached target swap.target. Jul 12 00:23:13.119933 systemd[1]: Reached target torcx.target. Jul 12 00:23:13.119963 systemd[1]: Reached target veritysetup.target. Jul 12 00:23:13.119994 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:23:13.120024 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:23:13.120056 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:23:13.120092 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:23:13.120126 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:23:13.120156 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:23:13.120188 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:23:13.120218 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:23:13.120247 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:23:13.120305 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:23:13.120379 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:23:13.120417 systemd[1]: Mounting media.mount... Jul 12 00:23:13.120449 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:23:13.120481 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:23:13.120520 systemd[1]: Mounting tmp.mount... Jul 12 00:23:13.120604 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:23:13.120724 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:23:13.120756 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:23:13.120788 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:23:13.120828 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:23:13.120860 systemd[1]: Starting modprobe@drm.service... Jul 12 00:23:13.120892 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:23:13.120929 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:23:13.120968 systemd[1]: Starting modprobe@loop.service... Jul 12 00:23:13.121000 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:23:13.121031 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jul 12 00:23:13.121062 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 12 00:23:13.121095 systemd[1]: Starting systemd-journald.service... Jul 12 00:23:13.121124 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:23:13.121154 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:23:13.121183 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:23:13.121216 kernel: fuse: init (API version 7.34) Jul 12 00:23:13.122708 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:23:13.122782 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:23:13.122997 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:23:13.123043 systemd[1]: Mounted media.mount. Jul 12 00:23:13.123077 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:23:13.123109 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:23:13.123139 systemd[1]: Mounted tmp.mount. Jul 12 00:23:13.123169 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:23:13.123198 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:23:13.123236 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:23:13.123266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:23:13.123296 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:23:13.123327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:23:13.123357 systemd[1]: Finished modprobe@drm.service. Jul 12 00:23:13.123387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:23:13.123416 kernel: loop: module loaded Jul 12 00:23:13.123446 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:23:13.123476 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:23:13.123512 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:23:13.123578 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:23:13.123611 systemd[1]: Finished modprobe@loop.service. Jul 12 00:23:13.123644 systemd-journald[1363]: Journal started Jul 12 00:23:13.123763 systemd-journald[1363]: Runtime Journal (/run/log/journal/ec212b89020011193a90a4d620c8649a) is 8.0M, max 75.4M, 67.4M free. Jul 12 00:23:12.779000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:23:12.779000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 12 00:23:13.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:13.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.100000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:23:13.151185 systemd[1]: Started systemd-journald.service. Jul 12 00:23:13.100000 audit[1363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd46a1570 a2=4000 a3=1 items=0 ppid=1 pid=1363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:23:13.100000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:23:13.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:13.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.133155 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:23:13.136456 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:23:13.139384 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:23:13.141934 systemd[1]: Reached target network-pre.target. Jul 12 00:23:13.146265 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:23:13.158149 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:23:13.159987 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:23:13.166173 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:23:13.176446 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:23:13.178654 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:23:13.181867 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:23:13.185720 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:23:13.190572 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:23:13.201391 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:23:13.204025 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:23:13.228760 systemd-journald[1363]: Time spent on flushing to /var/log/journal/ec212b89020011193a90a4d620c8649a is 60.540ms for 1069 entries. Jul 12 00:23:13.228760 systemd-journald[1363]: System Journal (/var/log/journal/ec212b89020011193a90a4d620c8649a) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:23:13.313287 systemd-journald[1363]: Received client request to flush runtime journal. Jul 12 00:23:13.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.229691 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:23:13.233917 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:23:13.277766 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:23:13.317363 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:23:13.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.328063 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:23:13.334458 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:23:13.409329 systemd[1]: Finished systemd-sysusers.service. Jul 12 00:23:13.413774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Jul 12 00:23:13.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.465333 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:23:13.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:13.469988 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:23:13.491273 udevadm[1420]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:23:13.506705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:23:13.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.124231 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:23:14.128811 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 12 00:23:14.128863 kernel: audit: type=1130 audit(1752279794.124:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.128626 systemd[1]: Starting systemd-udevd.service... Jul 12 00:23:14.175628 systemd-udevd[1423]: Using default interface naming scheme 'v252'. Jul 12 00:23:14.229907 systemd[1]: Started systemd-udevd.service. Jul 12 00:23:14.239248 systemd[1]: Starting systemd-networkd.service... Jul 12 00:23:14.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.257559 kernel: audit: type=1130 audit(1752279794.233:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.262969 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:23:14.334981 systemd[1]: Found device dev-ttyS0.device. Jul 12 00:23:14.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.359576 systemd[1]: Started systemd-userdbd.service. Jul 12 00:23:14.373561 kernel: audit: type=1130 audit(1752279794.360:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.388858 (udev-worker)[1433]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:23:14.535306 systemd-networkd[1435]: lo: Link UP Jul 12 00:23:14.535934 systemd-networkd[1435]: lo: Gained carrier Jul 12 00:23:14.537113 systemd-networkd[1435]: Enumeration completed Jul 12 00:23:14.537348 systemd[1]: Started systemd-networkd.service. Jul 12 00:23:14.541901 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:23:14.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.555672 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:23:14.556580 kernel: audit: type=1130 audit(1752279794.538:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.564564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:23:14.564960 systemd-networkd[1435]: eth0: Link UP Jul 12 00:23:14.565348 systemd-networkd[1435]: eth0: Gained carrier Jul 12 00:23:14.576820 systemd-networkd[1435]: eth0: DHCPv4 address 172.31.16.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:23:14.768182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:23:14.774452 systemd[1]: Finished systemd-udev-settle.service. Jul 12 00:23:14.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.781155 systemd[1]: Starting lvm2-activation-early.service... Jul 12 00:23:14.786058 kernel: audit: type=1130 audit(1752279794.774:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.870274 lvm[1543]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:23:14.906325 systemd[1]: Finished lvm2-activation-early.service. Jul 12 00:23:14.908666 systemd[1]: Reached target cryptsetup.target. Jul 12 00:23:14.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.921432 systemd[1]: Starting lvm2-activation.service... Jul 12 00:23:14.927634 kernel: audit: type=1130 audit(1752279794.904:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.931953 lvm[1545]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:23:14.970395 systemd[1]: Finished lvm2-activation.service. Jul 12 00:23:14.972553 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:23:14.974515 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:23:14.974601 systemd[1]: Reached target local-fs.target. Jul 12 00:23:14.981921 systemd[1]: Reached target machines.target. 
Jul 12 00:23:14.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.986407 systemd[1]: Starting ldconfig.service... Jul 12 00:23:14.995563 kernel: audit: type=1130 audit(1752279794.970:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:14.996903 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:23:14.997013 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:14.999455 systemd[1]: Starting systemd-boot-update.service... Jul 12 00:23:15.003468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 12 00:23:15.008601 systemd[1]: Starting systemd-machine-id-commit.service... Jul 12 00:23:15.013211 systemd[1]: Starting systemd-sysext.service... Jul 12 00:23:15.028600 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1548 (bootctl) Jul 12 00:23:15.031143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 12 00:23:15.054619 systemd[1]: Unmounting usr-share-oem.mount... Jul 12 00:23:15.065693 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 12 00:23:15.066260 systemd[1]: Unmounted usr-share-oem.mount. Jul 12 00:23:15.104583 kernel: loop0: detected capacity change from 0 to 203944 Jul 12 00:23:15.105826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 12 00:23:15.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.118574 kernel: audit: type=1130 audit(1752279795.104:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.207922 systemd-fsck[1562]: fsck.fat 4.2 (2021-01-31) Jul 12 00:23:15.207922 systemd-fsck[1562]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Jul 12 00:23:15.211164 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 12 00:23:15.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.216670 systemd[1]: Mounting boot.mount... Jul 12 00:23:15.233628 kernel: audit: type=1130 audit(1752279795.212:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.273161 systemd[1]: Mounted boot.mount. Jul 12 00:23:15.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:15.312925 systemd[1]: Finished systemd-boot-update.service. Jul 12 00:23:15.326589 kernel: audit: type=1130 audit(1752279795.313:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.339571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:23:15.375607 kernel: loop1: detected capacity change from 0 to 203944 Jul 12 00:23:15.387883 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:23:15.391018 systemd[1]: Finished systemd-machine-id-commit.service. Jul 12 00:23:15.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.394900 (sd-sysext)[1581]: Using extensions 'kubernetes'. Jul 12 00:23:15.396754 (sd-sysext)[1581]: Merged extensions into '/usr'. Jul 12 00:23:15.442305 systemd[1]: Mounting usr-share-oem.mount... Jul 12 00:23:15.444636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:23:15.447515 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:23:15.452475 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:23:15.460236 systemd[1]: Starting modprobe@loop.service... Jul 12 00:23:15.462251 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:23:15.462651 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:15.464976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:23:15.467127 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:23:15.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.470612 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:23:15.471088 systemd[1]: Finished modprobe@loop.service. Jul 12 00:23:15.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.481130 systemd[1]: Mounted usr-share-oem.mount. Jul 12 00:23:15.484784 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:23:15.496710 systemd[1]: Finished systemd-sysext.service. Jul 12 00:23:15.504007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:23:15.504392 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 12 00:23:15.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:15.509374 systemd[1]: Starting ensure-sysext.service... Jul 12 00:23:15.513836 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:23:15.521920 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 12 00:23:15.535109 systemd[1]: Reloading. Jul 12 00:23:15.551227 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 12 00:23:15.554301 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:23:15.559520 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:23:15.680433 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2025-07-12T00:23:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:23:15.680505 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2025-07-12T00:23:15Z" level=info msg="torcx already run" Jul 12 00:23:15.896714 systemd-networkd[1435]: eth0: Gained IPv6LL Jul 12 00:23:15.977678 ldconfig[1547]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:23:16.037837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:23:16.037879 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:23:16.086012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:23:16.238000 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 12 00:23:16.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.242374 systemd[1]: Finished ldconfig.service. Jul 12 00:23:16.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.246895 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Jul 12 00:23:16.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.257553 systemd[1]: Starting audit-rules.service... Jul 12 00:23:16.263361 systemd[1]: Starting clean-ca-certificates.service... Jul 12 00:23:16.276357 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 12 00:23:16.283762 systemd[1]: Starting systemd-resolved.service... Jul 12 00:23:16.289807 systemd[1]: Starting systemd-timesyncd.service... Jul 12 00:23:16.297748 systemd[1]: Starting systemd-update-utmp.service... Jul 12 00:23:16.304048 systemd[1]: Finished clean-ca-certificates.service. Jul 12 00:23:16.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.321094 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:23:16.325498 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.328851 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:23:16.333488 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:23:16.338165 systemd[1]: Starting modprobe@loop.service... Jul 12 00:23:16.340105 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.340427 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:16.340747 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:23:16.339000 audit[1693]: SYSTEM_BOOT pid=1693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.357690 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.358159 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.358521 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:16.358816 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:23:16.360284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:23:16.360715 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:23:16.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:23:16.368070 systemd[1]: Finished systemd-update-utmp.service. Jul 12 00:23:16.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.378230 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:23:16.378714 systemd[1]: Finished modprobe@loop.service. Jul 12 00:23:16.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.381422 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.390937 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.394926 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:23:16.399705 systemd[1]: Starting modprobe@drm.service... Jul 12 00:23:16.405462 systemd[1]: Starting modprobe@loop.service... Jul 12 00:23:16.413899 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.414244 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:16.414849 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:23:16.419188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:23:16.421710 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:23:16.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.425482 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:23:16.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.425894 systemd[1]: Finished modprobe@loop.service. Jul 12 00:23:16.431515 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:23:16.434638 systemd[1]: Finished ensure-sysext.service. 
Jul 12 00:23:16.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.442353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:23:16.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.442772 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:23:16.444941 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.448359 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:23:16.448787 systemd[1]: Finished modprobe@drm.service. Jul 12 00:23:16.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.482271 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 12 00:23:16.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.487441 systemd[1]: Starting systemd-update-done.service... Jul 12 00:23:16.513700 systemd[1]: Finished systemd-update-done.service. Jul 12 00:23:16.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:16.542000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 12 00:23:16.542000 audit[1720]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffa38c6c0 a2=420 a3=0 items=0 ppid=1681 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:23:16.542000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 12 00:23:16.544654 augenrules[1720]: No rules Jul 12 00:23:16.546357 systemd[1]: Finished audit-rules.service. Jul 12 00:23:16.601824 systemd[1]: Started systemd-timesyncd.service. Jul 12 00:23:16.604144 systemd[1]: Reached target time-set.target. Jul 12 00:23:16.622842 systemd-timesyncd[1690]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Jul 12 00:23:16.622983 systemd-timesyncd[1690]: Initial clock synchronization to Sat 2025-07-12 00:23:16.437134 UTC. Jul 12 00:23:16.630513 systemd-resolved[1688]: Positive Trust Anchors: Jul 12 00:23:16.631112 systemd-resolved[1688]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:23:16.631268 systemd-resolved[1688]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:23:16.665234 systemd-resolved[1688]: Defaulting to hostname 'linux'. Jul 12 00:23:16.668612 systemd[1]: Started systemd-resolved.service. Jul 12 00:23:16.670680 systemd[1]: Reached target network.target. Jul 12 00:23:16.672507 systemd[1]: Reached target network-online.target. Jul 12 00:23:16.674942 systemd[1]: Reached target nss-lookup.target. Jul 12 00:23:16.676785 systemd[1]: Reached target sysinit.target. Jul 12 00:23:16.678696 systemd[1]: Started motdgen.path. Jul 12 00:23:16.680340 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 12 00:23:16.683732 systemd[1]: Started logrotate.timer. Jul 12 00:23:16.685567 systemd[1]: Started mdadm.timer. Jul 12 00:23:16.687064 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 12 00:23:16.688980 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:23:16.689031 systemd[1]: Reached target paths.target. Jul 12 00:23:16.691140 systemd[1]: Reached target timers.target. Jul 12 00:23:16.693384 systemd[1]: Listening on dbus.socket. Jul 12 00:23:16.697248 systemd[1]: Starting docker.socket... Jul 12 00:23:16.703469 systemd[1]: Listening on sshd.socket. Jul 12 00:23:16.705820 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:16.706798 systemd[1]: Listening on docker.socket. Jul 12 00:23:16.708866 systemd[1]: Reached target sockets.target. Jul 12 00:23:16.710937 systemd[1]: Reached target basic.target. Jul 12 00:23:16.713187 systemd[1]: System is tainted: cgroupsv1 Jul 12 00:23:16.713563 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.713788 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:23:16.716492 systemd[1]: Started amazon-ssm-agent.service. Jul 12 00:23:16.722770 systemd[1]: Starting containerd.service... Jul 12 00:23:16.727982 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 12 00:23:16.733361 systemd[1]: Starting dbus.service... Jul 12 00:23:16.737583 systemd[1]: Starting enable-oem-cloudinit.service... Jul 12 00:23:16.746030 systemd[1]: Starting extend-filesystems.service... Jul 12 00:23:16.747990 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 12 00:23:16.766422 systemd[1]: Starting kubelet.service... Jul 12 00:23:16.776091 systemd[1]: Starting motdgen.service... Jul 12 00:23:16.803550 systemd[1]: Started nvidia.service. Jul 12 00:23:16.809000 systemd[1]: Starting prepare-helm.service... Jul 12 00:23:16.817773 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 12 00:23:16.822744 systemd[1]: Starting sshd-keygen.service... 
Jul 12 00:23:16.836122 systemd[1]: Starting systemd-logind.service... Jul 12 00:23:16.843817 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:23:16.843974 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:23:16.847623 systemd[1]: Starting update-engine.service... Jul 12 00:23:16.853799 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 12 00:23:16.868814 jq[1750]: true Jul 12 00:23:16.928852 jq[1735]: false Jul 12 00:23:16.929329 tar[1758]: linux-arm64/helm Jul 12 00:23:16.932998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:23:16.933619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 12 00:23:16.949823 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:23:16.950415 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 12 00:23:16.997102 jq[1760]: true Jul 12 00:23:17.082000 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:23:17.084893 extend-filesystems[1736]: Found loop1 Jul 12 00:23:17.084893 extend-filesystems[1736]: Found nvme0n1 Jul 12 00:23:17.084893 extend-filesystems[1736]: Found nvme0n1p1 Jul 12 00:23:17.084893 extend-filesystems[1736]: Found nvme0n1p2 Jul 12 00:23:17.084893 extend-filesystems[1736]: Found nvme0n1p3 Jul 12 00:23:17.084893 extend-filesystems[1736]: Found usr Jul 12 00:23:17.082617 systemd[1]: Finished motdgen.service. Jul 12 00:23:17.098719 extend-filesystems[1736]: Found nvme0n1p4 Jul 12 00:23:17.098719 extend-filesystems[1736]: Found nvme0n1p6 Jul 12 00:23:17.098719 extend-filesystems[1736]: Found nvme0n1p7 Jul 12 00:23:17.098719 extend-filesystems[1736]: Found nvme0n1p9 Jul 12 00:23:17.098719 extend-filesystems[1736]: Checking size of /dev/nvme0n1p9 Jul 12 00:23:17.166831 extend-filesystems[1736]: Resized partition /dev/nvme0n1p9 Jul 12 00:23:17.174087 dbus-daemon[1734]: [system] SELinux support is enabled Jul 12 00:23:17.179987 systemd[1]: Started dbus.service. Jul 12 00:23:17.185389 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:23:17.191785 extend-filesystems[1798]: resize2fs 1.46.5 (30-Dec-2021) Jul 12 00:23:17.185433 systemd[1]: Reached target system-config.target. Jul 12 00:23:17.189313 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:23:17.189360 systemd[1]: Reached target user-config.target. Jul 12 00:23:17.221571 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 12 00:23:17.259120 dbus-daemon[1734]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1435 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 12 00:23:17.269178 systemd[1]: Starting systemd-hostnamed.service... Jul 12 00:23:17.296368 update_engine[1748]: I0712 00:23:17.294310 1748 main.cc:92] Flatcar Update Engine starting Jul 12 00:23:17.350549 update_engine[1748]: I0712 00:23:17.340840 1748 update_check_scheduler.cc:74] Next update check in 5m59s Jul 12 00:23:17.340756 systemd[1]: Started update-engine.service. 
Jul 12 00:23:17.375811 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 12 00:23:17.375873 bash[1805]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:23:17.369680 systemd[1]: Started locksmithd.service. Jul 12 00:23:17.376979 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 12 00:23:17.390964 extend-filesystems[1798]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 12 00:23:17.390964 extend-filesystems[1798]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:23:17.390964 extend-filesystems[1798]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 12 00:23:17.408120 extend-filesystems[1736]: Resized filesystem in /dev/nvme0n1p9 Jul 12 00:23:17.396978 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:23:17.397600 systemd[1]: Finished extend-filesystems.service. Jul 12 00:23:17.547045 amazon-ssm-agent[1730]: 2025/07/12 00:23:17 Failed to load instance info from vault. RegistrationKey does not exist. Jul 12 00:23:17.559808 amazon-ssm-agent[1730]: Initializing new seelog logger Jul 12 00:23:17.567654 amazon-ssm-agent[1730]: New Seelog Logger Creation Complete Jul 12 00:23:17.567951 amazon-ssm-agent[1730]: 2025/07/12 00:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:23:17.569316 amazon-ssm-agent[1730]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:23:17.570221 amazon-ssm-agent[1730]: 2025/07/12 00:23:17 processing appconfig overrides Jul 12 00:23:17.585394 systemd[1]: nvidia.service: Deactivated successfully. Jul 12 00:23:17.641973 systemd-logind[1747]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:23:17.642035 systemd-logind[1747]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 12 00:23:17.646330 systemd-logind[1747]: New seat seat0. Jul 12 00:23:17.658516 systemd[1]: Started systemd-logind.service. Jul 12 00:23:17.681232 env[1764]: time="2025-07-12T00:23:17.681124358Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 12 00:23:17.870890 env[1764]: time="2025-07-12T00:23:17.870825499Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:23:17.871381 env[1764]: time="2025-07-12T00:23:17.871341125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:23:17.888413 env[1764]: time="2025-07-12T00:23:17.888121051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:23:17.888413 env[1764]: time="2025-07-12T00:23:17.888189025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:23:17.889001 env[1764]: time="2025-07-12T00:23:17.888950279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:23:17.889179 env[1764]: time="2025-07-12T00:23:17.889147497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:23:17.889303 env[1764]: time="2025-07-12T00:23:17.889270985Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 12 00:23:17.889413 env[1764]: time="2025-07-12T00:23:17.889385049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:23:17.889742 env[1764]: time="2025-07-12T00:23:17.889707009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:23:17.890546 env[1764]: time="2025-07-12T00:23:17.890468310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:23:17.891458 env[1764]: time="2025-07-12T00:23:17.891400841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:23:17.891697 env[1764]: time="2025-07-12T00:23:17.891660266Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:23:17.892478 env[1764]: time="2025-07-12T00:23:17.891963822Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 12 00:23:17.892872 env[1764]: time="2025-07-12T00:23:17.892802228Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.917885149Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.917955327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918003245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918227188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918267863Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918301211Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918333199Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918928791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.918981233Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.919016645Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.919048575Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.919079368Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:23:17.919564 env[1764]: time="2025-07-12T00:23:17.919309757Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:23:17.920375 env[1764]: time="2025-07-12T00:23:17.920330272Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:23:17.921124 env[1764]: time="2025-07-12T00:23:17.921083496Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:23:17.921317 env[1764]: time="2025-07-12T00:23:17.921284946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.921450 env[1764]: time="2025-07-12T00:23:17.921419815Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:23:17.921843 env[1764]: time="2025-07-12T00:23:17.921799587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922047 env[1764]: time="2025-07-12T00:23:17.922004670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922187 env[1764]: time="2025-07-12T00:23:17.922153934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922320 env[1764]: time="2025-07-12T00:23:17.922289308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922442 env[1764]: time="2025-07-12T00:23:17.922413968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922598 env[1764]: time="2025-07-12T00:23:17.922568882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.922742 env[1764]: time="2025-07-12T00:23:17.922712097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.923243 env[1764]: time="2025-07-12T00:23:17.923200142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.923429 env[1764]: time="2025-07-12T00:23:17.923398332Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:23:17.923924 env[1764]: time="2025-07-12T00:23:17.923885521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.924781 env[1764]: time="2025-07-12T00:23:17.924735837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.927659 env[1764]: time="2025-07-12T00:23:17.927602214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.927894 env[1764]: time="2025-07-12T00:23:17.927861064Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:23:17.928037 env[1764]: time="2025-07-12T00:23:17.927990390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 12 00:23:17.928623 env[1764]: time="2025-07-12T00:23:17.928582124Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:23:17.928799 env[1764]: time="2025-07-12T00:23:17.928760247Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 12 00:23:17.930709 env[1764]: time="2025-07-12T00:23:17.930655962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:23:17.932180 env[1764]: time="2025-07-12T00:23:17.931664919Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:23:17.934122 env[1764]: time="2025-07-12T00:23:17.933282865Z" level=info msg="Connect containerd service" Jul 12 00:23:17.934122 env[1764]: time="2025-07-12T00:23:17.933430523Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:23:17.943851 env[1764]: time="2025-07-12T00:23:17.942186678Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:23:17.944787 env[1764]: time="2025-07-12T00:23:17.944736300Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 12 00:23:17.947328 env[1764]: time="2025-07-12T00:23:17.947278056Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:23:17.950569 systemd[1]: Started containerd.service. Jul 12 00:23:17.956643 env[1764]: time="2025-07-12T00:23:17.956561160Z" level=info msg="Start subscribing containerd event" Jul 12 00:23:17.956756 env[1764]: time="2025-07-12T00:23:17.956687391Z" level=info msg="Start recovering state" Jul 12 00:23:17.956929 env[1764]: time="2025-07-12T00:23:17.956862501Z" level=info msg="Start event monitor" Jul 12 00:23:17.957006 env[1764]: time="2025-07-12T00:23:17.956924275Z" level=info msg="Start snapshots syncer" Jul 12 00:23:17.957006 env[1764]: time="2025-07-12T00:23:17.956950871Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:23:17.957006 env[1764]: time="2025-07-12T00:23:17.956992999Z" level=info msg="Start streaming server" Jul 12 00:23:18.003692 env[1764]: time="2025-07-12T00:23:18.003631324Z" level=info msg="containerd successfully booted in 0.371202s" Jul 12 00:23:18.150230 coreos-metadata[1733]: Jul 12 00:23:18.150 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:23:18.157788 coreos-metadata[1733]: Jul 12 00:23:18.157 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 12 00:23:18.161698 coreos-metadata[1733]: Jul 12 00:23:18.161 INFO Fetch successful Jul 12 00:23:18.161698 coreos-metadata[1733]: Jul 12 00:23:18.161 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:23:18.167926 coreos-metadata[1733]: Jul 12 00:23:18.167 INFO Fetch successful Jul 12 00:23:18.170919 unknown[1733]: wrote ssh authorized keys file for user: core Jul 12 00:23:18.204104 update-ssh-keys[1891]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:23:18.204983 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 12 00:23:18.283019 dbus-daemon[1734]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:23:18.283294 systemd[1]: Started systemd-hostnamed.service. Jul 12 00:23:18.287157 dbus-daemon[1734]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1806 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:23:18.293139 systemd[1]: Starting polkit.service... Jul 12 00:23:18.330015 polkitd[1910]: Started polkitd version 121 Jul 12 00:23:18.368002 polkitd[1910]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:23:18.368138 polkitd[1910]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:23:18.375646 polkitd[1910]: Finished loading, compiling and executing 2 rules Jul 12 00:23:18.376561 dbus-daemon[1734]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:23:18.376831 systemd[1]: Started polkit.service. Jul 12 00:23:18.378802 polkitd[1910]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:23:18.420116 systemd-hostnamed[1806]: Hostname set to (transient) Jul 12 00:23:18.420307 systemd-resolved[1688]: System hostname changed to 'ip-172-31-16-189'. 
Jul 12 00:23:18.437863 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Create new startup processor Jul 12 00:23:18.440442 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [LongRunningPluginsManager] registered plugins: {} Jul 12 00:23:18.451322 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing bookkeeping folders Jul 12 00:23:18.451575 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO removing the completed state files Jul 12 00:23:18.451719 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing bookkeeping folders for long running plugins Jul 12 00:23:18.451897 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 12 00:23:18.452037 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing healthcheck folders for long running plugins Jul 12 00:23:18.452171 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing locations for inventory plugin Jul 12 00:23:18.452322 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing default location for custom inventory Jul 12 00:23:18.452473 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing default location for file inventory Jul 12 00:23:18.452634 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Initializing default location for role inventory Jul 12 00:23:18.452759 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Init the cloudwatchlogs publisher Jul 12 00:23:18.452913 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:configurePackage Jul 12 00:23:18.453044 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:softwareInventory Jul 12 00:23:18.453167 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 12 00:23:18.453297 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:refreshAssociation Jul 12 00:23:18.453432 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:downloadContent Jul 12 00:23:18.454457 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:runDocument Jul 12 00:23:18.454692 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 12 00:23:18.454825 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:configureDocker Jul 12 00:23:18.455447 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform independent plugin aws:runDockerAction Jul 12 00:23:18.455707 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Successfully loaded platform dependent plugin aws:runShellScript Jul 12 00:23:18.455861 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 12 00:23:18.455994 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO OS: linux, Arch: arm64 Jul 12 00:23:18.470476 amazon-ssm-agent[1730]: datastore file /var/lib/amazon/ssm/i-050364d78c157fce5/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Jul 12 00:23:18.558794 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] Starting document processing engine... Jul 12 00:23:18.657802 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [EngineProcessor] Starting Jul 12 00:23:18.752234 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Jul 12 00:23:18.846673 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] Starting message polling Jul 12 00:23:18.941408 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] Starting send replies to MDS Jul 12 00:23:19.036438 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [instanceID=i-050364d78c157fce5] Starting association polling Jul 12 00:23:19.131504 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 12 00:23:19.170030 tar[1758]: linux-arm64/LICENSE Jul 12 00:23:19.170806 tar[1758]: linux-arm64/README.md Jul 12 00:23:19.187377 systemd[1]: Finished prepare-helm.service. Jul 12 00:23:19.227474 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 12 00:23:19.323646 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 12 00:23:19.391436 locksmithd[1814]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:23:19.419349 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 12 00:23:19.515292 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 12 00:23:19.611457 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:23:19.707727 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] Starting session document processing engine... Jul 12 00:23:19.804232 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 12 00:23:19.901052 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Jul 12 00:23:19.997876 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-050364d78c157fce5, requestId: a79a804b-13a3-4c80-a0a2-45e32a7695e6 Jul 12 00:23:20.094996 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [OfflineService] Starting document processing engine... Jul 12 00:23:20.192373 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [OfflineService] [EngineProcessor] Starting Jul 12 00:23:20.238463 systemd[1]: Started kubelet.service. 
Jul 12 00:23:20.289832 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [OfflineService] [EngineProcessor] Initial processing Jul 12 00:23:20.387569 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [OfflineService] Starting message polling Jul 12 00:23:20.485472 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [OfflineService] Starting send replies to MDS Jul 12 00:23:20.583463 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 12 00:23:20.681834 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 12 00:23:20.780423 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] listening reply. Jul 12 00:23:20.879006 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [StartupProcessor] Executing startup processor tasks Jul 12 00:23:20.977866 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 12 00:23:21.076921 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 12 00:23:21.176172 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Jul 12 00:23:21.261011 sshd_keygen[1776]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:23:21.275737 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 12 00:23:21.292699 kubelet[1959]: E0712 00:23:21.292624 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:23:21.296930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:23:21.297329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:23:21.306460 systemd[1]: Finished sshd-keygen.service. Jul 12 00:23:21.312826 systemd[1]: Starting issuegen.service... Jul 12 00:23:21.327484 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:23:21.328098 systemd[1]: Finished issuegen.service. Jul 12 00:23:21.333476 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:23:21.349495 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:23:21.354754 systemd[1]: Started getty@tty1.service. Jul 12 00:23:21.360955 systemd[1]: Started serial-getty@ttyS0.service. Jul 12 00:23:21.363699 systemd[1]: Reached target getty.target. Jul 12 00:23:21.365732 systemd[1]: Reached target multi-user.target. Jul 12 00:23:21.371158 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:23:21.376698 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-050364d78c157fce5?role=subscribe&stream=input Jul 12 00:23:21.390727 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:23:21.391496 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:23:21.401617 systemd[1]: Startup finished in 10.180s (kernel) + 12.827s (userspace) = 23.008s. 
Jul 12 00:23:21.476676 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-050364d78c157fce5?role=subscribe&stream=input Jul 12 00:23:21.576733 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] Starting receiving message from control channel Jul 12 00:23:21.677054 amazon-ssm-agent[1730]: 2025-07-12 00:23:18 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 12 00:23:24.967987 systemd[1]: Created slice system-sshd.slice. Jul 12 00:23:24.970333 systemd[1]: Started sshd@0-172.31.16.189:22-147.75.109.163:59488.service. Jul 12 00:23:25.169556 sshd[1985]: Accepted publickey for core from 147.75.109.163 port 59488 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:23:25.176252 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:25.200486 systemd[1]: Created slice user-500.slice. Jul 12 00:23:25.202702 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:23:25.208658 systemd-logind[1747]: New session 1 of user core. Jul 12 00:23:25.223084 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:23:25.225668 systemd[1]: Starting user@500.service... Jul 12 00:23:25.240367 (systemd)[1990]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:25.427455 systemd[1990]: Queued start job for default target default.target. Jul 12 00:23:25.427939 systemd[1990]: Reached target paths.target. Jul 12 00:23:25.427979 systemd[1990]: Reached target sockets.target. Jul 12 00:23:25.428012 systemd[1990]: Reached target timers.target. Jul 12 00:23:25.428043 systemd[1990]: Reached target basic.target. Jul 12 00:23:25.428146 systemd[1990]: Reached target default.target. Jul 12 00:23:25.428209 systemd[1990]: Startup finished in 175ms. Jul 12 00:23:25.428825 systemd[1]: Started user@500.service. Jul 12 00:23:25.431248 systemd[1]: Started session-1.scope. Jul 12 00:23:25.574106 systemd[1]: Started sshd@1-172.31.16.189:22-147.75.109.163:59502.service. Jul 12 00:23:25.750079 sshd[1999]: Accepted publickey for core from 147.75.109.163 port 59502 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:23:25.753118 sshd[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:25.761288 systemd-logind[1747]: New session 2 of user core. Jul 12 00:23:25.762246 systemd[1]: Started session-2.scope. Jul 12 00:23:25.891226 sshd[1999]: pam_unix(sshd:session): session closed for user core Jul 12 00:23:25.896995 systemd-logind[1747]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:23:25.898458 systemd[1]: sshd@1-172.31.16.189:22-147.75.109.163:59502.service: Deactivated successfully. Jul 12 00:23:25.899945 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:23:25.902332 systemd-logind[1747]: Removed session 2. Jul 12 00:23:25.916315 systemd[1]: Started sshd@2-172.31.16.189:22-147.75.109.163:59506.service. Jul 12 00:23:26.088174 sshd[2006]: Accepted publickey for core from 147.75.109.163 port 59506 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:23:26.091161 sshd[2006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:26.098771 systemd-logind[1747]: New session 3 of user core. Jul 12 00:23:26.099646 systemd[1]: Started session-3.scope. 
Jul 12 00:23:26.220770 sshd[2006]: pam_unix(sshd:session): session closed for user core Jul 12 00:23:26.226514 systemd[1]: sshd@2-172.31.16.189:22-147.75.109.163:59506.service: Deactivated successfully. Jul 12 00:23:26.228954 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:23:26.230104 systemd-logind[1747]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:23:26.232065 systemd-logind[1747]: Removed session 3. Jul 12 00:23:26.246394 systemd[1]: Started sshd@3-172.31.16.189:22-147.75.109.163:43170.service. Jul 12 00:23:26.420412 sshd[2013]: Accepted publickey for core from 147.75.109.163 port 43170 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:23:26.423364 sshd[2013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:26.431795 systemd[1]: Started session-4.scope. Jul 12 00:23:26.432641 systemd-logind[1747]: New session 4 of user core. Jul 12 00:23:26.564470 sshd[2013]: pam_unix(sshd:session): session closed for user core Jul 12 00:23:26.569317 systemd[1]: sshd@3-172.31.16.189:22-147.75.109.163:43170.service: Deactivated successfully. Jul 12 00:23:26.570806 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:23:26.573237 systemd-logind[1747]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:23:26.576590 systemd-logind[1747]: Removed session 4. Jul 12 00:23:26.590297 systemd[1]: Started sshd@4-172.31.16.189:22-147.75.109.163:43172.service. Jul 12 00:23:26.764188 sshd[2020]: Accepted publickey for core from 147.75.109.163 port 43172 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:23:26.767181 sshd[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:23:26.775620 systemd-logind[1747]: New session 5 of user core. Jul 12 00:23:26.775625 systemd[1]: Started session-5.scope. Jul 12 00:23:26.901632 sudo[2024]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:23:26.902818 sudo[2024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:23:26.981764 systemd[1]: Starting docker.service... 
Jul 12 00:23:27.098991 env[2034]: time="2025-07-12T00:23:27.098904002Z" level=info msg="Starting up" Jul 12 00:23:27.102104 env[2034]: time="2025-07-12T00:23:27.102057802Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:23:27.102286 env[2034]: time="2025-07-12T00:23:27.102257015Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:23:27.102421 env[2034]: time="2025-07-12T00:23:27.102389915Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:23:27.102564 env[2034]: time="2025-07-12T00:23:27.102505629Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:23:27.107428 env[2034]: time="2025-07-12T00:23:27.107360820Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:23:27.107428 env[2034]: time="2025-07-12T00:23:27.107404520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:23:27.107686 env[2034]: time="2025-07-12T00:23:27.107443723Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:23:27.107686 env[2034]: time="2025-07-12T00:23:27.107469533Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:23:27.515730 env[2034]: time="2025-07-12T00:23:27.515655119Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 12 00:23:27.515730 env[2034]: time="2025-07-12T00:23:27.515702146Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 12 00:23:27.516055 env[2034]: time="2025-07-12T00:23:27.515955984Z" level=info msg="Loading containers: start." Jul 12 00:23:27.711595 kernel: Initializing XFRM netlink socket Jul 12 00:23:27.755906 env[2034]: time="2025-07-12T00:23:27.755857561Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 12 00:23:27.758860 (udev-worker)[2045]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:23:27.861916 systemd-networkd[1435]: docker0: Link UP Jul 12 00:23:27.883852 env[2034]: time="2025-07-12T00:23:27.883784286Z" level=info msg="Loading containers: done." Jul 12 00:23:27.919027 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1832659593-merged.mount: Deactivated successfully. Jul 12 00:23:27.929548 env[2034]: time="2025-07-12T00:23:27.929461274Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:23:27.930215 env[2034]: time="2025-07-12T00:23:27.930183131Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 12 00:23:27.930599 env[2034]: time="2025-07-12T00:23:27.930572624Z" level=info msg="Daemon has completed initialization" Jul 12 00:23:27.956091 systemd[1]: Started docker.service. Jul 12 00:23:27.974071 env[2034]: time="2025-07-12T00:23:27.974005397Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:23:29.120954 env[1764]: time="2025-07-12T00:23:29.120874673Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:23:29.693030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771438198.mount: Deactivated successfully. 
Jul 12 00:23:31.299355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:23:31.299772 systemd[1]: Stopped kubelet.service. Jul 12 00:23:31.303818 systemd[1]: Starting kubelet.service... Jul 12 00:23:31.567429 env[1764]: time="2025-07-12T00:23:31.567043434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:31.571972 env[1764]: time="2025-07-12T00:23:31.571866300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:31.583687 env[1764]: time="2025-07-12T00:23:31.582042531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:31.600408 env[1764]: time="2025-07-12T00:23:31.600326154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:23:31.602497 env[1764]: time="2025-07-12T00:23:31.602297494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:31.607122 env[1764]: time="2025-07-12T00:23:31.607067940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:23:31.699549 systemd[1]: Started kubelet.service. Jul 12 00:23:31.789819 kubelet[2165]: E0712 00:23:31.789752 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:23:31.796662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:23:31.797100 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:23:33.619338 env[1764]: time="2025-07-12T00:23:33.619281208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:33.621726 env[1764]: time="2025-07-12T00:23:33.621677951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:33.625072 env[1764]: time="2025-07-12T00:23:33.625009783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:33.628552 env[1764]: time="2025-07-12T00:23:33.628476233Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:33.630244 env[1764]: time="2025-07-12T00:23:33.630192681Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:23:33.631136 env[1764]: time="2025-07-12T00:23:33.631092538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:23:34.110676 amazon-ssm-agent[1730]: 2025-07-12 00:23:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 12 00:23:35.275551 env[1764]: time="2025-07-12T00:23:35.275467881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:35.278349 env[1764]: time="2025-07-12T00:23:35.278271419Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:35.282151 env[1764]: time="2025-07-12T00:23:35.282087119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:35.285745 env[1764]: time="2025-07-12T00:23:35.285679763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:35.287679 env[1764]: time="2025-07-12T00:23:35.287623484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:23:35.288665 env[1764]: time="2025-07-12T00:23:35.288618665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:23:36.616403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329264720.mount: Deactivated successfully. 
Jul 12 00:23:37.502238 env[1764]: time="2025-07-12T00:23:37.502165483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:37.513092 env[1764]: time="2025-07-12T00:23:37.513020452Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:37.516958 env[1764]: time="2025-07-12T00:23:37.516889955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:37.520977 env[1764]: time="2025-07-12T00:23:37.520907983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:37.522300 env[1764]: time="2025-07-12T00:23:37.522235725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:23:37.523331 env[1764]: time="2025-07-12T00:23:37.523262127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:23:38.041168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431740975.mount: Deactivated successfully. Jul 12 00:23:39.393134 env[1764]: time="2025-07-12T00:23:39.393061713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:39.396054 env[1764]: time="2025-07-12T00:23:39.395982423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:39.399766 env[1764]: time="2025-07-12T00:23:39.399707464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:39.403357 env[1764]: time="2025-07-12T00:23:39.403300380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:39.405235 env[1764]: time="2025-07-12T00:23:39.405149249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:23:39.406077 env[1764]: time="2025-07-12T00:23:39.406029338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:23:40.109710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925353669.mount: Deactivated successfully. 
Jul 12 00:23:40.402463 env[1764]: time="2025-07-12T00:23:40.402046824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:40.475429 env[1764]: time="2025-07-12T00:23:40.475329448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:40.516508 env[1764]: time="2025-07-12T00:23:40.516438573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:40.576374 env[1764]: time="2025-07-12T00:23:40.576319251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:40.577648 env[1764]: time="2025-07-12T00:23:40.577588481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:23:40.578621 env[1764]: time="2025-07-12T00:23:40.578570908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:23:41.139050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687615809.mount: Deactivated successfully. Jul 12 00:23:41.799277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:23:41.799662 systemd[1]: Stopped kubelet.service. Jul 12 00:23:41.802652 systemd[1]: Starting kubelet.service... Jul 12 00:23:42.141473 systemd[1]: Started kubelet.service. Jul 12 00:23:42.239762 kubelet[2178]: E0712 00:23:42.239689 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:23:42.243042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:23:42.243433 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:23:44.298548 env[1764]: time="2025-07-12T00:23:44.298460405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:44.301129 env[1764]: time="2025-07-12T00:23:44.301063699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:44.307191 env[1764]: time="2025-07-12T00:23:44.307106052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:44.314871 env[1764]: time="2025-07-12T00:23:44.314784270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:44.316934 env[1764]: time="2025-07-12T00:23:44.316866996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:23:48.452590 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 12 00:23:52.299271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:23:52.299641 systemd[1]: Stopped kubelet.service. Jul 12 00:23:52.302447 systemd[1]: Starting kubelet.service... Jul 12 00:23:52.628001 systemd[1]: Started kubelet.service. Jul 12 00:23:52.737275 kubelet[2214]: E0712 00:23:52.737213 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:23:52.740923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:23:52.741306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:23:53.252566 systemd[1]: Stopped kubelet.service. Jul 12 00:23:53.259645 systemd[1]: Starting kubelet.service... Jul 12 00:23:53.319392 systemd[1]: Reloading. Jul 12 00:23:53.511711 /usr/lib/systemd/system-generators/torcx-generator[2248]: time="2025-07-12T00:23:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:23:53.511776 /usr/lib/systemd/system-generators/torcx-generator[2248]: time="2025-07-12T00:23:53Z" level=info msg="torcx already run" Jul 12 00:23:53.742219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:23:53.742259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:23:53.785260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:23:53.991351 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:23:53.991637 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:23:53.992273 systemd[1]: Stopped kubelet.service. Jul 12 00:23:53.995900 systemd[1]: Starting kubelet.service... Jul 12 00:23:54.309817 systemd[1]: Started kubelet.service. Jul 12 00:23:54.407131 kubelet[2323]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:23:54.407760 kubelet[2323]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:23:54.407859 kubelet[2323]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:23:54.408303 kubelet[2323]: I0712 00:23:54.408246 2323 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:23:55.430909 kubelet[2323]: I0712 00:23:55.430814 2323 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:23:55.430909 kubelet[2323]: I0712 00:23:55.430886 2323 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:23:55.432390 kubelet[2323]: I0712 00:23:55.432316 2323 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:23:55.497497 kubelet[2323]: E0712 00:23:55.497432 2323 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.189:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:55.499862 kubelet[2323]: I0712 00:23:55.499800 2323 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:23:55.515741 kubelet[2323]: E0712 00:23:55.515634 2323 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:23:55.515741 kubelet[2323]: I0712 00:23:55.515727 2323 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:23:55.523743 kubelet[2323]: I0712 00:23:55.523696 2323 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:23:55.524744 kubelet[2323]: I0712 00:23:55.524702 2323 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:23:55.525059 kubelet[2323]: I0712 00:23:55.525000 2323 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:23:55.525344 kubelet[2323]: I0712 00:23:55.525062 2323 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:23:55.525502 kubelet[2323]: I0712 00:23:55.525489 2323 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:23:55.525600 kubelet[2323]: I0712 00:23:55.525511 2323 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:23:55.525901 kubelet[2323]: I0712 00:23:55.525870 2323 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:23:55.537251 kubelet[2323]: I0712 00:23:55.537175 2323 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:23:55.537251 kubelet[2323]: I0712 00:23:55.537250 2323 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:23:55.537494 kubelet[2323]: I0712 00:23:55.537295 2323 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:23:55.537494 kubelet[2323]: I0712 00:23:55.537465 2323 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:23:55.557476 kubelet[2323]: W0712 00:23:55.557387 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:55.557807 kubelet[2323]: E0712 00:23:55.557771 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:55.558250 kubelet[2323]: I0712 00:23:55.558208 2323 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:23:55.559740 kubelet[2323]: I0712 00:23:55.559699 2323 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:23:55.560290 kubelet[2323]: W0712 00:23:55.560251 2323 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:23:55.564800 kubelet[2323]: I0712 00:23:55.564756 2323 server.go:1274] "Started kubelet" Jul 12 00:23:55.577097 kubelet[2323]: I0712 00:23:55.577025 2323 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:23:55.581067 kubelet[2323]: E0712 00:23:55.576958 2323 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.189:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.189:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-189.18515939f9ea1a90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-189,UID:ip-172-31-16-189,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-189,},FirstTimestamp:2025-07-12 00:23:55.564710544 +0000 UTC m=+1.234395815,LastTimestamp:2025-07-12 00:23:55.564710544 +0000 UTC m=+1.234395815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-189,}" Jul 12 00:23:55.581703 kubelet[2323]: I0712 00:23:55.581585 2323 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:23:55.582486 kubelet[2323]: I0712 00:23:55.582451 2323 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:23:55.584651 kubelet[2323]: W0712 00:23:55.584493 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:55.584828 kubelet[2323]: E0712 00:23:55.584669 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:55.589459 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 12 00:23:55.589887 kubelet[2323]: I0712 00:23:55.589857 2323 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:23:55.597834 kubelet[2323]: I0712 00:23:55.597759 2323 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:23:55.601185 kubelet[2323]: I0712 00:23:55.601144 2323 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:23:55.602015 kubelet[2323]: E0712 00:23:55.601963 2323 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-189\" not found" Jul 12 00:23:55.602767 kubelet[2323]: I0712 00:23:55.602739 2323 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:23:55.603022 kubelet[2323]: I0712 00:23:55.602998 2323 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:23:55.604383 kubelet[2323]: I0712 00:23:55.604347 2323 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:23:55.606869 kubelet[2323]: W0712 00:23:55.606792 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:55.607146 kubelet[2323]: E0712 00:23:55.607096 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:55.607389 kubelet[2323]: E0712 00:23:55.607349 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="200ms" Jul 12 00:23:55.609967 kubelet[2323]: E0712 00:23:55.609923 2323 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:23:55.612587 kubelet[2323]: I0712 00:23:55.612501 2323 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:23:55.612587 kubelet[2323]: I0712 00:23:55.612580 2323 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:23:55.612833 kubelet[2323]: I0712 00:23:55.612722 2323 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:23:55.648682 kubelet[2323]: I0712 00:23:55.648617 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:23:55.650851 kubelet[2323]: I0712 00:23:55.650809 2323 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:23:55.653681 kubelet[2323]: I0712 00:23:55.653623 2323 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:23:55.653681 kubelet[2323]: I0712 00:23:55.653701 2323 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:23:55.653934 kubelet[2323]: E0712 00:23:55.653785 2323 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:23:55.658714 kubelet[2323]: W0712 00:23:55.658571 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:55.661036 kubelet[2323]: E0712 00:23:55.660954 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:55.672590 kubelet[2323]: I0712 00:23:55.672507 2323 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:23:55.672590 kubelet[2323]: I0712 00:23:55.672575 2323 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:23:55.672814 kubelet[2323]: I0712 00:23:55.672618 2323 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:23:55.675288 kubelet[2323]: I0712 00:23:55.675230 2323 policy_none.go:49] "None policy: Start" Jul 12 00:23:55.676749 kubelet[2323]: I0712 00:23:55.676702 2323 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:23:55.676921 kubelet[2323]: I0712 00:23:55.676757 2323 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:23:55.692374 kubelet[2323]: I0712 00:23:55.692228 2323 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:23:55.695099 kubelet[2323]: I0712 00:23:55.695043 2323 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:23:55.695233 kubelet[2323]: I0712 00:23:55.695087 2323 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:23:55.697053 kubelet[2323]: I0712 00:23:55.696997 2323 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:23:55.701902 kubelet[2323]: E0712 00:23:55.701834 2323 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-189\" not found" Jul 12 00:23:55.797275 kubelet[2323]: I0712 00:23:55.797230 2323 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:23:55.798056 kubelet[2323]: E0712 00:23:55.797999 2323 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Jul 12 00:23:55.803913 kubelet[2323]: I0712 00:23:55.803834 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-ca-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" 
Jul 12 00:23:55.808377 kubelet[2323]: E0712 00:23:55.808315 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="400ms" Jul 12 00:23:55.904685 kubelet[2323]: I0712 00:23:55.904631 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:23:55.904890 kubelet[2323]: I0712 00:23:55.904713 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:23:55.904890 kubelet[2323]: I0712 00:23:55.904757 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:23:55.904890 kubelet[2323]: I0712 00:23:55.904795 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:23:55.904890 kubelet[2323]: I0712 00:23:55.904834 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:23:55.904890 kubelet[2323]: I0712 00:23:55.904870 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db46d660c23fbfd3dadceb4dfd3a3e7c-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-189\" (UID: \"db46d660c23fbfd3dadceb4dfd3a3e7c\") " pod="kube-system/kube-scheduler-ip-172-31-16-189" Jul 12 00:23:55.905178 kubelet[2323]: I0712 00:23:55.904934 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:23:55.905178 kubelet[2323]: I0712 00:23:55.904971 2323 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:23:56.001917 kubelet[2323]: I0712 00:23:56.000869 2323 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:23:56.002363 kubelet[2323]: E0712 00:23:56.002291 2323 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Jul 12 00:23:56.070838 env[1764]: time="2025-07-12T00:23:56.070295051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-189,Uid:8c84f16ec0e1565414cbe9ba24dc54f7,Namespace:kube-system,Attempt:0,}" Jul 12 00:23:56.077655 env[1764]: time="2025-07-12T00:23:56.077570565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-189,Uid:9ddd8110344ca538e8333349eb061327,Namespace:kube-system,Attempt:0,}" Jul 12 00:23:56.084193 env[1764]: time="2025-07-12T00:23:56.084142670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-189,Uid:db46d660c23fbfd3dadceb4dfd3a3e7c,Namespace:kube-system,Attempt:0,}" Jul 12 00:23:56.209193 kubelet[2323]: E0712 00:23:56.209108 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="800ms" Jul 12 00:23:56.404869 kubelet[2323]: I0712 00:23:56.404811 2323 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:23:56.405400 kubelet[2323]: E0712 00:23:56.405344 2323 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Jul 12 00:23:56.516155 kubelet[2323]: W0712 00:23:56.515973 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:56.516155 kubelet[2323]: E0712 00:23:56.516083 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:56.574856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847408431.mount: Deactivated successfully. 
Jul 12 00:23:56.583492 env[1764]: time="2025-07-12T00:23:56.583379885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.592261 env[1764]: time="2025-07-12T00:23:56.592196269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.594631 env[1764]: time="2025-07-12T00:23:56.594514035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.597026 env[1764]: time="2025-07-12T00:23:56.596940642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.603469 env[1764]: time="2025-07-12T00:23:56.603376607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.607283 env[1764]: time="2025-07-12T00:23:56.607227683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.612875 env[1764]: time="2025-07-12T00:23:56.612812722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.618368 env[1764]: time="2025-07-12T00:23:56.618282716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.621258 env[1764]: time="2025-07-12T00:23:56.621176871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.623118 env[1764]: time="2025-07-12T00:23:56.622960322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.633954 env[1764]: time="2025-07-12T00:23:56.633867946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.640793 env[1764]: time="2025-07-12T00:23:56.640694633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:23:56.677284 env[1764]: time="2025-07-12T00:23:56.675840190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:23:56.677755 env[1764]: time="2025-07-12T00:23:56.677601261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:23:56.677755 env[1764]: time="2025-07-12T00:23:56.677655525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:23:56.678557 env[1764]: time="2025-07-12T00:23:56.678431150Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6456d91e9a9c7bc51babf9975b5eae70e6a86b06e7cb0549ae6d88635d0dd7c pid=2364 runtime=io.containerd.runc.v2 Jul 12 00:23:56.747920 env[1764]: time="2025-07-12T00:23:56.737013994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:23:56.747920 env[1764]: time="2025-07-12T00:23:56.737118371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:23:56.747920 env[1764]: time="2025-07-12T00:23:56.737146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:23:56.747920 env[1764]: time="2025-07-12T00:23:56.737648666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c7eddb53c49f69f4a4d4a58c3161302ac48faf9c456bd4ca35c426cdac20070 pid=2395 runtime=io.containerd.runc.v2 Jul 12 00:23:56.758886 kubelet[2323]: W0712 00:23:56.758658 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:56.758886 kubelet[2323]: E0712 00:23:56.758794 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:56.763933 env[1764]: time="2025-07-12T00:23:56.763607613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:23:56.764410 env[1764]: time="2025-07-12T00:23:56.764315318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:23:56.764749 env[1764]: time="2025-07-12T00:23:56.764647060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:23:56.765616 env[1764]: time="2025-07-12T00:23:56.765444465Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/096be13b5be8a27fab8500505e26a8000b73ab3e31f6491e20d744b034f73f9a pid=2404 runtime=io.containerd.runc.v2 Jul 12 00:23:56.871504 env[1764]: time="2025-07-12T00:23:56.871448726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-189,Uid:8c84f16ec0e1565414cbe9ba24dc54f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6456d91e9a9c7bc51babf9975b5eae70e6a86b06e7cb0549ae6d88635d0dd7c\"" Jul 12 00:23:56.878472 env[1764]: time="2025-07-12T00:23:56.878416258Z" level=info msg="CreateContainer within sandbox \"c6456d91e9a9c7bc51babf9975b5eae70e6a86b06e7cb0549ae6d88635d0dd7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:23:56.888321 kubelet[2323]: W0712 00:23:56.888175 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:56.888321 kubelet[2323]: E0712 00:23:56.888269 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:56.938430 env[1764]: time="2025-07-12T00:23:56.936875153Z" level=info msg="CreateContainer within sandbox \"c6456d91e9a9c7bc51babf9975b5eae70e6a86b06e7cb0549ae6d88635d0dd7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57226c3414b70f3e6642204f966a4ae27f746834bed2bf11732332ef49beb24a\"" Jul 12 00:23:56.939682 env[1764]: time="2025-07-12T00:23:56.939592990Z" level=info msg="StartContainer for \"57226c3414b70f3e6642204f966a4ae27f746834bed2bf11732332ef49beb24a\"" Jul 12 00:23:56.949211 env[1764]: time="2025-07-12T00:23:56.949119778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-189,Uid:db46d660c23fbfd3dadceb4dfd3a3e7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"096be13b5be8a27fab8500505e26a8000b73ab3e31f6491e20d744b034f73f9a\"" Jul 12 00:23:56.954736 env[1764]: time="2025-07-12T00:23:56.954632973Z" level=info msg="CreateContainer within sandbox \"096be13b5be8a27fab8500505e26a8000b73ab3e31f6491e20d744b034f73f9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:23:56.967210 env[1764]: time="2025-07-12T00:23:56.967127031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-189,Uid:9ddd8110344ca538e8333349eb061327,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c7eddb53c49f69f4a4d4a58c3161302ac48faf9c456bd4ca35c426cdac20070\"" Jul 12 00:23:56.972606 env[1764]: time="2025-07-12T00:23:56.972549013Z" level=info msg="CreateContainer within sandbox \"3c7eddb53c49f69f4a4d4a58c3161302ac48faf9c456bd4ca35c426cdac20070\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:23:57.003571 env[1764]: time="2025-07-12T00:23:56.999296257Z" level=info msg="CreateContainer within sandbox \"096be13b5be8a27fab8500505e26a8000b73ab3e31f6491e20d744b034f73f9a\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"997332b5193a3cfe13de12dd3a9ae82c79c973a98bf7e0af07c874d22474fc7a\"" Jul 12 00:23:57.003571 env[1764]: time="2025-07-12T00:23:57.000321640Z" level=info msg="StartContainer for \"997332b5193a3cfe13de12dd3a9ae82c79c973a98bf7e0af07c874d22474fc7a\"" Jul 12 00:23:57.012393 kubelet[2323]: E0712 00:23:57.009908 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="1.6s" Jul 12 00:23:57.023559 env[1764]: time="2025-07-12T00:23:57.022966411Z" level=info msg="CreateContainer within sandbox \"3c7eddb53c49f69f4a4d4a58c3161302ac48faf9c456bd4ca35c426cdac20070\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb2f014e301a59c8cdb984ba3adaf69055414c78fcfc422e5cd3330d626ea1e9\"" Jul 12 00:23:57.023932 env[1764]: time="2025-07-12T00:23:57.023869608Z" level=info msg="StartContainer for \"eb2f014e301a59c8cdb984ba3adaf69055414c78fcfc422e5cd3330d626ea1e9\"" Jul 12 00:23:57.078145 kubelet[2323]: W0712 00:23:57.078011 2323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.189:6443: connect: connection refused Jul 12 00:23:57.078299 kubelet[2323]: E0712 00:23:57.078176 2323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:23:57.130214 env[1764]: time="2025-07-12T00:23:57.130009328Z" level=info msg="StartContainer for \"57226c3414b70f3e6642204f966a4ae27f746834bed2bf11732332ef49beb24a\" returns successfully" Jul 12 00:23:57.208749 kubelet[2323]: I0712 00:23:57.208613 2323 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:23:57.209918 kubelet[2323]: E0712 00:23:57.209307 2323 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Jul 12 00:23:57.279485 env[1764]: time="2025-07-12T00:23:57.279391797Z" level=info msg="StartContainer for \"eb2f014e301a59c8cdb984ba3adaf69055414c78fcfc422e5cd3330d626ea1e9\" returns successfully" Jul 12 00:23:57.312893 env[1764]: time="2025-07-12T00:23:57.312830056Z" level=info msg="StartContainer for \"997332b5193a3cfe13de12dd3a9ae82c79c973a98bf7e0af07c874d22474fc7a\" returns successfully" Jul 12 00:23:58.812105 kubelet[2323]: I0712 00:23:58.812058 2323 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:24:02.163499 kubelet[2323]: E0712 00:24:02.163446 2323 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Jul 12 00:24:02.208564 kubelet[2323]: I0712 00:24:02.208494 2323 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-189" Jul 12 00:24:02.208794 kubelet[2323]: E0712 00:24:02.208764 2323 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node 
\"ip-172-31-16-189\": node \"ip-172-31-16-189\" not found" Jul 12 00:24:02.407732 update_engine[1748]: I0712 00:24:02.407657 1748 update_attempter.cc:509] Updating boot flags... Jul 12 00:24:02.575603 kubelet[2323]: I0712 00:24:02.571984 2323 apiserver.go:52] "Watching apiserver" Jul 12 00:24:02.603395 kubelet[2323]: I0712 00:24:02.603306 2323 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:24:04.139039 amazon-ssm-agent[1730]: 2025-07-12 00:24:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 12 00:24:04.321016 systemd[1]: Reloading. Jul 12 00:24:04.511999 /usr/lib/systemd/system-generators/torcx-generator[2804]: time="2025-07-12T00:24:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:24:04.514305 /usr/lib/systemd/system-generators/torcx-generator[2804]: time="2025-07-12T00:24:04Z" level=info msg="torcx already run" Jul 12 00:24:04.698630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:24:04.698670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:24:04.746020 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:24:04.976084 systemd[1]: Stopping kubelet.service... Jul 12 00:24:05.008384 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:24:05.009037 systemd[1]: Stopped kubelet.service. Jul 12 00:24:05.014972 systemd[1]: Starting kubelet.service... Jul 12 00:24:05.488030 systemd[1]: Started kubelet.service. Jul 12 00:24:05.683171 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:24:05.684029 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:24:05.684163 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:24:05.684452 kubelet[2864]: I0712 00:24:05.684387 2864 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:24:05.696896 kubelet[2864]: I0712 00:24:05.696809 2864 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:24:05.696896 kubelet[2864]: I0712 00:24:05.696873 2864 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:24:05.697434 kubelet[2864]: I0712 00:24:05.697370 2864 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:24:05.700321 kubelet[2864]: I0712 00:24:05.700256 2864 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:24:05.704429 kubelet[2864]: I0712 00:24:05.704340 2864 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:24:05.712883 kubelet[2864]: E0712 00:24:05.712796 2864 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:24:05.712883 kubelet[2864]: I0712 00:24:05.712869 2864 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:24:05.718698 kubelet[2864]: I0712 00:24:05.718636 2864 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:24:05.719736 kubelet[2864]: I0712 00:24:05.719680 2864 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:24:05.720028 kubelet[2864]: I0712 00:24:05.719949 2864 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:24:05.721512 kubelet[2864]: I0712 00:24:05.721033 2864 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-16-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:24:05.722860 kubelet[2864]: I0712 00:24:05.722786 2864 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:24:05.722860 kubelet[2864]: I0712 00:24:05.722856 2864 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:24:05.723079 kubelet[2864]: I0712 00:24:05.722941 2864 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:24:05.723179 kubelet[2864]: I0712 00:24:05.723149 2864 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:24:05.723251 kubelet[2864]: I0712 00:24:05.723187 2864 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:24:05.723251 kubelet[2864]: I0712 00:24:05.723242 2864 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:24:05.723380 kubelet[2864]: I0712 00:24:05.723278 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:24:05.740591 kubelet[2864]: I0712 00:24:05.731647 2864 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:24:05.740591 kubelet[2864]: I0712 00:24:05.732975 2864 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:24:05.740591 kubelet[2864]: I0712 00:24:05.733933 2864 server.go:1274] "Started kubelet" Jul 12 00:24:05.742078 kubelet[2864]: I0712 00:24:05.742021 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:24:05.743513 kubelet[2864]: I0712 00:24:05.743428 2864 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:24:05.744844 kubelet[2864]: I0712 00:24:05.744732 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:24:05.745738 kubelet[2864]: I0712 00:24:05.745680 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:24:05.746389 kubelet[2864]: 
I0712 00:24:05.746342 2864 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:24:05.746816 kubelet[2864]: E0712 00:24:05.746739 2864 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-189\" not found" Jul 12 00:24:05.748363 kubelet[2864]: I0712 00:24:05.748308 2864 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:24:05.748682 kubelet[2864]: I0712 00:24:05.748646 2864 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:24:05.751839 kubelet[2864]: I0712 00:24:05.751785 2864 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:24:05.753731 kubelet[2864]: I0712 00:24:05.753678 2864 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:24:05.774586 kubelet[2864]: I0712 00:24:05.759467 2864 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:24:05.774586 kubelet[2864]: I0712 00:24:05.759728 2864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:24:05.774586 kubelet[2864]: I0712 00:24:05.763900 2864 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:24:05.846699 kubelet[2864]: I0712 00:24:05.846362 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:24:05.856257 kubelet[2864]: I0712 00:24:05.855670 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:24:05.856257 kubelet[2864]: I0712 00:24:05.855730 2864 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:24:05.856257 kubelet[2864]: I0712 00:24:05.855764 2864 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:24:05.856257 kubelet[2864]: E0712 00:24:05.855851 2864 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:24:05.956320 kubelet[2864]: E0712 00:24:05.956276 2864 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.991985 2864 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.992598 2864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.992642 2864 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.992904 2864 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.992924 2864 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:24:05.993596 kubelet[2864]: I0712 00:24:05.992959 2864 policy_none.go:49] "None policy: Start" Jul 12 00:24:05.995240 kubelet[2864]: I0712 00:24:05.995208 2864 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:24:05.995477 kubelet[2864]: I0712 00:24:05.995455 2864 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:24:05.995889 kubelet[2864]: I0712 00:24:05.995865 2864 state_mem.go:75] "Updated machine memory state" Jul 12 00:24:05.999332 kubelet[2864]: I0712 00:24:05.999290 2864 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint 
is not found" Jul 12 00:24:05.999845 kubelet[2864]: I0712 00:24:05.999816 2864 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:24:06.000074 kubelet[2864]: I0712 00:24:06.000014 2864 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:24:06.004022 kubelet[2864]: I0712 00:24:06.003265 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:24:06.133516 kubelet[2864]: I0712 00:24:06.131276 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-189" Jul 12 00:24:06.153805 kubelet[2864]: I0712 00:24:06.153716 2864 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-189" Jul 12 00:24:06.153964 kubelet[2864]: I0712 00:24:06.153933 2864 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-189" Jul 12 00:24:06.177567 kubelet[2864]: E0712 00:24:06.176208 2864 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-189\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.186889 kubelet[2864]: E0712 00:24:06.186792 2864 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-189\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-189" Jul 12 00:24:06.188275 kubelet[2864]: E0712 00:24:06.188207 2864 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-189\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:24:06.261353 kubelet[2864]: I0712 00:24:06.261165 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-ca-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:24:06.261813 kubelet[2864]: I0712 00:24:06.261745 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:24:06.262088 kubelet[2864]: I0712 00:24:06.262027 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c84f16ec0e1565414cbe9ba24dc54f7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"8c84f16ec0e1565414cbe9ba24dc54f7\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Jul 12 00:24:06.262587 kubelet[2864]: I0712 00:24:06.262499 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.262867 kubelet[2864]: I0712 00:24:06.262810 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: 
\"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.263755 kubelet[2864]: I0712 00:24:06.263689 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.264088 kubelet[2864]: I0712 00:24:06.264024 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.264361 kubelet[2864]: I0712 00:24:06.264305 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ddd8110344ca538e8333349eb061327-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"9ddd8110344ca538e8333349eb061327\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Jul 12 00:24:06.265092 kubelet[2864]: I0712 00:24:06.265053 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db46d660c23fbfd3dadceb4dfd3a3e7c-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-189\" (UID: \"db46d660c23fbfd3dadceb4dfd3a3e7c\") " pod="kube-system/kube-scheduler-ip-172-31-16-189" Jul 12 00:24:06.746987 kubelet[2864]: I0712 00:24:06.746933 2864 apiserver.go:52] "Watching apiserver" Jul 12 00:24:06.849489 kubelet[2864]: I0712 00:24:06.849439 2864 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:24:06.988731 kubelet[2864]: I0712 00:24:06.988642 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-189" podStartSLOduration=2.988618679 podStartE2EDuration="2.988618679s" podCreationTimestamp="2025-07-12 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:06.971149865 +0000 UTC m=+1.456546849" watchObservedRunningTime="2025-07-12 00:24:06.988618679 +0000 UTC m=+1.474015651" Jul 12 00:24:07.005718 kubelet[2864]: I0712 00:24:07.005396 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-189" podStartSLOduration=4.005375313 podStartE2EDuration="4.005375313s" podCreationTimestamp="2025-07-12 00:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:06.99066399 +0000 UTC m=+1.476060986" watchObservedRunningTime="2025-07-12 00:24:07.005375313 +0000 UTC m=+1.490772273" Jul 12 00:24:07.006261 kubelet[2864]: I0712 00:24:07.006155 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-189" podStartSLOduration=3.006133764 podStartE2EDuration="3.006133764s" podCreationTimestamp="2025-07-12 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:07.004955492 +0000 UTC m=+1.490352488" watchObservedRunningTime="2025-07-12 00:24:07.006133764 +0000 UTC m=+1.491530724" Jul 12 00:24:07.799742 sudo[2024]: pam_unix(sudo:session): session closed for user root Jul 12 00:24:07.825982 sshd[2020]: pam_unix(sshd:session): session closed for user core Jul 12 00:24:07.831783 systemd[1]: sshd@4-172.31.16.189:22-147.75.109.163:43172.service: Deactivated successfully. Jul 12 00:24:07.834629 systemd-logind[1747]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:24:07.834694 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:24:07.838474 systemd-logind[1747]: Removed session 5. Jul 12 00:24:09.848579 kubelet[2864]: I0712 00:24:09.848520 2864 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:24:09.849889 env[1764]: time="2025-07-12T00:24:09.849838143Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:24:09.850807 kubelet[2864]: I0712 00:24:09.850777 2864 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:24:10.598232 kubelet[2864]: I0712 00:24:10.598153 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-flannel-cfg\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598422 kubelet[2864]: I0712 00:24:10.598260 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-xtables-lock\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598422 kubelet[2864]: I0712 00:24:10.598335 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a9310e9-8757-418f-90c3-6ecd0b7689ad-xtables-lock\") pod \"kube-proxy-n5c2c\" (UID: \"8a9310e9-8757-418f-90c3-6ecd0b7689ad\") " pod="kube-system/kube-proxy-n5c2c" Jul 12 00:24:10.598422 kubelet[2864]: I0712 00:24:10.598376 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a9310e9-8757-418f-90c3-6ecd0b7689ad-lib-modules\") pod \"kube-proxy-n5c2c\" (UID: \"8a9310e9-8757-418f-90c3-6ecd0b7689ad\") " pod="kube-system/kube-proxy-n5c2c" Jul 12 00:24:10.598669 kubelet[2864]: I0712 00:24:10.598444 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-cni-plugin\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598669 kubelet[2864]: I0712 00:24:10.598585 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a9310e9-8757-418f-90c3-6ecd0b7689ad-kube-proxy\") pod \"kube-proxy-n5c2c\" (UID: \"8a9310e9-8757-418f-90c3-6ecd0b7689ad\") " pod="kube-system/kube-proxy-n5c2c" Jul 12 
00:24:10.598669 kubelet[2864]: I0712 00:24:10.598652 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-run\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598834 kubelet[2864]: I0712 00:24:10.598695 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-cni\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598834 kubelet[2864]: I0712 00:24:10.598756 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntlr\" (UniqueName: \"kubernetes.io/projected/b983aa39-fbd9-4c7b-9bb5-07d4e81e6422-kube-api-access-hntlr\") pod \"kube-flannel-ds-46nwh\" (UID: \"b983aa39-fbd9-4c7b-9bb5-07d4e81e6422\") " pod="kube-flannel/kube-flannel-ds-46nwh" Jul 12 00:24:10.598834 kubelet[2864]: I0712 00:24:10.598823 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdbf\" (UniqueName: \"kubernetes.io/projected/8a9310e9-8757-418f-90c3-6ecd0b7689ad-kube-api-access-hzdbf\") pod \"kube-proxy-n5c2c\" (UID: \"8a9310e9-8757-418f-90c3-6ecd0b7689ad\") " pod="kube-system/kube-proxy-n5c2c" Jul 12 00:24:10.713660 kubelet[2864]: I0712 00:24:10.713608 2864 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:24:10.803497 env[1764]: time="2025-07-12T00:24:10.802794488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5c2c,Uid:8a9310e9-8757-418f-90c3-6ecd0b7689ad,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:10.832642 env[1764]: time="2025-07-12T00:24:10.832465349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:24:10.832847 env[1764]: time="2025-07-12T00:24:10.832698894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:24:10.832847 env[1764]: time="2025-07-12T00:24:10.832776414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:24:10.833311 env[1764]: time="2025-07-12T00:24:10.833231660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a08367e652793dd62b3b26fa65d148257c7ff5849b9e9148bbcf7b9dabe28c3 pid=2930 runtime=io.containerd.runc.v2 Jul 12 00:24:10.845386 env[1764]: time="2025-07-12T00:24:10.845311414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-46nwh,Uid:b983aa39-fbd9-4c7b-9bb5-07d4e81e6422,Namespace:kube-flannel,Attempt:0,}" Jul 12 00:24:10.895798 env[1764]: time="2025-07-12T00:24:10.895009343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:24:10.895798 env[1764]: time="2025-07-12T00:24:10.895084091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:24:10.895798 env[1764]: time="2025-07-12T00:24:10.895109723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:24:10.895798 env[1764]: time="2025-07-12T00:24:10.895362696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd pid=2957 runtime=io.containerd.runc.v2 Jul 12 00:24:10.988148 env[1764]: time="2025-07-12T00:24:10.988087913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5c2c,Uid:8a9310e9-8757-418f-90c3-6ecd0b7689ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a08367e652793dd62b3b26fa65d148257c7ff5849b9e9148bbcf7b9dabe28c3\"" Jul 12 00:24:10.997257 env[1764]: time="2025-07-12T00:24:10.997200502Z" level=info msg="CreateContainer within sandbox \"8a08367e652793dd62b3b26fa65d148257c7ff5849b9e9148bbcf7b9dabe28c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:24:11.038777 env[1764]: time="2025-07-12T00:24:11.038694876Z" level=info msg="CreateContainer within sandbox \"8a08367e652793dd62b3b26fa65d148257c7ff5849b9e9148bbcf7b9dabe28c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"090d63ad0ebfb993f89e4ba65649c2f76413197fbd9c60a1ee02322b9b5a17db\"" Jul 12 00:24:11.042738 env[1764]: time="2025-07-12T00:24:11.042651252Z" level=info msg="StartContainer for \"090d63ad0ebfb993f89e4ba65649c2f76413197fbd9c60a1ee02322b9b5a17db\"" Jul 12 00:24:11.057717 env[1764]: time="2025-07-12T00:24:11.057654345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-46nwh,Uid:b983aa39-fbd9-4c7b-9bb5-07d4e81e6422,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\"" Jul 12 00:24:11.066072 env[1764]: time="2025-07-12T00:24:11.063901624Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 12 00:24:11.167744 env[1764]: time="2025-07-12T00:24:11.167589318Z" level=info msg="StartContainer for \"090d63ad0ebfb993f89e4ba65649c2f76413197fbd9c60a1ee02322b9b5a17db\" returns successfully" Jul 12 00:24:11.969337 kubelet[2864]: I0712 00:24:11.969224 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n5c2c" podStartSLOduration=1.969198069 podStartE2EDuration="1.969198069s" podCreationTimestamp="2025-07-12 00:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:11.967768852 +0000 UTC m=+6.453165824" watchObservedRunningTime="2025-07-12 00:24:11.969198069 +0000 UTC m=+6.454595041" Jul 12 00:24:12.962052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655445993.mount: Deactivated successfully. 
Jul 12 00:24:13.057893 env[1764]: time="2025-07-12T00:24:13.057814946Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:13.062640 env[1764]: time="2025-07-12T00:24:13.062575899Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:13.067260 env[1764]: time="2025-07-12T00:24:13.067196524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:13.071583 env[1764]: time="2025-07-12T00:24:13.071487232Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:13.073146 env[1764]: time="2025-07-12T00:24:13.073083140Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jul 12 00:24:13.081199 env[1764]: time="2025-07-12T00:24:13.081097723Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 12 00:24:13.112874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914694696.mount: Deactivated successfully. Jul 12 00:24:13.131911 env[1764]: time="2025-07-12T00:24:13.131835856Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b24820fd8aa50d110079f112354e49342f468ed3eb4df9c55b5ee65b1f381452\"" Jul 12 00:24:13.136296 env[1764]: time="2025-07-12T00:24:13.136039432Z" level=info msg="StartContainer for \"b24820fd8aa50d110079f112354e49342f468ed3eb4df9c55b5ee65b1f381452\"" Jul 12 00:24:13.258625 env[1764]: time="2025-07-12T00:24:13.257775847Z" level=info msg="StartContainer for \"b24820fd8aa50d110079f112354e49342f468ed3eb4df9c55b5ee65b1f381452\" returns successfully" Jul 12 00:24:13.497327 env[1764]: time="2025-07-12T00:24:13.497259542Z" level=info msg="shim disconnected" id=b24820fd8aa50d110079f112354e49342f468ed3eb4df9c55b5ee65b1f381452 Jul 12 00:24:13.497774 env[1764]: time="2025-07-12T00:24:13.497728467Z" level=warning msg="cleaning up after shim disconnected" id=b24820fd8aa50d110079f112354e49342f468ed3eb4df9c55b5ee65b1f381452 namespace=k8s.io Jul 12 00:24:13.498108 env[1764]: time="2025-07-12T00:24:13.498066580Z" level=info msg="cleaning up dead shim" Jul 12 00:24:13.512819 env[1764]: time="2025-07-12T00:24:13.512275904Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3209 runtime=io.containerd.runc.v2\n" Jul 12 00:24:13.964751 env[1764]: time="2025-07-12T00:24:13.964620428Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 12 00:24:16.041740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788486549.mount: Deactivated successfully. 
Jul 12 00:24:17.569001 env[1764]: time="2025-07-12T00:24:17.568894113Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:17.574743 env[1764]: time="2025-07-12T00:24:17.574673879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:17.579381 env[1764]: time="2025-07-12T00:24:17.579293470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:17.584115 env[1764]: time="2025-07-12T00:24:17.584044870Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:24:17.586153 env[1764]: time="2025-07-12T00:24:17.586044374Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jul 12 00:24:17.593796 env[1764]: time="2025-07-12T00:24:17.593703729Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:24:17.624038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714423777.mount: Deactivated successfully. Jul 12 00:24:17.639287 env[1764]: time="2025-07-12T00:24:17.639213093Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0\"" Jul 12 00:24:17.641115 env[1764]: time="2025-07-12T00:24:17.641036402Z" level=info msg="StartContainer for \"839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0\"" Jul 12 00:24:17.782932 env[1764]: time="2025-07-12T00:24:17.782856832Z" level=info msg="StartContainer for \"839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0\" returns successfully" Jul 12 00:24:17.881092 kubelet[2864]: I0712 00:24:17.880113 2864 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:24:17.950833 kubelet[2864]: I0712 00:24:17.950212 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bkl\" (UniqueName: \"kubernetes.io/projected/45574b5f-0984-4431-aa19-ede1adb472ba-kube-api-access-c7bkl\") pod \"coredns-7c65d6cfc9-gplr9\" (UID: \"45574b5f-0984-4431-aa19-ede1adb472ba\") " pod="kube-system/coredns-7c65d6cfc9-gplr9" Jul 12 00:24:17.950833 kubelet[2864]: I0712 00:24:17.950298 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8792252-c48e-4e1f-894f-4ea74715c493-config-volume\") pod \"coredns-7c65d6cfc9-c6lcv\" (UID: \"b8792252-c48e-4e1f-894f-4ea74715c493\") " pod="kube-system/coredns-7c65d6cfc9-c6lcv" Jul 12 00:24:17.950833 kubelet[2864]: I0712 00:24:17.950342 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjh64\" (UniqueName: 
\"kubernetes.io/projected/b8792252-c48e-4e1f-894f-4ea74715c493-kube-api-access-jjh64\") pod \"coredns-7c65d6cfc9-c6lcv\" (UID: \"b8792252-c48e-4e1f-894f-4ea74715c493\") " pod="kube-system/coredns-7c65d6cfc9-c6lcv" Jul 12 00:24:17.950833 kubelet[2864]: I0712 00:24:17.950393 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45574b5f-0984-4431-aa19-ede1adb472ba-config-volume\") pod \"coredns-7c65d6cfc9-gplr9\" (UID: \"45574b5f-0984-4431-aa19-ede1adb472ba\") " pod="kube-system/coredns-7c65d6cfc9-gplr9" Jul 12 00:24:18.010591 env[1764]: time="2025-07-12T00:24:18.010218722Z" level=info msg="shim disconnected" id=839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0 Jul 12 00:24:18.010591 env[1764]: time="2025-07-12T00:24:18.010323915Z" level=warning msg="cleaning up after shim disconnected" id=839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0 namespace=k8s.io Jul 12 00:24:18.010591 env[1764]: time="2025-07-12T00:24:18.010349391Z" level=info msg="cleaning up dead shim" Jul 12 00:24:18.036338 env[1764]: time="2025-07-12T00:24:18.035257896Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:24:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3266 runtime=io.containerd.runc.v2\n" Jul 12 00:24:18.265011 env[1764]: time="2025-07-12T00:24:18.264255503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6lcv,Uid:b8792252-c48e-4e1f-894f-4ea74715c493,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:18.271804 env[1764]: time="2025-07-12T00:24:18.271726289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gplr9,Uid:45574b5f-0984-4431-aa19-ede1adb472ba,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:18.353383 env[1764]: time="2025-07-12T00:24:18.353285592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gplr9,Uid:45574b5f-0984-4431-aa19-ede1adb472ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffcd12d91e5d51ebdebaef18518ac00741d349468d0b8aee79a4289d72956ffe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 12 00:24:18.354284 kubelet[2864]: E0712 00:24:18.354216 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcd12d91e5d51ebdebaef18518ac00741d349468d0b8aee79a4289d72956ffe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 12 00:24:18.354483 kubelet[2864]: E0712 00:24:18.354321 2864 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcd12d91e5d51ebdebaef18518ac00741d349468d0b8aee79a4289d72956ffe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-gplr9" Jul 12 00:24:18.354483 kubelet[2864]: E0712 00:24:18.354358 2864 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcd12d91e5d51ebdebaef18518ac00741d349468d0b8aee79a4289d72956ffe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-7c65d6cfc9-gplr9" Jul 12 00:24:18.354483 kubelet[2864]: E0712 00:24:18.354421 2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gplr9_kube-system(45574b5f-0984-4431-aa19-ede1adb472ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gplr9_kube-system(45574b5f-0984-4431-aa19-ede1adb472ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffcd12d91e5d51ebdebaef18518ac00741d349468d0b8aee79a4289d72956ffe\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-gplr9" podUID="45574b5f-0984-4431-aa19-ede1adb472ba" Jul 12 00:24:18.362099 env[1764]: time="2025-07-12T00:24:18.361991768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6lcv,Uid:b8792252-c48e-4e1f-894f-4ea74715c493,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5ce08343a60262cc1080d420553dcccd893933e21ba87816244f9d6ad6d2449\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 12 00:24:18.362579 kubelet[2864]: E0712 00:24:18.362484 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ce08343a60262cc1080d420553dcccd893933e21ba87816244f9d6ad6d2449\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 12 00:24:18.363280 kubelet[2864]: E0712 00:24:18.362797 2864 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ce08343a60262cc1080d420553dcccd893933e21ba87816244f9d6ad6d2449\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-c6lcv" Jul 12 00:24:18.363280 kubelet[2864]: E0712 00:24:18.362844 2864 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ce08343a60262cc1080d420553dcccd893933e21ba87816244f9d6ad6d2449\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-c6lcv" Jul 12 00:24:18.363280 kubelet[2864]: E0712 00:24:18.362908 2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-c6lcv_kube-system(b8792252-c48e-4e1f-894f-4ea74715c493)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-c6lcv_kube-system(b8792252-c48e-4e1f-894f-4ea74715c493)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5ce08343a60262cc1080d420553dcccd893933e21ba87816244f9d6ad6d2449\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-c6lcv" podUID="b8792252-c48e-4e1f-894f-4ea74715c493" Jul 12 00:24:18.616422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839b2975b6480e463dcf4eff973ff655774873aad8a261913c7fcf2786b177e0-rootfs.mount: Deactivated successfully. 
Jul 12 00:24:19.027819 env[1764]: time="2025-07-12T00:24:19.027311339Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 12 00:24:19.068438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591412464.mount: Deactivated successfully. Jul 12 00:24:19.083420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758559805.mount: Deactivated successfully. Jul 12 00:24:19.092549 env[1764]: time="2025-07-12T00:24:19.092410571Z" level=info msg="CreateContainer within sandbox \"0b3ad5ecf859609803f4923a2878aacd9e4514796daa5e03fbb1b6445ef616dd\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"590d5f5fc662523f7cc0fae2649133f8e31ae86ad188c63d0fbb3e4d32ea6984\"" Jul 12 00:24:19.094809 env[1764]: time="2025-07-12T00:24:19.094682620Z" level=info msg="StartContainer for \"590d5f5fc662523f7cc0fae2649133f8e31ae86ad188c63d0fbb3e4d32ea6984\"" Jul 12 00:24:19.217660 env[1764]: time="2025-07-12T00:24:19.215449125Z" level=info msg="StartContainer for \"590d5f5fc662523f7cc0fae2649133f8e31ae86ad188c63d0fbb3e4d32ea6984\" returns successfully" Jul 12 00:24:20.324975 (udev-worker)[3380]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:24:20.341378 systemd-networkd[1435]: flannel.1: Link UP Jul 12 00:24:20.341393 systemd-networkd[1435]: flannel.1: Gained carrier Jul 12 00:24:21.368834 systemd-networkd[1435]: flannel.1: Gained IPv6LL Jul 12 00:24:29.857589 env[1764]: time="2025-07-12T00:24:29.857493462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6lcv,Uid:b8792252-c48e-4e1f-894f-4ea74715c493,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:29.904768 systemd-networkd[1435]: cni0: Link UP Jul 12 00:24:29.904792 systemd-networkd[1435]: cni0: Gained carrier Jul 12 00:24:29.912441 (udev-worker)[3498]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:24:29.912901 systemd-networkd[1435]: cni0: Lost carrier Jul 12 00:24:29.920854 (udev-worker)[3502]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:24:29.923451 systemd-networkd[1435]: vethcb3f5cf5: Link UP Jul 12 00:24:29.929064 kernel: cni0: port 1(vethcb3f5cf5) entered blocking state Jul 12 00:24:29.929214 kernel: cni0: port 1(vethcb3f5cf5) entered disabled state Jul 12 00:24:29.931565 kernel: device vethcb3f5cf5 entered promiscuous mode Jul 12 00:24:29.936628 kernel: cni0: port 1(vethcb3f5cf5) entered blocking state Jul 12 00:24:29.936752 kernel: cni0: port 1(vethcb3f5cf5) entered forwarding state Jul 12 00:24:29.942573 kernel: cni0: port 1(vethcb3f5cf5) entered disabled state Jul 12 00:24:29.972371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethcb3f5cf5: link becomes ready Jul 12 00:24:29.972517 kernel: cni0: port 1(vethcb3f5cf5) entered blocking state Jul 12 00:24:29.972618 kernel: cni0: port 1(vethcb3f5cf5) entered forwarding state Jul 12 00:24:29.975803 systemd-networkd[1435]: vethcb3f5cf5: Gained carrier Jul 12 00:24:29.977598 systemd-networkd[1435]: cni0: Gained carrier Jul 12 00:24:29.983682 env[1764]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jul 12 00:24:29.983682 env[1764]: delegateAdd: netconf sent to delegate plugin: Jul 12 00:24:30.011139 env[1764]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-07-12T00:24:30.010820008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:24:30.011139 env[1764]: time="2025-07-12T00:24:30.010891084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:24:30.011139 env[1764]: time="2025-07-12T00:24:30.010917628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:24:30.012918 env[1764]: time="2025-07-12T00:24:30.011615525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6545e67e6e61d46b9fa6ea7ffa19dc8735eaf45e18376a0678681e41634f0b69 pid=3528 runtime=io.containerd.runc.v2 Jul 12 00:24:30.130236 env[1764]: time="2025-07-12T00:24:30.129187503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6lcv,Uid:b8792252-c48e-4e1f-894f-4ea74715c493,Namespace:kube-system,Attempt:0,} returns sandbox id \"6545e67e6e61d46b9fa6ea7ffa19dc8735eaf45e18376a0678681e41634f0b69\"" Jul 12 00:24:30.137160 env[1764]: time="2025-07-12T00:24:30.137091676Z" level=info msg="CreateContainer within sandbox \"6545e67e6e61d46b9fa6ea7ffa19dc8735eaf45e18376a0678681e41634f0b69\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:24:30.167586 env[1764]: time="2025-07-12T00:24:30.167379197Z" level=info msg="CreateContainer within sandbox \"6545e67e6e61d46b9fa6ea7ffa19dc8735eaf45e18376a0678681e41634f0b69\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78d52e57037dd625274a7412fbfddef2b1acd6c209a19840b7e3977c160250fa\"" Jul 12 00:24:30.171046 env[1764]: time="2025-07-12T00:24:30.169308236Z" level=info msg="StartContainer for \"78d52e57037dd625274a7412fbfddef2b1acd6c209a19840b7e3977c160250fa\"" Jul 12 00:24:30.289300 env[1764]: time="2025-07-12T00:24:30.289232435Z" level=info msg="StartContainer for \"78d52e57037dd625274a7412fbfddef2b1acd6c209a19840b7e3977c160250fa\" returns successfully" Jul 12 00:24:31.089381 kubelet[2864]: I0712 00:24:31.088080 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-46nwh" podStartSLOduration=14.558917016 podStartE2EDuration="21.088059589s" podCreationTimestamp="2025-07-12 00:24:10 +0000 UTC" firstStartedPulling="2025-07-12 00:24:11.059829604 +0000 UTC m=+5.545226576" lastFinishedPulling="2025-07-12 00:24:17.588972189 +0000 UTC m=+12.074369149" observedRunningTime="2025-07-12 00:24:20.044915878 +0000 UTC m=+14.530313030" watchObservedRunningTime="2025-07-12 00:24:31.088059589 +0000 UTC m=+25.573456537" Jul 12 00:24:31.089381 kubelet[2864]: I0712 00:24:31.088371 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c6lcv" podStartSLOduration=21.08835377 podStartE2EDuration="21.08835377s" podCreationTimestamp="2025-07-12 00:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:31.087863641 +0000 UTC m=+25.573260601" watchObservedRunningTime="2025-07-12 00:24:31.08835377 +0000 UTC m=+25.573750730" Jul 12 00:24:31.672783 systemd-networkd[1435]: cni0: Gained IPv6LL Jul 12 00:24:31.736773 systemd-networkd[1435]: vethcb3f5cf5: Gained IPv6LL Jul 12 00:24:32.857992 env[1764]: time="2025-07-12T00:24:32.857713017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gplr9,Uid:45574b5f-0984-4431-aa19-ede1adb472ba,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:32.908431 systemd-networkd[1435]: vethb5616681: Link UP Jul 12 00:24:32.914999 kernel: cni0: port 2(vethb5616681) entered blocking state Jul 12 00:24:32.915130 kernel: cni0: port 2(vethb5616681) entered disabled state Jul 12 00:24:32.917609 kernel: device vethb5616681 entered promiscuous mode Jul 12 00:24:32.937107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes 
ready Jul 12 00:24:32.937254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb5616681: link becomes ready Jul 12 00:24:32.937301 kernel: cni0: port 2(vethb5616681) entered blocking state Jul 12 00:24:32.937357 kernel: cni0: port 2(vethb5616681) entered forwarding state Jul 12 00:24:32.938962 systemd-networkd[1435]: vethb5616681: Gained carrier Jul 12 00:24:32.943713 env[1764]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jul 12 00:24:32.943713 env[1764]: delegateAdd: netconf sent to delegate plugin: Jul 12 00:24:32.971032 env[1764]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-07-12T00:24:32.970897505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:24:32.971032 env[1764]: time="2025-07-12T00:24:32.970979897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:24:32.971361 env[1764]: time="2025-07-12T00:24:32.971275133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:24:32.972195 env[1764]: time="2025-07-12T00:24:32.972064230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/040f186999872031660f7ae0d541f91528f6de6b839a187b0293932f1bdf52f1 pid=3666 runtime=io.containerd.runc.v2 Jul 12 00:24:33.097225 env[1764]: time="2025-07-12T00:24:33.097168189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gplr9,Uid:45574b5f-0984-4431-aa19-ede1adb472ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"040f186999872031660f7ae0d541f91528f6de6b839a187b0293932f1bdf52f1\"" Jul 12 00:24:33.105725 env[1764]: time="2025-07-12T00:24:33.105647918Z" level=info msg="CreateContainer within sandbox \"040f186999872031660f7ae0d541f91528f6de6b839a187b0293932f1bdf52f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:24:33.146617 env[1764]: time="2025-07-12T00:24:33.144948505Z" level=info msg="CreateContainer within sandbox \"040f186999872031660f7ae0d541f91528f6de6b839a187b0293932f1bdf52f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5398c168d1d3faa646dc47864789be2d85273c16fbd14172f9d3919b6621112\"" Jul 12 00:24:33.147105 env[1764]: time="2025-07-12T00:24:33.147048413Z" level=info msg="StartContainer for \"b5398c168d1d3faa646dc47864789be2d85273c16fbd14172f9d3919b6621112\"" Jul 12 00:24:33.261246 env[1764]: time="2025-07-12T00:24:33.261177994Z" level=info msg="StartContainer for \"b5398c168d1d3faa646dc47864789be2d85273c16fbd14172f9d3919b6621112\" returns successfully" Jul 12 00:24:33.883180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652098102.mount: Deactivated successfully. 
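The netconf that flannel hands to the delegate bridge plugin in the two delegateAdd entries above is the same JSON both times; reformatted here for readability, with the content unchanged from what is logged:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "mtu": 8951,
      "hairpinMode": true,
      "ipMasq": false,
      "isGateway": true,
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}]
      }
    }

This is the configuration that creates the cni0 bridge and the vethcb3f5cf5/vethb5616681 ports seen in the kernel messages, with host-local IPAM allocating pod addresses out of 192.168.0.0/24.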
Jul 12 00:24:34.095619 kubelet[2864]: I0712 00:24:34.095022 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gplr9" podStartSLOduration=24.09499646 podStartE2EDuration="24.09499646s" podCreationTimestamp="2025-07-12 00:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:24:34.094578511 +0000 UTC m=+28.579975483" watchObservedRunningTime="2025-07-12 00:24:34.09499646 +0000 UTC m=+28.580393420" Jul 12 00:24:34.552835 systemd-networkd[1435]: vethb5616681: Gained IPv6LL Jul 12 00:24:48.753944 systemd[1]: Started sshd@5-172.31.16.189:22-147.75.109.163:45264.service. Jul 12 00:24:48.936586 sshd[3809]: Accepted publickey for core from 147.75.109.163 port 45264 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:24:48.939366 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:24:48.949896 systemd[1]: Started session-6.scope. Jul 12 00:24:48.951334 systemd-logind[1747]: New session 6 of user core. Jul 12 00:24:49.236957 sshd[3809]: pam_unix(sshd:session): session closed for user core Jul 12 00:24:49.243300 systemd-logind[1747]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:24:49.243718 systemd[1]: sshd@5-172.31.16.189:22-147.75.109.163:45264.service: Deactivated successfully. Jul 12 00:24:49.250676 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:24:49.259128 systemd-logind[1747]: Removed session 6. Jul 12 00:24:54.262633 systemd[1]: Started sshd@6-172.31.16.189:22-147.75.109.163:45276.service. Jul 12 00:24:54.436453 sshd[3844]: Accepted publickey for core from 147.75.109.163 port 45276 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:24:54.439791 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:24:54.448677 systemd-logind[1747]: New session 7 of user core. Jul 12 00:24:54.449191 systemd[1]: Started session-7.scope. Jul 12 00:24:54.715090 sshd[3844]: pam_unix(sshd:session): session closed for user core Jul 12 00:24:54.720639 systemd-logind[1747]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:24:54.721240 systemd[1]: sshd@6-172.31.16.189:22-147.75.109.163:45276.service: Deactivated successfully. Jul 12 00:24:54.723708 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:24:54.725197 systemd-logind[1747]: Removed session 7. Jul 12 00:24:59.740158 systemd[1]: Started sshd@7-172.31.16.189:22-147.75.109.163:50290.service. Jul 12 00:24:59.917427 sshd[3878]: Accepted publickey for core from 147.75.109.163 port 50290 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:24:59.920294 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:24:59.930773 systemd-logind[1747]: New session 8 of user core. Jul 12 00:24:59.933249 systemd[1]: Started session-8.scope. Jul 12 00:25:00.198125 sshd[3878]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:00.204490 systemd-logind[1747]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:25:00.205015 systemd[1]: sshd@7-172.31.16.189:22-147.75.109.163:50290.service: Deactivated successfully. Jul 12 00:25:00.207501 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:25:00.210647 systemd-logind[1747]: Removed session 8. Jul 12 00:25:00.226327 systemd[1]: Started sshd@8-172.31.16.189:22-147.75.109.163:50302.service. 
Jul 12 00:25:00.407068 sshd[3892]: Accepted publickey for core from 147.75.109.163 port 50302 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:00.410717 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:00.421647 systemd[1]: Started session-9.scope.
Jul 12 00:25:00.422504 systemd-logind[1747]: New session 9 of user core.
Jul 12 00:25:00.806109 sshd[3892]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:00.812150 systemd[1]: sshd@8-172.31.16.189:22-147.75.109.163:50302.service: Deactivated successfully.
Jul 12 00:25:00.817399 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:25:00.819160 systemd-logind[1747]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:25:00.824411 systemd-logind[1747]: Removed session 9.
Jul 12 00:25:00.837425 systemd[1]: Started sshd@9-172.31.16.189:22-147.75.109.163:50318.service.
Jul 12 00:25:01.049737 sshd[3914]: Accepted publickey for core from 147.75.109.163 port 50318 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:01.051910 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:01.060664 systemd-logind[1747]: New session 10 of user core.
Jul 12 00:25:01.062705 systemd[1]: Started session-10.scope.
Jul 12 00:25:01.343982 sshd[3914]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:01.350945 systemd[1]: sshd@9-172.31.16.189:22-147.75.109.163:50318.service: Deactivated successfully.
Jul 12 00:25:01.352734 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:25:01.353930 systemd-logind[1747]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:25:01.357642 systemd-logind[1747]: Removed session 10.
Jul 12 00:25:06.371163 systemd[1]: Started sshd@10-172.31.16.189:22-147.75.109.163:37518.service.
Jul 12 00:25:06.550726 sshd[3961]: Accepted publickey for core from 147.75.109.163 port 37518 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:06.553734 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:06.564337 systemd[1]: Started session-11.scope.
Jul 12 00:25:06.564883 systemd-logind[1747]: New session 11 of user core.
Jul 12 00:25:06.833996 sshd[3961]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:06.840365 systemd-logind[1747]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:25:06.841974 systemd[1]: sshd@10-172.31.16.189:22-147.75.109.163:37518.service: Deactivated successfully.
Jul 12 00:25:06.844323 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:25:06.846065 systemd-logind[1747]: Removed session 11.
Jul 12 00:25:11.863599 systemd[1]: Started sshd@11-172.31.16.189:22-147.75.109.163:37530.service.
Jul 12 00:25:12.041831 sshd[3997]: Accepted publickey for core from 147.75.109.163 port 37530 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:12.044721 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:12.056356 systemd[1]: Started session-12.scope.
Jul 12 00:25:12.057318 systemd-logind[1747]: New session 12 of user core.
Jul 12 00:25:12.328380 sshd[3997]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:12.334255 systemd[1]: sshd@11-172.31.16.189:22-147.75.109.163:37530.service: Deactivated successfully.
Jul 12 00:25:12.337422 systemd-logind[1747]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:25:12.339263 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:25:12.343949 systemd-logind[1747]: Removed session 12.
Jul 12 00:25:12.354961 systemd[1]: Started sshd@12-172.31.16.189:22-147.75.109.163:37532.service.
Jul 12 00:25:12.532125 sshd[4010]: Accepted publickey for core from 147.75.109.163 port 37532 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:12.535662 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:12.545568 systemd-logind[1747]: New session 13 of user core.
Jul 12 00:25:12.547002 systemd[1]: Started session-13.scope.
Jul 12 00:25:12.897453 sshd[4010]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:12.903691 systemd-logind[1747]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:25:12.906077 systemd[1]: sshd@12-172.31.16.189:22-147.75.109.163:37532.service: Deactivated successfully.
Jul 12 00:25:12.908140 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:25:12.912182 systemd-logind[1747]: Removed session 13.
Jul 12 00:25:12.925724 systemd[1]: Started sshd@13-172.31.16.189:22-147.75.109.163:37546.service.
Jul 12 00:25:13.104818 sshd[4021]: Accepted publickey for core from 147.75.109.163 port 37546 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:13.107850 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:13.118936 systemd-logind[1747]: New session 14 of user core.
Jul 12 00:25:13.119729 systemd[1]: Started session-14.scope.
Jul 12 00:25:15.736035 sshd[4021]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:15.743018 systemd[1]: sshd@13-172.31.16.189:22-147.75.109.163:37546.service: Deactivated successfully.
Jul 12 00:25:15.746296 systemd-logind[1747]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:25:15.747472 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:25:15.753519 systemd-logind[1747]: Removed session 14.
Jul 12 00:25:15.766384 systemd[1]: Started sshd@14-172.31.16.189:22-147.75.109.163:37558.service.
Jul 12 00:25:15.957816 sshd[4045]: Accepted publickey for core from 147.75.109.163 port 37558 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:15.960739 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:15.968934 systemd-logind[1747]: New session 15 of user core.
Jul 12 00:25:15.970453 systemd[1]: Started session-15.scope.
Jul 12 00:25:16.517941 sshd[4045]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:16.523398 systemd[1]: sshd@14-172.31.16.189:22-147.75.109.163:37558.service: Deactivated successfully.
Jul 12 00:25:16.525915 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:25:16.525921 systemd-logind[1747]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:25:16.529571 systemd-logind[1747]: Removed session 15.
Jul 12 00:25:16.545665 systemd[1]: Started sshd@15-172.31.16.189:22-147.75.109.163:54706.service.
Jul 12 00:25:16.724694 sshd[4071]: Accepted publickey for core from 147.75.109.163 port 54706 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:16.728149 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:16.738069 systemd[1]: Started session-16.scope.
Jul 12 00:25:16.738880 systemd-logind[1747]: New session 16 of user core.
Jul 12 00:25:16.990285 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:16.996311 systemd[1]: sshd@15-172.31.16.189:22-147.75.109.163:54706.service: Deactivated successfully.
Jul 12 00:25:16.997921 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:25:16.999620 systemd-logind[1747]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:25:17.003012 systemd-logind[1747]: Removed session 16.
Jul 12 00:25:22.017872 systemd[1]: Started sshd@16-172.31.16.189:22-147.75.109.163:54722.service.
Jul 12 00:25:22.195741 sshd[4105]: Accepted publickey for core from 147.75.109.163 port 54722 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:22.196981 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:22.206023 systemd-logind[1747]: New session 17 of user core.
Jul 12 00:25:22.207146 systemd[1]: Started session-17.scope.
Jul 12 00:25:22.465183 sshd[4105]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:22.471639 systemd[1]: sshd@16-172.31.16.189:22-147.75.109.163:54722.service: Deactivated successfully.
Jul 12 00:25:22.473288 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:25:22.474046 systemd-logind[1747]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:25:22.479123 systemd-logind[1747]: Removed session 17.
Jul 12 00:25:27.491938 systemd[1]: Started sshd@17-172.31.16.189:22-147.75.109.163:51532.service.
Jul 12 00:25:27.667693 sshd[4142]: Accepted publickey for core from 147.75.109.163 port 51532 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:27.671360 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:27.681034 systemd-logind[1747]: New session 18 of user core.
Jul 12 00:25:27.682211 systemd[1]: Started session-18.scope.
Jul 12 00:25:27.947982 sshd[4142]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:27.954205 systemd-logind[1747]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:25:27.956236 systemd[1]: sshd@17-172.31.16.189:22-147.75.109.163:51532.service: Deactivated successfully.
Jul 12 00:25:27.959136 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:25:27.962341 systemd-logind[1747]: Removed session 18.
Jul 12 00:25:32.975122 systemd[1]: Started sshd@18-172.31.16.189:22-147.75.109.163:51544.service.
Jul 12 00:25:33.154408 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 51544 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:33.157272 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:33.168500 systemd-logind[1747]: New session 19 of user core.
Jul 12 00:25:33.168698 systemd[1]: Started session-19.scope.
Jul 12 00:25:33.436486 sshd[4176]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:33.446520 systemd[1]: sshd@18-172.31.16.189:22-147.75.109.163:51544.service: Deactivated successfully.
Jul 12 00:25:33.448041 systemd-logind[1747]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:25:33.450269 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:25:33.454339 systemd-logind[1747]: Removed session 19.
Jul 12 00:25:36.458428 amazon-ssm-agent[1730]: 2025-07-12 00:25:36 INFO [HealthCheck] HealthCheck reporting agent health.
Jul 12 00:25:38.461935 systemd[1]: Started sshd@19-172.31.16.189:22-147.75.109.163:46904.service.
Jul 12 00:25:38.640581 sshd[4211]: Accepted publickey for core from 147.75.109.163 port 46904 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:25:38.643108 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:25:38.652403 systemd-logind[1747]: New session 20 of user core.
Jul 12 00:25:38.653507 systemd[1]: Started session-20.scope.
Jul 12 00:25:38.911065 sshd[4211]: pam_unix(sshd:session): session closed for user core
Jul 12 00:25:38.916774 systemd-logind[1747]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:25:38.917343 systemd[1]: sshd@19-172.31.16.189:22-147.75.109.163:46904.service: Deactivated successfully.
Jul 12 00:25:38.919595 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:25:38.921229 systemd-logind[1747]: Removed session 20.
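Editor's note: every SSH connection in this log follows the same shape: systemd starts a per-connection unit named sshd@<n>-<localIP>:22-<remoteIP>:<port>.service, sshd accepts the core user's public key, pam_unix and systemd-logind open session-<n>.scope, and both units are deactivated when the session closes. A tiny Go sketch follows; the naming pattern is inferred only from the entries above, not from systemd documentation, so treat it as an assumption.

package main

import "fmt"

// sshdUnitName builds a unit name following the pattern seen in this log,
// e.g. sshd@5-172.31.16.189:22-147.75.109.163:45264.service.
func sshdUnitName(instance int, localIP string, localPort int, remoteIP string, remotePort int) string {
	return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service", instance, localIP, localPort, remoteIP, remotePort)
}

func main() {
	// Reconstructs the first per-connection unit seen above.
	fmt.Println(sshdUnitName(5, "172.31.16.189", 22, "147.75.109.163", 45264))
}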