Jan 23 17:30:54.228896 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 17:30:54.228945 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 23 15:38:20 -00 2026 Jan 23 17:30:54.228969 kernel: KASLR disabled due to lack of seed Jan 23 17:30:54.228986 kernel: efi: EFI v2.7 by EDK II Jan 23 17:30:54.229002 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598 Jan 23 17:30:54.229017 kernel: secureboot: Secure boot disabled Jan 23 17:30:54.229036 kernel: ACPI: Early table checksum verification disabled Jan 23 17:30:54.229051 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 17:30:54.229068 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 17:30:54.229088 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 17:30:54.229104 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 17:30:54.229120 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 23 17:30:54.229136 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 17:30:54.229151 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 17:30:54.229175 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 17:30:54.229192 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 17:30:54.229209 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 17:30:54.229226 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 17:30:54.229242 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 17:30:54.229259 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 17:30:54.229275 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 17:30:54.229292 kernel: printk: legacy bootconsole [uart0] enabled Jan 23 17:30:54.229309 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 17:30:54.229325 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 17:30:54.229346 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 23 17:30:54.229363 kernel: Zone ranges: Jan 23 17:30:54.229380 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 17:30:54.229396 kernel: DMA32 empty Jan 23 17:30:54.229412 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 17:30:54.229429 kernel: Device empty Jan 23 17:30:54.229445 kernel: Movable zone start for each node Jan 23 17:30:54.229461 kernel: Early memory node ranges Jan 23 17:30:54.229478 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 17:30:54.229495 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 17:30:54.229511 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 17:30:54.229527 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 17:30:54.229580 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 17:30:54.229601 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 17:30:54.229618 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 17:30:54.229635 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 17:30:54.229660 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 17:30:54.229683 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 17:30:54.229701 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 23 17:30:54.229718 kernel: psci: probing for conduit method from ACPI. Jan 23 17:30:54.229735 kernel: psci: PSCIv1.0 detected in firmware. Jan 23 17:30:54.229753 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 17:30:54.229770 kernel: psci: Trusted OS migration not required Jan 23 17:30:54.229787 kernel: psci: SMC Calling Convention v1.1 Jan 23 17:30:54.229806 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 17:30:54.229823 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 17:30:54.229845 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 17:30:54.229863 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 17:30:54.229881 kernel: Detected PIPT I-cache on CPU0 Jan 23 17:30:54.229898 kernel: CPU features: detected: GIC system register CPU interface Jan 23 17:30:54.229916 kernel: CPU features: detected: Spectre-v2 Jan 23 17:30:54.229933 kernel: CPU features: detected: Spectre-v3a Jan 23 17:30:54.229950 kernel: CPU features: detected: Spectre-BHB Jan 23 17:30:54.229968 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 17:30:54.229985 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 17:30:54.230003 kernel: alternatives: applying boot alternatives Jan 23 17:30:54.230023 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079 Jan 23 17:30:54.230046 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 17:30:54.230064 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 17:30:54.230081 kernel: Fallback order for Node 0: 0 Jan 23 17:30:54.230099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 23 17:30:54.230116 kernel: Policy zone: Normal Jan 23 17:30:54.230133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 17:30:54.230150 kernel: software IO TLB: area num 2. Jan 23 17:30:54.230168 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 23 17:30:54.230185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 17:30:54.230203 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 17:30:54.230226 kernel: rcu: RCU event tracing is enabled. Jan 23 17:30:54.230244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 17:30:54.230262 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 17:30:54.230280 kernel: Tracing variant of Tasks RCU enabled. Jan 23 17:30:54.230297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 17:30:54.230315 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 17:30:54.230332 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
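The "Kernel command line:" entry above carries Flatcar's verity-protected /usr setup: /usr is mounted read-only from /dev/mapper/usr, whose backing partition is selected by verity.usr=PARTUUID=... and whose expected root hash is verity.usrhash=..., while the writable root filesystem is found via root=LABEL=ROOT. As a small, hedged illustration (standard procfs path, nothing Flatcar-specific assumed), the same parameters can be re-read on the running system:

    # Print the boot parameters relevant to the /usr and root devices,
    # one per line (the same string dracut-cmdline repeats later in this log).
    tr ' ' '\n' < /proc/cmdline | grep -E '^(root=|mount\.usr|verity\.usr)'
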
Jan 23 17:30:54.230350 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 17:30:54.230368 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 17:30:54.230385 kernel: GICv3: 96 SPIs implemented Jan 23 17:30:54.230403 kernel: GICv3: 0 Extended SPIs implemented Jan 23 17:30:54.230425 kernel: Root IRQ handler: gic_handle_irq Jan 23 17:30:54.230442 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 17:30:54.230459 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 23 17:30:54.230477 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 17:30:54.230494 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 17:30:54.230512 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 17:30:54.230530 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 23 17:30:54.230564 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 23 17:30:54.230585 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 17:30:54.230603 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 23 17:30:54.230620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 17:30:54.230643 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 17:30:54.230661 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 17:30:54.230679 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 17:30:54.230697 kernel: Console: colour dummy device 80x25 Jan 23 17:30:54.230715 kernel: printk: legacy console [tty1] enabled Jan 23 17:30:54.230734 kernel: ACPI: Core revision 20240827 Jan 23 17:30:54.230753 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 23 17:30:54.230772 kernel: pid_max: default: 32768 minimum: 301 Jan 23 17:30:54.230802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 17:30:54.230821 kernel: landlock: Up and running. Jan 23 17:30:54.230839 kernel: SELinux: Initializing. Jan 23 17:30:54.230857 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:30:54.230876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:30:54.230894 kernel: rcu: Hierarchical SRCU implementation. Jan 23 17:30:54.230913 kernel: rcu: Max phase no-delay instances is 400. Jan 23 17:30:54.230932 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 17:30:54.230955 kernel: Remapping and enabling EFI services. Jan 23 17:30:54.230973 kernel: smp: Bringing up secondary CPUs ... Jan 23 17:30:54.230991 kernel: Detected PIPT I-cache on CPU1 Jan 23 17:30:54.231010 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 17:30:54.231028 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 23 17:30:54.231046 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 17:30:54.231064 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 17:30:54.231086 kernel: SMP: Total of 2 processors activated. 
Jan 23 17:30:54.231104 kernel: CPU: All CPU(s) started at EL1 Jan 23 17:30:54.231133 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 17:30:54.231156 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 17:30:54.231174 kernel: CPU features: detected: CRC32 instructions Jan 23 17:30:54.231193 kernel: alternatives: applying system-wide alternatives Jan 23 17:30:54.231212 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Jan 23 17:30:54.231232 kernel: devtmpfs: initialized Jan 23 17:30:54.231255 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 17:30:54.231274 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 17:30:54.231293 kernel: 23648 pages in range for non-PLT usage Jan 23 17:30:54.231311 kernel: 515168 pages in range for PLT usage Jan 23 17:30:54.231330 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 17:30:54.231353 kernel: SMBIOS 3.0.0 present. Jan 23 17:30:54.231371 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 17:30:54.231390 kernel: DMI: Memory slots populated: 0/0 Jan 23 17:30:54.231408 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 17:30:54.231447 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 17:30:54.231469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 17:30:54.231489 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 17:30:54.231513 kernel: audit: initializing netlink subsys (disabled) Jan 23 17:30:54.231532 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Jan 23 17:30:54.231576 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 17:30:54.231598 kernel: cpuidle: using governor menu Jan 23 17:30:54.231616 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 17:30:54.231635 kernel: ASID allocator initialised with 65536 entries Jan 23 17:30:54.231654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 17:30:54.231678 kernel: Serial: AMBA PL011 UART driver Jan 23 17:30:54.231697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 17:30:54.231716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 17:30:54.231735 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 17:30:54.231754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 17:30:54.231773 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 17:30:54.231792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 17:30:54.231814 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 17:30:54.231834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 17:30:54.231852 kernel: ACPI: Added _OSI(Module Device) Jan 23 17:30:54.231871 kernel: ACPI: Added _OSI(Processor Device) Jan 23 17:30:54.231890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 17:30:54.231908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 17:30:54.231927 kernel: ACPI: Interpreter enabled Jan 23 17:30:54.231950 kernel: ACPI: Using GIC for interrupt routing Jan 23 17:30:54.231969 kernel: ACPI: MCFG table detected, 1 entries Jan 23 17:30:54.231987 kernel: ACPI: CPU0 has been hot-added Jan 23 17:30:54.232006 kernel: ACPI: CPU1 has been hot-added Jan 23 17:30:54.232024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 17:30:54.232395 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 17:30:54.232692 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 17:30:54.232956 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 17:30:54.233209 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 17:30:54.233458 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 17:30:54.233484 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 17:30:54.233503 kernel: acpiphp: Slot [1] registered Jan 23 17:30:54.233522 kernel: acpiphp: Slot [2] registered Jan 23 17:30:54.233565 kernel: acpiphp: Slot [3] registered Jan 23 17:30:54.233588 kernel: acpiphp: Slot [4] registered Jan 23 17:30:54.233607 kernel: acpiphp: Slot [5] registered Jan 23 17:30:54.233626 kernel: acpiphp: Slot [6] registered Jan 23 17:30:54.233645 kernel: acpiphp: Slot [7] registered Jan 23 17:30:54.233664 kernel: acpiphp: Slot [8] registered Jan 23 17:30:54.233682 kernel: acpiphp: Slot [9] registered Jan 23 17:30:54.233701 kernel: acpiphp: Slot [10] registered Jan 23 17:30:54.233725 kernel: acpiphp: Slot [11] registered Jan 23 17:30:54.233744 kernel: acpiphp: Slot [12] registered Jan 23 17:30:54.233762 kernel: acpiphp: Slot [13] registered Jan 23 17:30:54.233781 kernel: acpiphp: Slot [14] registered Jan 23 17:30:54.233800 kernel: acpiphp: Slot [15] registered Jan 23 17:30:54.233818 kernel: acpiphp: Slot [16] registered Jan 23 17:30:54.233837 kernel: acpiphp: Slot [17] registered Jan 23 17:30:54.233860 kernel: acpiphp: Slot [18] registered Jan 23 17:30:54.233879 kernel: acpiphp: Slot [19] registered Jan 23 17:30:54.233897 kernel: acpiphp: Slot [20] registered Jan 23 17:30:54.233916 kernel: acpiphp: Slot [21] registered Jan 23 17:30:54.233935 
kernel: acpiphp: Slot [22] registered Jan 23 17:30:54.233953 kernel: acpiphp: Slot [23] registered Jan 23 17:30:54.233972 kernel: acpiphp: Slot [24] registered Jan 23 17:30:54.233995 kernel: acpiphp: Slot [25] registered Jan 23 17:30:54.234014 kernel: acpiphp: Slot [26] registered Jan 23 17:30:54.234032 kernel: acpiphp: Slot [27] registered Jan 23 17:30:54.234051 kernel: acpiphp: Slot [28] registered Jan 23 17:30:54.234070 kernel: acpiphp: Slot [29] registered Jan 23 17:30:54.234088 kernel: acpiphp: Slot [30] registered Jan 23 17:30:54.234106 kernel: acpiphp: Slot [31] registered Jan 23 17:30:54.234125 kernel: PCI host bridge to bus 0000:00 Jan 23 17:30:54.234401 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 17:30:54.234665 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 17:30:54.234898 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 17:30:54.235134 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 17:30:54.235449 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 23 17:30:54.235794 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 23 17:30:54.236065 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 23 17:30:54.236344 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 23 17:30:54.236658 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 23 17:30:54.236927 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 17:30:54.237207 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 23 17:30:54.237459 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 23 17:30:54.237759 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 23 17:30:54.238019 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 23 17:30:54.238284 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 17:30:54.238539 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 17:30:54.238860 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 17:30:54.239104 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 17:30:54.239131 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:30:54.239151 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:30:54.239171 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:30:54.239190 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:30:54.239209 kernel: iommu: Default domain type: Translated Jan 23 17:30:54.239237 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:30:54.239257 kernel: efivars: Registered efivars operations Jan 23 17:30:54.239276 kernel: vgaarb: loaded Jan 23 17:30:54.239295 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:30:54.239314 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:30:54.239333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:30:54.239352 kernel: pnp: PnP ACPI init Jan 23 17:30:54.239707 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 17:30:54.239740 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:30:54.239760 kernel: NET: Registered PF_INET protocol family Jan 23 17:30:54.239779 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 17:30:54.239799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 17:30:54.239818 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:30:54.239837 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:30:54.239865 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 17:30:54.239884 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 17:30:54.239903 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:30:54.239922 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:30:54.239941 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:30:54.239961 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:30:54.239980 kernel: kvm [1]: HYP mode not available Jan 23 17:30:54.240004 kernel: Initialise system trusted keyrings Jan 23 17:30:54.240022 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 17:30:54.240041 kernel: Key type asymmetric registered Jan 23 17:30:54.240061 kernel: Asymmetric key parser 'x509' registered Jan 23 17:30:54.240080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:30:54.240099 kernel: io scheduler mq-deadline registered Jan 23 17:30:54.240118 kernel: io scheduler kyber registered Jan 23 17:30:54.240142 kernel: io scheduler bfq registered Jan 23 17:30:54.240417 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 17:30:54.240445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:30:54.240464 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:30:54.240483 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 17:30:54.240502 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 17:30:54.240526 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:30:54.240586 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:30:54.240867 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 17:30:54.240894 kernel: printk: legacy console [ttyS0] disabled Jan 23 17:30:54.240914 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 17:30:54.240933 kernel: printk: legacy console [ttyS0] enabled Jan 23 17:30:54.240953 kernel: printk: legacy bootconsole [uart0] disabled Jan 23 17:30:54.240978 kernel: thunder_xcv, ver 1.0 Jan 23 17:30:54.240998 kernel: thunder_bgx, ver 1.0 Jan 23 17:30:54.241017 kernel: nicpf, ver 1.0 Jan 23 17:30:54.241035 kernel: nicvf, ver 1.0 Jan 23 17:30:54.241334 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:30:54.241642 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:30:50 UTC (1769189450) Jan 23 17:30:54.241676 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 17:30:54.241706 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 23 17:30:54.241726 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:30:54.241744 kernel: watchdog: NMI not fully supported Jan 23 17:30:54.241763 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:30:54.241782 kernel: Segment Routing with IPv6 Jan 23 17:30:54.241801 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 17:30:54.241820 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:30:54.241844 kernel: Key type 
dns_resolver registered Jan 23 17:30:54.241863 kernel: registered taskstats version 1 Jan 23 17:30:54.241883 kernel: Loading compiled-in X.509 certificates Jan 23 17:30:54.241902 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2bef814d3854848add18d21bd2681c3d03c60f56' Jan 23 17:30:54.241922 kernel: Demotion targets for Node 0: null Jan 23 17:30:54.241941 kernel: Key type .fscrypt registered Jan 23 17:30:54.241960 kernel: Key type fscrypt-provisioning registered Jan 23 17:30:54.241983 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:30:54.242002 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:30:54.242022 kernel: ima: No architecture policies found Jan 23 17:30:54.242041 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:30:54.242061 kernel: clk: Disabling unused clocks Jan 23 17:30:54.242080 kernel: PM: genpd: Disabling unused power domains Jan 23 17:30:54.242099 kernel: Freeing unused kernel memory: 12480K Jan 23 17:30:54.242119 kernel: Run /init as init process Jan 23 17:30:54.242144 kernel: with arguments: Jan 23 17:30:54.242164 kernel: /init Jan 23 17:30:54.242182 kernel: with environment: Jan 23 17:30:54.242201 kernel: HOME=/ Jan 23 17:30:54.242220 kernel: TERM=linux Jan 23 17:30:54.242240 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:30:54.242488 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 17:30:54.242740 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 17:30:54.242770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:30:54.242790 kernel: GPT:25804799 != 33554431 Jan 23 17:30:54.242809 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:30:54.242828 kernel: GPT:25804799 != 33554431 Jan 23 17:30:54.242846 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:30:54.242874 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:30:54.242893 kernel: SCSI subsystem initialized Jan 23 17:30:54.242912 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:30:54.242932 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:30:54.242951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:30:54.242970 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:30:54.242989 kernel: raid6: neonx8 gen() 6336 MB/s Jan 23 17:30:54.243013 kernel: raid6: neonx4 gen() 6399 MB/s Jan 23 17:30:54.243032 kernel: raid6: neonx2 gen() 5332 MB/s Jan 23 17:30:54.243051 kernel: raid6: neonx1 gen() 3931 MB/s Jan 23 17:30:54.243070 kernel: raid6: int64x8 gen() 3594 MB/s Jan 23 17:30:54.243089 kernel: raid6: int64x4 gen() 3670 MB/s Jan 23 17:30:54.243109 kernel: raid6: int64x2 gen() 3499 MB/s Jan 23 17:30:54.243128 kernel: raid6: int64x1 gen() 2729 MB/s Jan 23 17:30:54.243151 kernel: raid6: using algorithm neonx4 gen() 6399 MB/s Jan 23 17:30:54.243170 kernel: raid6: .... 
xor() 4878 MB/s, rmw enabled Jan 23 17:30:54.243189 kernel: raid6: using neon recovery algorithm Jan 23 17:30:54.243208 kernel: xor: measuring software checksum speed Jan 23 17:30:54.243228 kernel: 8regs : 12990 MB/sec Jan 23 17:30:54.243247 kernel: 32regs : 12494 MB/sec Jan 23 17:30:54.243266 kernel: arm64_neon : 9190 MB/sec Jan 23 17:30:54.243289 kernel: xor: using function: 8regs (12990 MB/sec) Jan 23 17:30:54.243309 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:30:54.243329 kernel: BTRFS: device fsid 8d2a73a7-ed2a-4757-891b-9df844aa914e devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (222) Jan 23 17:30:54.243349 kernel: BTRFS info (device dm-0): first mount of filesystem 8d2a73a7-ed2a-4757-891b-9df844aa914e Jan 23 17:30:54.243369 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:54.243388 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:30:54.243407 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:30:54.243452 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:30:54.243475 kernel: loop: module loaded Jan 23 17:30:54.243495 kernel: loop0: detected capacity change from 0 to 91840 Jan 23 17:30:54.243514 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:30:54.243536 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:30:54.243589 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:30:54.243618 systemd[1]: Detected virtualization amazon. Jan 23 17:30:54.243639 systemd[1]: Detected architecture arm64. Jan 23 17:30:54.243660 systemd[1]: Running in initrd. Jan 23 17:30:54.243680 systemd[1]: No hostname configured, using default hostname. Jan 23 17:30:54.243701 systemd[1]: Hostname set to . Jan 23 17:30:54.243722 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 17:30:54.243742 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:30:54.243768 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:30:54.243789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:30:54.243810 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:30:54.243833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:30:54.243854 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:30:54.243896 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:30:54.243918 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:30:54.243941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:30:54.243962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:30:54.243984 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:30:54.244011 systemd[1]: Reached target paths.target - Path Units. 
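The device-mapper verity messages and systemd's "Successfully made /usr/ read-only" show the verity-backed /usr device coming up before systemd takes over inside the initrd. A hedged way to confirm that mapping from a shell on the booted system (assuming the device-mapper name "usr" seen in this log; veritysetup ships with cryptsetup):

    # Show the dm-verity target backing /usr and its verification state.
    sudo dmsetup table usr        # should list a single "verity" target line
    sudo veritysetup status usr   # root hash, data/hash devices, verified or not
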
Jan 23 17:30:54.244033 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:30:54.244054 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:30:54.244076 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:30:54.244097 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:30:54.244118 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:30:54.244140 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:30:54.244166 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:30:54.244188 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:30:54.244210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:30:54.244231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:30:54.244253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:30:54.244275 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:30:54.244299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:30:54.244326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:30:54.244348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:30:54.244370 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:30:54.244393 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:30:54.244416 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:30:54.244438 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:30:54.244461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:30:54.244489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:54.244512 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:30:54.244539 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:30:54.244620 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:30:54.244645 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:30:54.244741 systemd-journald[360]: Collecting audit messages is enabled. Jan 23 17:30:54.244795 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:30:54.244817 systemd-journald[360]: Journal started Jan 23 17:30:54.244855 systemd-journald[360]: Runtime Journal (/run/log/journal/ec278755506f2ff16fd4fcec7329250f) is 8M, max 75.3M, 67.3M free. Jan 23 17:30:54.248594 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:30:54.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.257611 kernel: audit: type=1130 audit(1769189454.247:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:54.261832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:30:54.278570 kernel: Bridge firewalling registered Jan 23 17:30:54.278536 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 23 17:30:54.284263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:30:54.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.296775 kernel: audit: type=1130 audit(1769189454.289:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.298453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:54.310794 kernel: audit: type=1130 audit(1769189454.301:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.310318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:30:54.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.325600 kernel: audit: type=1130 audit(1769189454.316:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.326500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:30:54.332451 systemd-tmpfiles[374]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:30:54.347870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:30:54.366849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:30:54.376998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:30:54.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.398624 kernel: audit: type=1130 audit(1769189454.385:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.407159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:30:54.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.420747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 17:30:54.425507 kernel: audit: type=1130 audit(1769189454.413:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.425593 kernel: audit: type=1130 audit(1769189454.419:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.426631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:30:54.424000 audit: BPF prog-id=6 op=LOAD Jan 23 17:30:54.436857 kernel: audit: type=1334 audit(1769189454.424:9): prog-id=6 op=LOAD Jan 23 17:30:54.453048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:30:54.463851 kernel: audit: type=1130 audit(1769189454.451:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.455484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:30:54.510420 dracut-cmdline[400]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079 Jan 23 17:30:54.607344 systemd-resolved[393]: Positive Trust Anchors: Jan 23 17:30:54.608039 systemd-resolved[393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:30:54.608050 systemd-resolved[393]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:30:54.608115 systemd-resolved[393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:30:54.842586 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:30:54.894635 kernel: random: crng init done Jan 23 17:30:54.897601 kernel: iscsi: registered transport (tcp) Jan 23 17:30:54.899885 systemd-resolved[393]: Defaulting to hostname 'linux'. Jan 23 17:30:54.902801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 17:30:54.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.909823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:30:54.919398 kernel: audit: type=1130 audit(1769189454.908:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.957793 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:30:54.957876 kernel: QLogic iSCSI HBA Driver Jan 23 17:30:54.998211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:30:55.021256 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:30:55.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.025406 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:30:55.115284 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:30:55.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.122482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 17:30:55.132408 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 17:30:55.196423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:30:55.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.203000 audit: BPF prog-id=7 op=LOAD Jan 23 17:30:55.204000 audit: BPF prog-id=8 op=LOAD Jan 23 17:30:55.206802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:30:55.268024 systemd-udevd[642]: Using default interface naming scheme 'v257'. Jan 23 17:30:55.290842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:30:55.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.300147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:30:55.357494 dracut-pre-trigger[713]: rd.md=0: removing MD RAID activation Jan 23 17:30:55.363761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:30:55.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.371000 audit: BPF prog-id=9 op=LOAD Jan 23 17:30:55.375516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:30:55.438709 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 17:30:55.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.447438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:30:55.487296 systemd-networkd[755]: lo: Link UP Jan 23 17:30:55.487317 systemd-networkd[755]: lo: Gained carrier Jan 23 17:30:55.492617 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:30:55.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.496755 systemd[1]: Reached target network.target - Network. Jan 23 17:30:55.611367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:30:55.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.622258 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:30:55.839317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:30:55.842207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:55.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.850568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:55.857911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:55.868440 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:30:55.868482 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 17:30:55.873580 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 17:30:55.873990 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 17:30:55.881127 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:93:cb:ea:4c:a3 Jan 23 17:30:55.887505 (udev-worker)[786]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:30:55.913590 kernel: nvme nvme0: using unchecked data buffer Jan 23 17:30:55.921529 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:30:55.921579 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:30:55.933802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:55.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:55.948281 systemd-networkd[755]: eth0: Link UP Jan 23 17:30:55.948597 systemd-networkd[755]: eth0: Gained carrier Jan 23 17:30:55.948620 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:30:55.968642 systemd-networkd[755]: eth0: DHCPv4 address 172.31.16.139/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:30:56.074145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 17:30:56.101976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:30:56.134962 disk-uuid[861]: Primary Header is updated. Jan 23 17:30:56.134962 disk-uuid[861]: Secondary Entries is updated. Jan 23 17:30:56.134962 disk-uuid[861]: Secondary Header is updated. Jan 23 17:30:56.223675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:30:56.260720 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 17:30:56.304970 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 17:30:56.477661 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:30:56.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.486755 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:30:56.493065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:30:56.497373 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:30:56.506593 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:30:56.546988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:30:56.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:57.112812 systemd-networkd[755]: eth0: Gained IPv6LL Jan 23 17:30:57.279238 disk-uuid[866]: Warning: The kernel is still using the old partition table. Jan 23 17:30:57.279238 disk-uuid[866]: The new table will be used at the next reboot or after you Jan 23 17:30:57.279238 disk-uuid[866]: run partprobe(8) or kpartx(8) Jan 23 17:30:57.279238 disk-uuid[866]: The operation has completed successfully. Jan 23 17:30:57.296885 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:30:57.297283 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:30:57.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:57.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:57.307447 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
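Earlier in this log the kernel warned that the GPT backup header is not at the end of the (grown) EBS volume, and here disk-uuid rewrites the table and notes that the kernel keeps using the old one until partprobe(8) runs or the machine reboots. If the same condition had to be fixed by hand, a hedged sketch along the lines the messages themselves suggest (device name /dev/nvme0n1 taken from this log; sgdisk -e moves the backup header to the true end of the disk):

    # Move the secondary GPT header to the end of the resized disk,
    # then ask the kernel to re-read the partition table and verify.
    sudo sgdisk -e /dev/nvme0n1
    sudo partprobe /dev/nvme0n1
    sudo sgdisk -v /dev/nvme0n1   # should report no problems found
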
Jan 23 17:30:57.375591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1013) Jan 23 17:30:57.379974 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:57.380111 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:57.420018 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:30:57.420105 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:30:57.430605 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:57.432086 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:30:57.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:57.438944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:30:58.753894 ignition[1032]: Ignition 2.24.0 Jan 23 17:30:58.754433 ignition[1032]: Stage: fetch-offline Jan 23 17:30:58.754925 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:58.754957 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:30:58.755595 ignition[1032]: Ignition finished successfully Jan 23 17:30:58.766695 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:30:58.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:58.773165 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 17:30:58.812060 ignition[1040]: Ignition 2.24.0 Jan 23 17:30:58.812608 ignition[1040]: Stage: fetch Jan 23 17:30:58.813009 ignition[1040]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:58.813033 ignition[1040]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:30:58.813161 ignition[1040]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:30:58.834969 ignition[1040]: PUT result: OK Jan 23 17:30:58.839647 ignition[1040]: parsed url from cmdline: "" Jan 23 17:30:58.839672 ignition[1040]: no config URL provided Jan 23 17:30:58.839691 ignition[1040]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:30:58.839724 ignition[1040]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:30:58.839758 ignition[1040]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:30:58.848334 ignition[1040]: PUT result: OK Jan 23 17:30:58.849761 ignition[1040]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 17:30:58.853450 ignition[1040]: GET result: OK Jan 23 17:30:58.853721 ignition[1040]: parsing config with SHA512: d3165b45a2846d46c3789b0317058809cd4a1ac69fbea4c36d0c7b00035be87b13571ae2bdf55fc933b3aba94e183decf25dd8591a5469313409f02fde7467a6 Jan 23 17:30:58.866202 unknown[1040]: fetched base config from "system" Jan 23 17:30:58.866627 unknown[1040]: fetched base config from "system" Jan 23 17:30:58.867740 ignition[1040]: fetch: fetch complete Jan 23 17:30:58.866641 unknown[1040]: fetched user config from "aws" Jan 23 17:30:58.867753 ignition[1040]: fetch: fetch passed Jan 23 17:30:58.867901 ignition[1040]: Ignition finished successfully Jan 23 17:30:58.881012 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:30:58.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:58.887308 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 17:30:58.936283 ignition[1047]: Ignition 2.24.0 Jan 23 17:30:58.936848 ignition[1047]: Stage: kargs Jan 23 17:30:58.938279 ignition[1047]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:58.938322 ignition[1047]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:30:58.938487 ignition[1047]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:30:58.945268 ignition[1047]: PUT result: OK Jan 23 17:30:58.955284 ignition[1047]: kargs: kargs passed Jan 23 17:30:58.958261 ignition[1047]: Ignition finished successfully Jan 23 17:30:58.962102 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:30:58.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:58.969229 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
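The fetch stage above follows AWS IMDSv2: a PUT to mint a session token, then a token-authenticated GET of the user data that carries the Ignition config. The same exchange can be reproduced from a shell on the instance; the TTL value below is an arbitrary choice, while the header names and endpoints are the ones Ignition logs here:

    # IMDSv2: request a session token, then fetch user-data with it.
    TOKEN=$(curl -sS -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2019-10-01/user-data"
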
Jan 23 17:30:59.014003 ignition[1053]: Ignition 2.24.0 Jan 23 17:30:59.014606 ignition[1053]: Stage: disks Jan 23 17:30:59.015373 ignition[1053]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:59.015414 ignition[1053]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:30:59.015585 ignition[1053]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:30:59.025309 ignition[1053]: PUT result: OK Jan 23 17:30:59.033424 ignition[1053]: disks: disks passed Jan 23 17:30:59.033818 ignition[1053]: Ignition finished successfully Jan 23 17:30:59.040138 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:30:59.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:59.046742 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:30:59.052383 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:30:59.057031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:30:59.063696 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:30:59.068152 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:30:59.074838 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:30:59.219706 systemd-fsck[1061]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 23 17:30:59.226025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:30:59.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:59.240521 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:30:59.500589 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6e8555bb-6998-46ec-8ba6-5a7a415f09ac r/w with ordered data mode. Quota mode: none. Jan 23 17:30:59.501046 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:30:59.505848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:30:59.568042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:30:59.573136 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:30:59.577957 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 17:30:59.582107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:30:59.582180 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:30:59.609892 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:30:59.616493 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
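By this point the initrd has found the labelled partitions it was waiting for (ROOT, OEM, EFI-SYSTEM, USR-A), checked the ext4 ROOT filesystem, and is about to mount it at /sysroot. A quick, read-only way to see the same layout on a booted instance (device name from this log; the lsblk columns are standard):

    # List the GPT partitions with the filesystem labels and partition
    # labels that the initrd device units match against.
    lsblk -o NAME,SIZE,FSTYPE,LABEL,PARTLABEL /dev/nvme0n1
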
Jan 23 17:30:59.629605 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1080) Jan 23 17:30:59.635275 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:59.635353 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:59.644213 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:30:59.644290 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:30:59.646881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:31:01.559374 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:31:01.573527 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 17:31:01.573606 kernel: audit: type=1130 audit(1769189461.558:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.568967 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:31:01.588718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:31:01.609372 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:31:01.613044 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:01.661625 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:31:01.674707 kernel: audit: type=1130 audit(1769189461.664:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.676137 ignition[1177]: INFO : Ignition 2.24.0 Jan 23 17:31:01.676137 ignition[1177]: INFO : Stage: mount Jan 23 17:31:01.680208 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:01.680208 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:31:01.680208 ignition[1177]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:31:01.688781 ignition[1177]: INFO : PUT result: OK Jan 23 17:31:01.697588 ignition[1177]: INFO : mount: mount passed Jan 23 17:31:01.699506 ignition[1177]: INFO : Ignition finished successfully Jan 23 17:31:01.704678 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:31:01.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.716522 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:31:01.719101 kernel: audit: type=1130 audit(1769189461.707:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.763683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 23 17:31:01.802574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1188) Jan 23 17:31:01.807438 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:01.807505 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:31:01.815302 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:31:01.815408 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:31:01.819093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:31:01.865616 ignition[1205]: INFO : Ignition 2.24.0 Jan 23 17:31:01.867828 ignition[1205]: INFO : Stage: files Jan 23 17:31:01.869999 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:01.872848 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:31:01.876334 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:31:01.879939 ignition[1205]: INFO : PUT result: OK Jan 23 17:31:01.891539 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:31:01.896853 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:31:01.896853 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:31:01.973393 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:31:01.976813 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:31:01.981442 unknown[1205]: wrote ssh authorized keys file for user: core Jan 23 17:31:01.984999 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:31:01.988706 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:31:01.988706 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:31:02.082982 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:31:02.322334 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:31:02.322334 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 
17:31:02.331027 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:31:02.361635 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:31:02.366084 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:31:02.366084 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:31:02.376083 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:31:02.376083 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:31:02.376083 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 23 17:31:02.841048 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:31:03.254636 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:31:03.254636 ignition[1205]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:31:03.263119 ignition[1205]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:31:03.268095 ignition[1205]: INFO : files: files passed Jan 23 17:31:03.268095 ignition[1205]: INFO : Ignition finished successfully Jan 23 17:31:03.305500 kernel: audit: type=1130 audit(1769189463.295:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.291253 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:31:03.299237 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
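
The files stage above writes the core user's SSH keys, downloads a Helm tarball and a Kubernetes sysext image, creates the /etc/extensions/kubernetes.raw link, and enables prepare-helm.service. The actual user-data is not reproduced in the log; a hypothetical Ignition v3-style config that would drive similar operations could be assembled like this (the SSH key, unit body, and spec version are placeholders; only the paths and URLs that appear in the log are real):

    # Hypothetical Ignition-style config (v3 schema) matching the logged
    # operations, built as a dict and dumped to JSON.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n[Install]\nWantedBy=multi-user.target\n"},
        ]},
    }

    print(json.dumps(config, indent=2))
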
Jan 23 17:31:03.316044 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:31:03.340703 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:31:03.343357 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:31:03.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.358105 kernel: audit: type=1130 audit(1769189463.347:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.358189 kernel: audit: type=1131 audit(1769189463.347:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.370221 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:03.374381 initrd-setup-root-after-ignition[1237]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:03.381997 initrd-setup-root-after-ignition[1241]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:03.389729 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:31:03.407135 kernel: audit: type=1130 audit(1769189463.388:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.390315 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:31:03.412711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:31:03.521440 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:31:03.521936 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:31:03.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.534737 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:31:03.547175 kernel: audit: type=1130 audit(1769189463.531:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.547225 kernel: audit: type=1131 audit(1769189463.531:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:03.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.547589 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:31:03.552819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:31:03.558153 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:31:03.605264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:31:03.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.613460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:31:03.622364 kernel: audit: type=1130 audit(1769189463.609:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.657925 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:31:03.658532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:31:03.664827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:31:03.667763 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:31:03.673202 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:31:03.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.673465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:31:03.683638 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:31:03.689161 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:31:03.691731 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:31:03.694800 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:31:03.698078 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:31:03.704241 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:31:03.708743 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:31:03.713980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:31:03.723685 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:31:03.730187 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:31:03.734785 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:31:03.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.739313 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:31:03.740181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:31:03.751996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 23 17:31:03.760858 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:31:03.764772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:31:03.766758 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:31:03.770412 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:31:03.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.770768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:31:03.784334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:31:03.785406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:31:03.793756 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:31:03.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.794033 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:31:03.803022 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:31:03.811029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:31:03.816809 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:31:03.820056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:31:03.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.829373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:31:03.832233 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:31:03.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.838727 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:31:03.840615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:31:03.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.857279 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:31:03.863424 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:31:03.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 17:31:03.900184 ignition[1261]: INFO : Ignition 2.24.0 Jan 23 17:31:03.903066 ignition[1261]: INFO : Stage: umount Jan 23 17:31:03.905737 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:03.905737 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:31:03.905737 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:31:03.915765 ignition[1261]: INFO : PUT result: OK Jan 23 17:31:03.932633 ignition[1261]: INFO : umount: umount passed Jan 23 17:31:03.932633 ignition[1261]: INFO : Ignition finished successfully Jan 23 17:31:03.939136 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:31:03.944641 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:31:03.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.956762 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:31:03.960275 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:31:03.960539 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:31:03.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.968072 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:31:03.970156 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:31:03.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.976008 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:31:03.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.976159 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:31:03.980370 systemd[1]: Stopped target network.target - Network. Jan 23 17:31:03.982568 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:31:03.982721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:31:03.986868 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:31:03.989936 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:31:04.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.994720 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:31:04.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:03.997895 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:31:04.002145 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:31:04.007087 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:31:04.007181 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:31:04.009798 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:31:04.009896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:31:04.012829 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 17:31:04.012904 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:31:04.015629 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:31:04.015774 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:31:04.025302 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:31:04.025432 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:31:04.029004 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:31:04.037305 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:31:04.079935 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:31:04.085716 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:31:04.106000 audit: BPF prog-id=9 op=UNLOAD Jan 23 17:31:04.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.119926 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:31:04.120449 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:31:04.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.135000 audit: BPF prog-id=6 op=UNLOAD Jan 23 17:31:04.137040 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:31:04.142313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:31:04.142415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:31:04.149364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:31:04.159740 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:31:04.162748 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:31:04.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.170133 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:31:04.170774 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:31:04.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:04.178524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:31:04.178679 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:31:04.181327 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:31:04.186883 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:31:04.200381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:31:04.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.208727 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:31:04.208935 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:31:04.231408 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:31:04.234138 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:31:04.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.243842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:31:04.244792 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:31:04.255976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:31:04.256333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:31:04.267603 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:31:04.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.267748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:31:04.277061 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:31:04.277356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:31:04.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.285170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:31:04.285481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:31:04.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.297106 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:31:04.303989 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:31:04.307775 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 23 17:31:04.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.318055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:31:04.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.318192 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:31:04.321617 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 17:31:04.321756 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:31:04.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.338197 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:31:04.338323 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:31:04.341378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:31:04.341513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:04.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.346345 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:31:04.352773 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:31:04.373645 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:31:04.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.373863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:31:04.380401 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:31:04.390375 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:31:04.423117 systemd[1]: Switching root. Jan 23 17:31:04.499243 systemd-journald[360]: Journal stopped Jan 23 17:31:08.080279 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). 
Jan 23 17:31:08.080437 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:31:08.080481 kernel: SELinux: policy capability open_perms=1 Jan 23 17:31:08.080523 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:31:08.090840 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:31:08.090906 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:31:08.090943 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:31:08.090975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:31:08.091016 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:31:08.091055 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:31:08.091091 systemd[1]: Successfully loaded SELinux policy in 155.678ms. Jan 23 17:31:08.091143 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.708ms. Jan 23 17:31:08.091178 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:31:08.091212 systemd[1]: Detected virtualization amazon. Jan 23 17:31:08.091255 systemd[1]: Detected architecture arm64. Jan 23 17:31:08.091294 systemd[1]: Detected first boot. Jan 23 17:31:08.091329 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 17:31:08.091382 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:31:08.091422 zram_generator::config[1307]: No configuration found. Jan 23 17:31:08.091485 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:31:08.091519 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 23 17:31:08.091581 kernel: audit: type=1334 audit(1769189467.321:89): prog-id=12 op=LOAD Jan 23 17:31:08.091613 kernel: audit: type=1334 audit(1769189467.321:90): prog-id=3 op=UNLOAD Jan 23 17:31:08.091644 kernel: audit: type=1334 audit(1769189467.322:91): prog-id=13 op=LOAD Jan 23 17:31:08.091676 kernel: audit: type=1334 audit(1769189467.324:92): prog-id=14 op=LOAD Jan 23 17:31:08.091707 kernel: audit: type=1334 audit(1769189467.324:93): prog-id=4 op=UNLOAD Jan 23 17:31:08.091734 kernel: audit: type=1334 audit(1769189467.324:94): prog-id=5 op=UNLOAD Jan 23 17:31:08.091766 kernel: audit: type=1131 audit(1769189467.330:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.091801 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:31:08.091835 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:31:08.091867 kernel: audit: type=1334 audit(1769189467.340:96): prog-id=12 op=UNLOAD Jan 23 17:31:08.091903 kernel: audit: type=1130 audit(1769189467.344:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.091938 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
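
On first boot, systemd reports initializing the machine ID from the SMBIOS/DMI UUID. A rough sketch of that idea, reading the product UUID exposed under /sys/class/dmi/id; this mirrors the concept only and is not systemd's exact normalization:

    # Rough sketch: derive a machine-id-style string from the DMI product
    # UUID, as systemd reports doing above (illustrative, not systemd's code).
    def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()

    if __name__ == "__main__":
        print(machine_id_from_dmi())
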
Jan 23 17:31:08.091972 kernel: audit: type=1131 audit(1769189467.344:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.092010 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:31:08.092051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:31:08.092084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:31:08.092120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:31:08.092152 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:31:08.092183 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:31:08.092216 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:31:08.092249 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:31:08.092287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:31:08.092319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:31:08.092352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:31:08.092382 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:31:08.092414 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:31:08.092450 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:31:08.092480 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 17:31:08.092512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:31:08.111168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:31:08.111242 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:31:08.111278 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:31:08.111310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:31:08.111372 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:31:08.111413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:31:08.111447 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:31:08.111482 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 17:31:08.111517 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:31:08.111571 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:31:08.111635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:31:08.111675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:31:08.111707 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:31:08.111738 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:31:08.111769 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 23 17:31:08.111800 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:31:08.111835 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 17:31:08.111868 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 17:31:08.111906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:31:08.111938 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:31:08.111967 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:31:08.112001 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:31:08.112033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:31:08.112067 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:31:08.112098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:31:08.112136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:31:08.112166 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:31:08.112201 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:31:08.119687 systemd[1]: Reached target machines.target - Containers. Jan 23 17:31:08.119750 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:31:08.119782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:08.119819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:31:08.119860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:31:08.119890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:08.119920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:31:08.119949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:08.119980 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:31:08.120010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:08.120039 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:31:08.120072 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:31:08.120102 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:31:08.120131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:31:08.120160 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:31:08.120191 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:08.120220 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:31:08.120254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:31:08.120285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 23 17:31:08.120317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:31:08.120346 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:31:08.120381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:31:08.120411 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:31:08.120443 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:31:08.120473 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:31:08.120502 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:31:08.120533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:31:08.122599 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:31:08.122647 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:31:08.122678 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:31:08.122708 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:31:08.122743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:08.122773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:08.122806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:08.122837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:08.122866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:08.122898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:08.122927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:31:08.122962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:31:08.122997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:31:08.123027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:31:08.123058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:31:08.123090 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:31:08.123123 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:31:08.123152 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:31:08.123182 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:31:08.123212 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 17:31:08.123241 kernel: fuse: init (API version 7.41) Jan 23 17:31:08.123271 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:31:08.123300 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:31:08.123329 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:31:08.123382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 17:31:08.123420 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:08.123501 systemd-journald[1385]: Collecting audit messages is enabled. Jan 23 17:31:08.126632 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:31:08.126702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:31:08.126746 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:31:08.126785 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:31:08.126819 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:31:08.126850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:31:08.126883 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:31:08.126914 systemd-journald[1385]: Journal started Jan 23 17:31:08.126969 systemd-journald[1385]: Runtime Journal (/run/log/journal/ec278755506f2ff16fd4fcec7329250f) is 8M, max 75.3M, 67.3M free. Jan 23 17:31:07.504000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 17:31:07.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.737000 audit: BPF prog-id=14 op=UNLOAD Jan 23 17:31:07.737000 audit: BPF prog-id=13 op=UNLOAD Jan 23 17:31:07.740000 audit: BPF prog-id=15 op=LOAD Jan 23 17:31:07.740000 audit: BPF prog-id=16 op=LOAD Jan 23 17:31:07.741000 audit: BPF prog-id=17 op=LOAD Jan 23 17:31:07.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:07.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.074000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 17:31:08.074000 audit[1385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcd49a400 a2=4000 a3=0 items=0 ppid=1 pid=1385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:08.074000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 17:31:08.137685 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:31:08.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.310466 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:31:07.327226 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
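
The SYSCALL audit record above can be decoded from its raw fields: arch=c00000b7 is AUDIT_ARCH_AARCH64 (EM_AARCH64 with the 64-bit and little-endian flags set), and on arm64's generic syscall table number 211 is sendmsg, which here is most likely journald writing on its audit socket. A small sketch of that decoding (the syscall table is a one-entry subset for illustration):

    # Decode the arch/syscall fields of the audit SYSCALL record above.
    AUDIT_ARCH_64BIT = 0x80000000
    AUDIT_ARCH_LE = 0x40000000
    EM_AARCH64 = 183  # 0xB7

    def is_aarch64(arch_hex):
        return int(arch_hex, 16) == (EM_AARCH64 | AUDIT_ARCH_64BIT | AUDIT_ARCH_LE)

    SYSCALLS_ARM64 = {211: "sendmsg"}  # subset, for illustration only

    if __name__ == "__main__":
        print(is_aarch64("c00000b7"), SYSCALLS_ARM64.get(211))
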
Jan 23 17:31:07.329696 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:31:08.146859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:31:08.179010 kernel: ACPI: bus type drm_connector registered Jan 23 17:31:08.181013 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:31:08.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.184286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:31:08.190847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:31:08.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.201541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:31:08.228508 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:31:08.257652 kernel: loop1: detected capacity change from 0 to 45344 Jan 23 17:31:08.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.230761 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jan 23 17:31:08.230787 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jan 23 17:31:08.241115 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:31:08.244722 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:31:08.249410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:31:08.267980 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:31:08.274706 systemd-journald[1385]: Time spent on flushing to /var/log/journal/ec278755506f2ff16fd4fcec7329250f is 130.002ms for 1066 entries. Jan 23 17:31:08.274706 systemd-journald[1385]: System Journal (/var/log/journal/ec278755506f2ff16fd4fcec7329250f) is 8M, max 588.1M, 580.1M free. Jan 23 17:31:08.438948 systemd-journald[1385]: Received client request to flush runtime journal. Jan 23 17:31:08.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:08.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.439000 audit: BPF prog-id=18 op=LOAD Jan 23 17:31:08.339854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:31:08.371956 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:31:08.380642 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:31:08.441000 audit: BPF prog-id=19 op=LOAD Jan 23 17:31:08.441000 audit: BPF prog-id=20 op=LOAD Jan 23 17:31:08.383667 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:31:08.411451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:31:08.433696 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:31:08.447000 audit: BPF prog-id=21 op=LOAD Jan 23 17:31:08.445015 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 17:31:08.454997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:31:08.463207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:31:08.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.468687 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:31:08.488000 audit: BPF prog-id=22 op=LOAD Jan 23 17:31:08.490000 audit: BPF prog-id=23 op=LOAD Jan 23 17:31:08.490000 audit: BPF prog-id=24 op=LOAD Jan 23 17:31:08.493958 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:31:08.496000 audit: BPF prog-id=25 op=LOAD Jan 23 17:31:08.498000 audit: BPF prog-id=26 op=LOAD Jan 23 17:31:08.498000 audit: BPF prog-id=27 op=LOAD Jan 23 17:31:08.502679 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 17:31:08.560440 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Jan 23 17:31:08.560474 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Jan 23 17:31:08.594591 kernel: loop2: detected capacity change from 0 to 100192 Jan 23 17:31:08.639407 systemd-nsresourced[1465]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 17:31:08.646828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:31:08.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:08.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.678785 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 17:31:08.684319 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:31:08.792506 systemd-oomd[1457]: No swap; memory pressure usage will be degraded Jan 23 17:31:08.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.794247 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 17:31:08.945009 systemd-resolved[1458]: Positive Trust Anchors: Jan 23 17:31:08.945602 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:31:08.945618 systemd-resolved[1458]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:31:08.945681 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:31:08.949688 kernel: loop3: detected capacity change from 0 to 200800 Jan 23 17:31:08.961881 systemd-resolved[1458]: Defaulting to hostname 'linux'. Jan 23 17:31:08.964438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:31:08.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:08.967157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:31:09.147671 kernel: loop4: detected capacity change from 0 to 61504 Jan 23 17:31:09.365127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:31:09.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:09.367000 audit: BPF prog-id=8 op=UNLOAD Jan 23 17:31:09.367000 audit: BPF prog-id=7 op=UNLOAD Jan 23 17:31:09.368000 audit: BPF prog-id=28 op=LOAD Jan 23 17:31:09.368000 audit: BPF prog-id=29 op=LOAD Jan 23 17:31:09.372306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:31:09.440977 systemd-udevd[1488]: Using default interface naming scheme 'v257'. 
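The two ". IN DS" lines above are the DNSSEC trust anchors systemd-resolved installs for the root zone; each entry is a standard DS record carrying an owner name, key tag, algorithm, digest type, and digest. A small Python sketch for splitting one of the logged entries into those fields (the field names are standard DS-record terminology, not anything systemd-resolved itself emits):

# Parse one of the "Positive Trust Anchors" DS records copied from the log above.
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert (klass, rtype) == ("IN", "DS")

print(f"owner={owner} key_tag={key_tag} "
      f"algorithm={algorithm} (8 = RSA/SHA-256) "
      f"digest_type={digest_type} (2 = SHA-256) digest={digest[:16]}...")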
Jan 23 17:31:09.449646 kernel: loop5: detected capacity change from 0 to 45344 Jan 23 17:31:09.470632 kernel: loop6: detected capacity change from 0 to 100192 Jan 23 17:31:09.485619 kernel: loop7: detected capacity change from 0 to 200800 Jan 23 17:31:09.513653 kernel: loop1: detected capacity change from 0 to 61504 Jan 23 17:31:09.526464 (sd-merge)[1490]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 23 17:31:09.534048 (sd-merge)[1490]: Merged extensions into '/usr'. Jan 23 17:31:09.545118 systemd[1]: Reload requested from client PID 1424 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:31:09.545167 systemd[1]: Reloading... Jan 23 17:31:09.757865 (udev-worker)[1511]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:31:09.837152 zram_generator::config[1546]: No configuration found. Jan 23 17:31:10.578513 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 17:31:10.581003 systemd[1]: Reloading finished in 1034 ms. Jan 23 17:31:10.602639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:31:10.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:10.612028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:31:10.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:10.694519 systemd[1]: Starting ensure-sysext.service... Jan 23 17:31:10.699000 audit: BPF prog-id=30 op=LOAD Jan 23 17:31:10.705966 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:31:10.711984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:31:10.721175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
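The "(sd-merge)" lines above record systemd-sysext overlaying the listed .raw extension images onto /usr, followed by the systemd reload that activates them. Purely as an illustration, a sketch of checking the merged state from the booted host with the systemd-sysext CLI (the "status" and "refresh" verbs are standard; wrapping them in subprocess here is just for the example):

# Illustrative only: query systemd-sysext for the extension images currently merged into /usr.
import subprocess

# "systemd-sysext status" lists each hierarchy with the extensions merged into it.
print(subprocess.run(["systemd-sysext", "status"], capture_output=True, text=True).stdout)

# After dropping a new *.raw image into an extension search path, "systemd-sysext refresh"
# re-merges the overlay; the log above shows the equivalent happening during boot.
# subprocess.run(["systemd-sysext", "refresh"], check=True)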
Jan 23 17:31:10.728000 audit: BPF prog-id=31 op=LOAD Jan 23 17:31:10.731000 audit: BPF prog-id=18 op=UNLOAD Jan 23 17:31:10.731000 audit: BPF prog-id=32 op=LOAD Jan 23 17:31:10.731000 audit: BPF prog-id=33 op=LOAD Jan 23 17:31:10.731000 audit: BPF prog-id=19 op=UNLOAD Jan 23 17:31:10.731000 audit: BPF prog-id=20 op=UNLOAD Jan 23 17:31:10.733000 audit: BPF prog-id=34 op=LOAD Jan 23 17:31:10.733000 audit: BPF prog-id=25 op=UNLOAD Jan 23 17:31:10.734000 audit: BPF prog-id=35 op=LOAD Jan 23 17:31:10.734000 audit: BPF prog-id=36 op=LOAD Jan 23 17:31:10.734000 audit: BPF prog-id=26 op=UNLOAD Jan 23 17:31:10.734000 audit: BPF prog-id=27 op=UNLOAD Jan 23 17:31:10.737000 audit: BPF prog-id=37 op=LOAD Jan 23 17:31:10.744000 audit: BPF prog-id=22 op=UNLOAD Jan 23 17:31:10.744000 audit: BPF prog-id=38 op=LOAD Jan 23 17:31:10.744000 audit: BPF prog-id=39 op=LOAD Jan 23 17:31:10.744000 audit: BPF prog-id=23 op=UNLOAD Jan 23 17:31:10.744000 audit: BPF prog-id=24 op=UNLOAD Jan 23 17:31:10.747000 audit: BPF prog-id=40 op=LOAD Jan 23 17:31:10.747000 audit: BPF prog-id=21 op=UNLOAD Jan 23 17:31:10.749000 audit: BPF prog-id=41 op=LOAD Jan 23 17:31:10.749000 audit: BPF prog-id=42 op=LOAD Jan 23 17:31:10.749000 audit: BPF prog-id=28 op=UNLOAD Jan 23 17:31:10.749000 audit: BPF prog-id=29 op=UNLOAD Jan 23 17:31:10.757000 audit: BPF prog-id=43 op=LOAD Jan 23 17:31:10.758000 audit: BPF prog-id=15 op=UNLOAD Jan 23 17:31:10.758000 audit: BPF prog-id=44 op=LOAD Jan 23 17:31:10.758000 audit: BPF prog-id=45 op=LOAD Jan 23 17:31:10.758000 audit: BPF prog-id=16 op=UNLOAD Jan 23 17:31:10.758000 audit: BPF prog-id=17 op=UNLOAD Jan 23 17:31:10.805206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:31:10.819949 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:31:10.819986 systemd[1]: Reloading... Jan 23 17:31:10.842321 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:31:10.842422 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:31:10.846439 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:31:10.855485 systemd-tmpfiles[1698]: ACLs are not supported, ignoring. Jan 23 17:31:10.856271 systemd-tmpfiles[1698]: ACLs are not supported, ignoring. Jan 23 17:31:10.892748 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:31:10.892778 systemd-tmpfiles[1698]: Skipping /boot Jan 23 17:31:10.971702 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:31:10.971737 systemd-tmpfiles[1698]: Skipping /boot Jan 23 17:31:11.077108 zram_generator::config[1744]: No configuration found. Jan 23 17:31:11.152272 systemd-networkd[1697]: lo: Link UP Jan 23 17:31:11.152946 systemd-networkd[1697]: lo: Gained carrier Jan 23 17:31:11.158816 systemd-networkd[1697]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:11.159009 systemd-networkd[1697]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 17:31:11.163027 systemd-networkd[1697]: eth0: Link UP Jan 23 17:31:11.163533 systemd-networkd[1697]: eth0: Gained carrier Jan 23 17:31:11.163600 systemd-networkd[1697]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:11.175088 systemd-networkd[1697]: eth0: DHCPv4 address 172.31.16.139/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:31:11.555222 systemd[1]: Reloading finished in 734 ms. Jan 23 17:31:11.584981 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:31:11.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.588000 audit: BPF prog-id=46 op=LOAD Jan 23 17:31:11.589000 audit: BPF prog-id=37 op=UNLOAD Jan 23 17:31:11.589000 audit: BPF prog-id=47 op=LOAD Jan 23 17:31:11.589000 audit: BPF prog-id=48 op=LOAD Jan 23 17:31:11.589000 audit: BPF prog-id=38 op=UNLOAD Jan 23 17:31:11.589000 audit: BPF prog-id=39 op=UNLOAD Jan 23 17:31:11.592000 audit: BPF prog-id=49 op=LOAD Jan 23 17:31:11.592000 audit: BPF prog-id=34 op=UNLOAD Jan 23 17:31:11.592000 audit: BPF prog-id=50 op=LOAD Jan 23 17:31:11.592000 audit: BPF prog-id=51 op=LOAD Jan 23 17:31:11.592000 audit: BPF prog-id=35 op=UNLOAD Jan 23 17:31:11.592000 audit: BPF prog-id=36 op=UNLOAD Jan 23 17:31:11.593000 audit: BPF prog-id=52 op=LOAD Jan 23 17:31:11.593000 audit: BPF prog-id=53 op=LOAD Jan 23 17:31:11.593000 audit: BPF prog-id=41 op=UNLOAD Jan 23 17:31:11.593000 audit: BPF prog-id=42 op=UNLOAD Jan 23 17:31:11.595000 audit: BPF prog-id=54 op=LOAD Jan 23 17:31:11.595000 audit: BPF prog-id=43 op=UNLOAD Jan 23 17:31:11.596000 audit: BPF prog-id=55 op=LOAD Jan 23 17:31:11.596000 audit: BPF prog-id=56 op=LOAD Jan 23 17:31:11.596000 audit: BPF prog-id=44 op=UNLOAD Jan 23 17:31:11.596000 audit: BPF prog-id=45 op=UNLOAD Jan 23 17:31:11.597000 audit: BPF prog-id=57 op=LOAD Jan 23 17:31:11.597000 audit: BPF prog-id=31 op=UNLOAD Jan 23 17:31:11.597000 audit: BPF prog-id=58 op=LOAD Jan 23 17:31:11.598000 audit: BPF prog-id=59 op=LOAD Jan 23 17:31:11.598000 audit: BPF prog-id=32 op=UNLOAD Jan 23 17:31:11.598000 audit: BPF prog-id=33 op=UNLOAD Jan 23 17:31:11.600000 audit: BPF prog-id=60 op=LOAD Jan 23 17:31:11.604000 audit: BPF prog-id=40 op=UNLOAD Jan 23 17:31:11.606000 audit: BPF prog-id=61 op=LOAD Jan 23 17:31:11.606000 audit: BPF prog-id=30 op=UNLOAD Jan 23 17:31:11.613808 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:31:11.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.621728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:11.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.668189 systemd[1]: Reached target network.target - Network. Jan 23 17:31:11.674990 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:31:11.679897 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
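The DHCPv4 line above gives eth0 the address 172.31.16.139/20 with gateway 172.31.16.1. As a quick worked example on those logged values, Python's ipaddress module confirms the gateway is on-link within that /20:

import ipaddress

iface = ipaddress.ip_interface("172.31.16.139/20")   # address and prefix as leased by DHCP
gateway = ipaddress.ip_address("172.31.16.1")

net = iface.network
print(net)                      # 172.31.16.0/20
print(net.num_addresses)        # 4096 addresses in a /20
print(gateway in net)           # True: the gateway sits inside the leased subnet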
Jan 23 17:31:11.683084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:11.688132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:11.694022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:11.702372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:11.705169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:11.705620 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:11.713138 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:31:11.719237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:31:11.722763 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:11.726013 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:31:11.732302 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:31:11.741568 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:31:11.748202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:31:11.762731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:11.763235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:11.763664 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:11.763900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:11.776963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:11.787417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:31:11.790091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:11.790446 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:11.790676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:11.790980 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:31:11.814990 systemd[1]: Finished ensure-sysext.service. 
Jan 23 17:31:11.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.861000 audit[1804]: SYSTEM_BOOT pid=1804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.864047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:11.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.865077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:11.873694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:31:11.881700 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:31:11.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.890652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:11.892840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:11.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.900476 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:11.900947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:11.908424 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:31:11.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.910934 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 23 17:31:11.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.919270 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:31:11.919891 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:31:11.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.926793 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:31:11.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:11.938859 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:31:12.059000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 17:31:12.059000 audit[1835]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffead61240 a2=420 a3=0 items=0 ppid=1793 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:12.059000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:31:12.061473 augenrules[1835]: No rules Jan 23 17:31:12.065358 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:31:12.066458 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:31:12.113689 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:31:12.117185 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:31:12.472769 systemd-networkd[1697]: eth0: Gained IPv6LL Jan 23 17:31:12.477634 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:31:12.482097 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:31:14.664574 ldconfig[1798]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:31:14.671442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:31:14.677095 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:31:14.710009 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:31:14.713111 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:31:14.715765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
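In the audit records above, the PROCTITLE field is the process command line, hex-encoded with NUL bytes separating the arguments. Decoding it is a general technique rather than something the audit tooling in this log does; a short Python sketch using the value logged above:

# Decode the hex-encoded, NUL-separated proctitle from the audit PROCTITLE record above.
hex_proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print([a.decode() for a in argv])   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']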
Jan 23 17:31:14.718615 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:31:14.721767 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:31:14.724376 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:31:14.727403 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 17:31:14.730530 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 17:31:14.733003 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:31:14.735797 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:31:14.735859 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:31:14.737992 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:31:14.741387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:31:14.746769 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:31:14.752969 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:31:14.756258 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:31:14.759201 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:31:14.770749 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:31:14.776069 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:31:14.780194 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:31:14.782830 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:31:14.785172 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:31:14.787432 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:14.787628 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:14.789565 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:31:14.797139 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:31:14.804899 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:31:14.816675 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:31:14.827443 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:31:14.834820 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:31:14.837235 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:31:14.841125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:14.851490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:31:14.858023 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:31:14.866891 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:31:14.879977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 23 17:31:14.888927 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 17:31:14.896330 jq[1852]: false Jan 23 17:31:14.897180 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:31:14.907956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:31:14.925077 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:31:14.928121 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:31:14.929055 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:31:14.935350 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:31:14.944798 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:31:14.966658 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:31:14.970318 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:31:14.971402 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:31:15.015829 extend-filesystems[1853]: Found /dev/nvme0n1p6 Jan 23 17:31:15.050887 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:31:15.058591 jq[1868]: true Jan 23 17:31:15.061451 extend-filesystems[1853]: Found /dev/nvme0n1p9 Jan 23 17:31:15.070025 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:31:15.100506 extend-filesystems[1853]: Checking size of /dev/nvme0n1p9 Jan 23 17:31:15.118082 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:31:15.119766 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:31:15.176429 update_engine[1866]: I20260123 17:31:15.175945 1866 main.cc:92] Flatcar Update Engine starting Jan 23 17:31:15.187203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:31:15.194808 jq[1893]: true Jan 23 17:31:15.206247 ntpd[1856]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:08:45 UTC 2026 (1): Starting Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:08:45 UTC 2026 (1): Starting Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: ---------------------------------------------------- Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: corporation. Support and training for ntp-4 are Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: available at https://www.nwtime.org/support Jan 23 17:31:15.208027 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: ---------------------------------------------------- Jan 23 17:31:15.206373 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:31:15.206391 ntpd[1856]: ---------------------------------------------------- Jan 23 17:31:15.206409 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:31:15.206425 ntpd[1856]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:31:15.206442 ntpd[1856]: corporation. Support and training for ntp-4 are Jan 23 17:31:15.206459 ntpd[1856]: available at https://www.nwtime.org/support Jan 23 17:31:15.206475 ntpd[1856]: ---------------------------------------------------- Jan 23 17:31:15.215453 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 17:31:15.230923 dbus-daemon[1850]: [system] SELinux support is enabled Jan 23 17:31:15.220998 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 17:31:15.231343 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:31:15.236425 ntpd[1856]: proto: precision = 0.096 usec (-23) Jan 23 17:31:15.239834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:31:15.247234 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: proto: precision = 0.096 usec (-23) Jan 23 17:31:15.247234 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: basedate set to 2026-01-11 Jan 23 17:31:15.247234 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: gps base set to 2026-01-11 (week 2401) Jan 23 17:31:15.247234 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:31:15.247234 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:31:15.239935 ntpd[1856]: basedate set to 2026-01-11 Jan 23 17:31:15.239885 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:31:15.239968 ntpd[1856]: gps base set to 2026-01-11 (week 2401) Jan 23 17:31:15.243013 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:31:15.246749 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:31:15.243046 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 17:31:15.246808 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:31:15.248940 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:31:15.251755 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:31:15.251755 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen normally on 3 eth0 172.31.16.139:123 Jan 23 17:31:15.251755 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen normally on 4 lo [::1]:123 Jan 23 17:31:15.251755 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listen normally on 5 eth0 [fe80::493:cbff:feea:4ca3%2]:123 Jan 23 17:31:15.251755 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: Listening on routing socket on fd #22 for interface updates Jan 23 17:31:15.249003 ntpd[1856]: Listen normally on 3 eth0 172.31.16.139:123 Jan 23 17:31:15.249054 ntpd[1856]: Listen normally on 4 lo [::1]:123 Jan 23 17:31:15.249101 ntpd[1856]: Listen normally on 5 eth0 [fe80::493:cbff:feea:4ca3%2]:123 Jan 23 17:31:15.249146 ntpd[1856]: Listening on routing socket on fd #22 for interface updates Jan 23 17:31:15.267650 tar[1875]: linux-arm64/LICENSE Jan 23 17:31:15.267650 tar[1875]: linux-arm64/helm Jan 23 17:31:15.275642 extend-filesystems[1853]: Resized partition /dev/nvme0n1p9 Jan 23 17:31:15.270371 dbus-daemon[1850]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1697 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 17:31:15.277367 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 17:31:15.284761 update_engine[1866]: I20260123 17:31:15.281529 1866 update_check_scheduler.cc:74] Next update check in 6m28s Jan 23 17:31:15.284669 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:31:15.299991 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:31:15.316952 extend-filesystems[1926]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:31:15.328525 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:31:15.328769 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:31:15.328871 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:31:15.328982 ntpd[1856]: 23 Jan 17:31:15 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:31:15.338578 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 23 17:31:15.355697 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 23 17:31:15.373086 extend-filesystems[1926]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 17:31:15.373086 extend-filesystems[1926]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 17:31:15.373086 extend-filesystems[1926]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 23 17:31:15.389794 extend-filesystems[1853]: Resized filesystem in /dev/nvme0n1p9 Jan 23 17:31:15.382382 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:31:15.387212 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:31:15.569907 bash[1955]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:31:15.571720 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:31:15.580005 systemd[1]: Starting sshkeys.service... 
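The extend-filesystems/resize2fs lines above grow the root filesystem on /dev/nvme0n1p9 from 1617920 to 2604027 blocks of 4 KiB. The arithmetic behind those numbers, as a small worked example:

BLOCK = 4096                          # "(4k) blocks" per the resize2fs output above
old_blocks, new_blocks = 1_617_920, 2_604_027

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB, "
      f"gained: {new_gib - old_gib:.2f} GiB")
# before: 6.17 GiB, after: 9.93 GiB, gained: 3.76 GiB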
Jan 23 17:31:15.599907 coreos-metadata[1849]: Jan 23 17:31:15.599 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:31:15.602192 systemd-logind[1862]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:31:15.602267 systemd-logind[1862]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 17:31:15.604091 systemd-logind[1862]: New seat seat0. Jan 23 17:31:15.608035 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:31:15.617695 coreos-metadata[1849]: Jan 23 17:31:15.615 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 17:31:15.617695 coreos-metadata[1849]: Jan 23 17:31:15.617 INFO Fetch successful Jan 23 17:31:15.617695 coreos-metadata[1849]: Jan 23 17:31:15.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 17:31:15.623199 coreos-metadata[1849]: Jan 23 17:31:15.622 INFO Fetch successful Jan 23 17:31:15.623199 coreos-metadata[1849]: Jan 23 17:31:15.622 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 17:31:15.623878 coreos-metadata[1849]: Jan 23 17:31:15.623 INFO Fetch successful Jan 23 17:31:15.623878 coreos-metadata[1849]: Jan 23 17:31:15.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 17:31:15.629844 coreos-metadata[1849]: Jan 23 17:31:15.626 INFO Fetch successful Jan 23 17:31:15.629844 coreos-metadata[1849]: Jan 23 17:31:15.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 17:31:15.630671 coreos-metadata[1849]: Jan 23 17:31:15.630 INFO Fetch failed with 404: resource not found Jan 23 17:31:15.630671 coreos-metadata[1849]: Jan 23 17:31:15.630 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 17:31:15.636630 coreos-metadata[1849]: Jan 23 17:31:15.636 INFO Fetch successful Jan 23 17:31:15.638619 coreos-metadata[1849]: Jan 23 17:31:15.636 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 17:31:15.642177 coreos-metadata[1849]: Jan 23 17:31:15.642 INFO Fetch successful Jan 23 17:31:15.642177 coreos-metadata[1849]: Jan 23 17:31:15.642 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 17:31:15.649844 coreos-metadata[1849]: Jan 23 17:31:15.649 INFO Fetch successful Jan 23 17:31:15.649844 coreos-metadata[1849]: Jan 23 17:31:15.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 17:31:15.659263 coreos-metadata[1849]: Jan 23 17:31:15.659 INFO Fetch successful Jan 23 17:31:15.659263 coreos-metadata[1849]: Jan 23 17:31:15.659 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 17:31:15.665658 coreos-metadata[1849]: Jan 23 17:31:15.665 INFO Fetch successful Jan 23 17:31:15.719812 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:31:15.728206 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:31:15.769982 amazon-ssm-agent[1921]: Initializing new seelog logger Jan 23 17:31:15.769982 amazon-ssm-agent[1921]: New Seelog Logger Creation Complete Jan 23 17:31:15.770530 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 17:31:15.770530 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.784504 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 processing appconfig overrides Jan 23 17:31:15.786285 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.786285 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.786480 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 processing appconfig overrides Jan 23 17:31:15.794934 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7861 INFO Proxy environment variables: Jan 23 17:31:15.799231 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.799231 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.799420 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 processing appconfig overrides Jan 23 17:31:15.819261 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.819261 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:15.819261 amazon-ssm-agent[1921]: 2026/01/23 17:31:15 processing appconfig overrides Jan 23 17:31:15.942895 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7861 INFO http_proxy: Jan 23 17:31:15.988739 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:31:15.992678 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:31:16.045609 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7861 INFO no_proxy: Jan 23 17:31:16.069959 coreos-metadata[1969]: Jan 23 17:31:16.069 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:31:16.072010 coreos-metadata[1969]: Jan 23 17:31:16.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 17:31:16.073244 coreos-metadata[1969]: Jan 23 17:31:16.073 INFO Fetch successful Jan 23 17:31:16.073441 coreos-metadata[1969]: Jan 23 17:31:16.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:31:16.074826 coreos-metadata[1969]: Jan 23 17:31:16.074 INFO Fetch successful Jan 23 17:31:16.076880 unknown[1969]: wrote ssh authorized keys file for user: core Jan 23 17:31:16.092654 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 17:31:16.104231 dbus-daemon[1850]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:31:16.115578 dbus-daemon[1850]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1925 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:31:16.130851 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:31:16.145094 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7861 INFO https_proxy: Jan 23 17:31:16.246408 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7864 INFO Checking if agent identity type OnPrem can be assumed Jan 23 17:31:16.347407 update-ssh-keys[2032]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:31:16.338230 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:31:16.354526 systemd[1]: Finished sshkeys.service. 
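The coreos-metadata lines above follow the EC2 IMDSv2 pattern: first a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs of the individual meta-data paths with that token attached. A minimal Python sketch of the same flow (the header names are the AWS-documented IMDSv2 headers; the TTL value and the use of urllib are illustrative choices):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain a session token (IMDSv2). The 60-second TTL is an arbitrary example value.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
)
token = urllib.request.urlopen(req).read().decode()

# Step 2: fetch one of the paths seen in the log, presenting the token.
req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req).read().decode())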
Jan 23 17:31:16.358288 amazon-ssm-agent[1921]: 2026-01-23 17:31:15.7989 INFO Checking if agent identity type EC2 can be assumed Jan 23 17:31:16.459601 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3566 INFO Agent will take identity from EC2 Jan 23 17:31:16.494825 containerd[1897]: time="2026-01-23T17:31:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:31:16.502324 containerd[1897]: time="2026-01-23T17:31:16.502173165Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 17:31:16.559877 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3669 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 17:31:16.627244 containerd[1897]: time="2026-01-23T17:31:16.627173313Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.144µs" Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.627806397Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.627896061Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.627925305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628210497Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628248213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628367121Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628393809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628942413Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.628983969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.629015733Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.629038989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.630249 containerd[1897]: time="2026-01-23T17:31:16.629367525Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 
17:31:16.630933 containerd[1897]: time="2026-01-23T17:31:16.629395377Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:31:16.636694 containerd[1897]: time="2026-01-23T17:31:16.635117517Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.636694 containerd[1897]: time="2026-01-23T17:31:16.636370941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.636694 containerd[1897]: time="2026-01-23T17:31:16.636451713Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:16.636694 containerd[1897]: time="2026-01-23T17:31:16.636480681Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:31:16.641834 containerd[1897]: time="2026-01-23T17:31:16.639612117Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:31:16.643250 containerd[1897]: time="2026-01-23T17:31:16.643195413Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:31:16.644082 containerd[1897]: time="2026-01-23T17:31:16.643604313Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:31:16.657728 containerd[1897]: time="2026-01-23T17:31:16.657600045Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:31:16.658363 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3669 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.659792457Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660023289Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660055449Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660099897Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660142737Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660182685Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660219345Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660251337Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660291765Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 
17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660330069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660437973Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660470961Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:31:16.661822 containerd[1897]: time="2026-01-23T17:31:16.660511377Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660776937Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660829077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660881205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660919041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660954009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.660990693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.661032693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.661066101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.661105053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.661141965Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:31:16.662466 containerd[1897]: time="2026-01-23T17:31:16.661508409Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:31:16.678041 containerd[1897]: time="2026-01-23T17:31:16.676706554Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:31:16.678041 containerd[1897]: time="2026-01-23T17:31:16.676859698Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:31:16.678041 containerd[1897]: time="2026-01-23T17:31:16.676908910Z" level=info msg="Start snapshots syncer" Jan 23 17:31:16.678041 containerd[1897]: time="2026-01-23T17:31:16.676954690Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:31:16.678334 containerd[1897]: time="2026-01-23T17:31:16.677653054Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:31:16.678334 containerd[1897]: time="2026-01-23T17:31:16.677773366Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:31:16.678334 containerd[1897]: time="2026-01-23T17:31:16.677888602Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:31:16.678695 containerd[1897]: time="2026-01-23T17:31:16.678375394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:31:16.678695 containerd[1897]: time="2026-01-23T17:31:16.678447562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:31:16.678695 containerd[1897]: time="2026-01-23T17:31:16.678481978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:31:16.678695 containerd[1897]: time="2026-01-23T17:31:16.678521950Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.691592578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692369890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692445766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692477770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692535550Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692664670Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:16.693379 containerd[1897]: time="2026-01-23T17:31:16.692703382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:16.695852 containerd[1897]: time="2026-01-23T17:31:16.695690302Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:16.695852 containerd[1897]: time="2026-01-23T17:31:16.695786722Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:16.703623 containerd[1897]: time="2026-01-23T17:31:16.695817478Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:31:16.703623 containerd[1897]: time="2026-01-23T17:31:16.701109586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:31:16.703623 containerd[1897]: time="2026-01-23T17:31:16.701179378Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:31:16.709039 containerd[1897]: time="2026-01-23T17:31:16.707105650Z" level=info msg="runtime interface created" Jan 23 17:31:16.709039 containerd[1897]: time="2026-01-23T17:31:16.708842770Z" level=info msg="created NRI interface" Jan 23 17:31:16.709633 containerd[1897]: time="2026-01-23T17:31:16.708889522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:31:16.710212 containerd[1897]: time="2026-01-23T17:31:16.709605334Z" level=info msg="Connect containerd service" Jan 23 17:31:16.710515 containerd[1897]: time="2026-01-23T17:31:16.710367478Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:31:16.716059 containerd[1897]: time="2026-01-23T17:31:16.715456030Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:31:16.757837 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3670 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 17:31:16.857987 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3670 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 23 17:31:16.941346 locksmithd[1927]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:31:16.959592 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3670 INFO [Registrar] Starting registrar module Jan 23 17:31:17.010622 polkitd[2033]: Started polkitd version 126 Jan 23 17:31:17.057770 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3809 INFO [EC2Identity] Checking disk for registration info Jan 23 17:31:17.110480 polkitd[2033]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 17:31:17.128682 polkitd[2033]: Loading rules from directory /run/polkit-1/rules.d Jan 23 17:31:17.128816 polkitd[2033]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:31:17.129464 polkitd[2033]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 17:31:17.129518 polkitd[2033]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:31:17.129645 polkitd[2033]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 17:31:17.141445 polkitd[2033]: Finished loading, compiling and executing 2 rules Jan 23 17:31:17.142494 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 17:31:17.153541 dbus-daemon[1850]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 17:31:17.159972 polkitd[2033]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 17:31:17.160991 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3810 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 17:31:17.219170 systemd-hostnamed[1925]: Hostname set to (transient) Jan 23 17:31:17.219396 systemd-resolved[1458]: System hostname changed to 'ip-172-31-16-139'. Jan 23 17:31:17.262226 amazon-ssm-agent[1921]: 2026-01-23 17:31:16.3810 INFO [EC2Identity] Generating registration keypair Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373183149Z" level=info msg="Start subscribing containerd event" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373295229Z" level=info msg="Start recovering state" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373454409Z" level=info msg="Start event monitor" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373483365Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373501497Z" level=info msg="Start streaming server" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373520745Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373541121Z" level=info msg="runtime interface starting up..." Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373602993Z" level=info msg="starting plugins..." Jan 23 17:31:17.377439 containerd[1897]: time="2026-01-23T17:31:17.373636101Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:31:17.382193 containerd[1897]: time="2026-01-23T17:31:17.375824157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:31:17.382193 containerd[1897]: time="2026-01-23T17:31:17.378339129Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:31:17.378825 systemd[1]: Started containerd.service - containerd container runtime. 
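
The "no network config found in /etc/cni/net.d" error logged by containerd just above (17:31:16.715) is expected at this stage: the CRI plugin defers pod networking until a CNI configuration file appears, which a network add-on normally installs later, and the "cni network conf syncer" started below picks it up automatically. Purely as an illustration, a minimal hand-written conflist (hypothetical file name and pod subnet, not anything present on this node) would look roughly like:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.85.0.0/16"
          }
        },
        { "type": "loopback" }
      ]
    }

The name and subnet above are placeholders; once any valid conflist lands in /etc/cni/net.d, containerd reloads it without a restart.
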
Jan 23 17:31:17.382638 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.3820 INFO [EC2Identity] Checking write access before registering Jan 23 17:31:17.382874 containerd[1897]: time="2026-01-23T17:31:17.382826997Z" level=info msg="containerd successfully booted in 0.891229s" Jan 23 17:31:17.433304 amazon-ssm-agent[1921]: 2026/01/23 17:31:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:17.433304 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:31:17.433537 amazon-ssm-agent[1921]: 2026/01/23 17:31:17 processing appconfig overrides Jan 23 17:31:17.463874 sshd_keygen[1919]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.3829 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4330 INFO [EC2Identity] EC2 registration was successful. Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4330 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4331 INFO [CredentialRefresher] credentialRefresher has started Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4331 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4681 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 17:31:17.469368 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4684 INFO [CredentialRefresher] Credentials ready Jan 23 17:31:17.483803 amazon-ssm-agent[1921]: 2026-01-23 17:31:17.4687 INFO [CredentialRefresher] Next credential rotation will be in 29.9999908746 minutes Jan 23 17:31:17.567498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:31:17.577146 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:31:17.594409 tar[1875]: linux-arm64/README.md Jan 23 17:31:17.627806 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:31:17.630712 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:31:17.639039 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:31:17.643154 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:31:17.681684 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:31:17.692312 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:31:17.700962 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 17:31:17.712119 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:31:18.072628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:18.076403 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:31:18.079351 systemd[1]: Startup finished in 4.226s (kernel) + 12.177s (initrd) + 12.978s (userspace) = 29.381s. 
Jan 23 17:31:18.090159 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:18.496764 amazon-ssm-agent[1921]: 2026-01-23 17:31:18.4960 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 17:31:18.597675 amazon-ssm-agent[1921]: 2026-01-23 17:31:18.5517 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2144) started Jan 23 17:31:18.698932 amazon-ssm-agent[1921]: 2026-01-23 17:31:18.5518 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 17:31:18.902684 kubelet[2133]: E0123 17:31:18.902603 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:18.906945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:18.907296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:18.909141 systemd[1]: kubelet.service: Consumed 1.393s CPU time, 248.9M memory peak. Jan 23 17:31:22.018844 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:31:22.021954 systemd[1]: Started sshd@0-172.31.16.139:22-4.153.228.146:59188.service - OpenSSH per-connection server daemon (4.153.228.146:59188). Jan 23 17:31:22.596592 sshd[2158]: Accepted publickey for core from 4.153.228.146 port 59188 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:31:22.599797 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:22.613090 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:31:22.615088 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:31:22.629338 systemd-logind[1862]: New session 1 of user core. Jan 23 17:31:22.656302 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:31:22.663022 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:31:22.687896 (systemd)[2164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:22.693522 systemd-logind[1862]: New session 2 of user core. Jan 23 17:31:22.972679 systemd[2164]: Queued start job for default target default.target. Jan 23 17:31:22.981704 systemd[2164]: Created slice app.slice - User Application Slice. Jan 23 17:31:22.981774 systemd[2164]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 17:31:22.981805 systemd[2164]: Reached target paths.target - Paths. Jan 23 17:31:22.981896 systemd[2164]: Reached target timers.target - Timers. Jan 23 17:31:22.984656 systemd[2164]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:31:22.986817 systemd[2164]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 17:31:23.020691 systemd[2164]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:31:23.020948 systemd[2164]: Reached target sockets.target - Sockets. 
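
The kubelet crash-loop recorded here (and repeated at 17:31:29, 17:31:40 and 17:31:50 below) is the normal pre-bootstrap state on a kubeadm-style node: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init/join, so the unit exits with status 1 and systemd keeps scheduling restarts. For reference only, a minimal KubeletConfiguration of the kind that eventually lives at that path looks roughly like the sketch below; the cgroupDriver matches the systemd driver the CRI runtime advertises later in this log, and the static pod path matches the one the kubelet registers at 17:31:58. This is an illustrative sketch, not the file kubeadm actually generated here.

    # /var/lib/kubelet/config.yaml -- illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches "Using cgroup driver setting received from the CRI runtime"
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local       # assumption: kubeadm default
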
Jan 23 17:31:23.024299 systemd[2164]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 17:31:23.024581 systemd[2164]: Reached target basic.target - Basic System. Jan 23 17:31:23.024755 systemd[2164]: Reached target default.target - Main User Target. Jan 23 17:31:23.024820 systemd[2164]: Startup finished in 320ms. Jan 23 17:31:23.025199 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:31:23.042847 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:31:23.307303 systemd[1]: Started sshd@1-172.31.16.139:22-4.153.228.146:59198.service - OpenSSH per-connection server daemon (4.153.228.146:59198). Jan 23 17:31:23.792621 sshd[2178]: Accepted publickey for core from 4.153.228.146 port 59198 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:31:23.795216 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:23.805627 systemd-logind[1862]: New session 3 of user core. Jan 23 17:31:23.811843 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:31:24.048066 sshd[2182]: Connection closed by 4.153.228.146 port 59198 Jan 23 17:31:24.049000 sshd-session[2178]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:24.056966 systemd[1]: sshd@1-172.31.16.139:22-4.153.228.146:59198.service: Deactivated successfully. Jan 23 17:31:24.061144 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:31:24.063896 systemd-logind[1862]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:31:24.068207 systemd-logind[1862]: Removed session 3. Jan 23 17:31:24.136611 systemd[1]: Started sshd@2-172.31.16.139:22-4.153.228.146:59208.service - OpenSSH per-connection server daemon (4.153.228.146:59208). Jan 23 17:31:24.600619 sshd[2188]: Accepted publickey for core from 4.153.228.146 port 59208 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:31:24.602535 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:24.610734 systemd-logind[1862]: New session 4 of user core. Jan 23 17:31:24.619872 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:31:24.831181 sshd[2192]: Connection closed by 4.153.228.146 port 59208 Jan 23 17:31:24.831983 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:24.839242 systemd-logind[1862]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:31:24.839347 systemd[1]: sshd@2-172.31.16.139:22-4.153.228.146:59208.service: Deactivated successfully. Jan 23 17:31:24.842859 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:31:24.848215 systemd-logind[1862]: Removed session 4. Jan 23 17:31:24.925722 systemd[1]: Started sshd@3-172.31.16.139:22-4.153.228.146:55496.service - OpenSSH per-connection server daemon (4.153.228.146:55496). Jan 23 17:31:25.383175 sshd[2198]: Accepted publickey for core from 4.153.228.146 port 55496 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:31:25.385818 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:25.395629 systemd-logind[1862]: New session 5 of user core. Jan 23 17:31:25.406853 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 17:31:25.625614 sshd[2202]: Connection closed by 4.153.228.146 port 55496 Jan 23 17:31:25.626804 sshd-session[2198]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:25.634179 systemd-logind[1862]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:31:25.635099 systemd[1]: sshd@3-172.31.16.139:22-4.153.228.146:55496.service: Deactivated successfully. Jan 23 17:31:25.639889 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:31:25.645034 systemd-logind[1862]: Removed session 5. Jan 23 17:31:25.727585 systemd[1]: Started sshd@4-172.31.16.139:22-4.153.228.146:55512.service - OpenSSH per-connection server daemon (4.153.228.146:55512). Jan 23 17:31:26.211642 sshd[2208]: Accepted publickey for core from 4.153.228.146 port 55512 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:31:26.214150 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:26.223635 systemd-logind[1862]: New session 6 of user core. Jan 23 17:31:26.230838 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:31:26.484505 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:31:26.485185 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:31:27.693021 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 17:31:27.723316 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:31:28.904660 dockerd[2232]: time="2026-01-23T17:31:28.904250357Z" level=info msg="Starting up" Jan 23 17:31:28.907619 dockerd[2232]: time="2026-01-23T17:31:28.907570032Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:31:28.928845 dockerd[2232]: time="2026-01-23T17:31:28.928678429Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:31:28.969254 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3723671104-merged.mount: Deactivated successfully. Jan 23 17:31:28.972209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:31:28.975503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:29.014304 systemd[1]: var-lib-docker-metacopy\x2dcheck3927735464-merged.mount: Deactivated successfully. Jan 23 17:31:29.036176 dockerd[2232]: time="2026-01-23T17:31:29.036123668Z" level=info msg="Loading containers: start." Jan 23 17:31:29.054588 kernel: Initializing XFRM netlink socket Jan 23 17:31:29.435507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:31:29.455042 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:29.535468 kubelet[2317]: E0123 17:31:29.535375 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:29.540910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:29.541234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:29.541818 systemd[1]: kubelet.service: Consumed 333ms CPU time, 107.5M memory peak. Jan 23 17:31:29.665860 (udev-worker)[2256]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:31:29.743474 systemd-networkd[1697]: docker0: Link UP Jan 23 17:31:29.757922 dockerd[2232]: time="2026-01-23T17:31:29.757861349Z" level=info msg="Loading containers: done." Jan 23 17:31:29.795824 dockerd[2232]: time="2026-01-23T17:31:29.795766455Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:31:29.796284 dockerd[2232]: time="2026-01-23T17:31:29.796252092Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:31:29.796831 dockerd[2232]: time="2026-01-23T17:31:29.796747804Z" level=info msg="Initializing buildkit" Jan 23 17:31:29.850245 dockerd[2232]: time="2026-01-23T17:31:29.849803599Z" level=info msg="Completed buildkit initialization" Jan 23 17:31:29.864229 dockerd[2232]: time="2026-01-23T17:31:29.864163895Z" level=info msg="Daemon has completed initialization" Jan 23 17:31:29.865510 dockerd[2232]: time="2026-01-23T17:31:29.864694833Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:31:29.865030 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:31:31.844300 containerd[1897]: time="2026-01-23T17:31:31.844113187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 17:31:32.604463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775986421.mount: Deactivated successfully. 
Jan 23 17:31:33.903066 containerd[1897]: time="2026-01-23T17:31:33.902979245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:33.906232 containerd[1897]: time="2026-01-23T17:31:33.906144102Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22975018" Jan 23 17:31:33.907135 containerd[1897]: time="2026-01-23T17:31:33.907074069Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:33.913220 containerd[1897]: time="2026-01-23T17:31:33.913146259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:33.916261 containerd[1897]: time="2026-01-23T17:31:33.916017875Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.071846444s" Jan 23 17:31:33.916261 containerd[1897]: time="2026-01-23T17:31:33.916079848Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 23 17:31:33.917138 containerd[1897]: time="2026-01-23T17:31:33.917076429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 17:31:35.467031 containerd[1897]: time="2026-01-23T17:31:35.465358421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:35.467031 containerd[1897]: time="2026-01-23T17:31:35.466981294Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 23 17:31:35.468172 containerd[1897]: time="2026-01-23T17:31:35.468130823Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:35.472771 containerd[1897]: time="2026-01-23T17:31:35.472709304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:35.474911 containerd[1897]: time="2026-01-23T17:31:35.474851588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.557566511s" Jan 23 17:31:35.475022 containerd[1897]: time="2026-01-23T17:31:35.474908080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 23 17:31:35.475511 
containerd[1897]: time="2026-01-23T17:31:35.475474581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 17:31:36.956232 containerd[1897]: time="2026-01-23T17:31:36.956141765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:36.958950 containerd[1897]: time="2026-01-23T17:31:36.958878520Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Jan 23 17:31:36.959819 containerd[1897]: time="2026-01-23T17:31:36.959761890Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:36.965210 containerd[1897]: time="2026-01-23T17:31:36.965128510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:36.967284 containerd[1897]: time="2026-01-23T17:31:36.966933355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.491213701s" Jan 23 17:31:36.967284 containerd[1897]: time="2026-01-23T17:31:36.966991406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 23 17:31:36.967664 containerd[1897]: time="2026-01-23T17:31:36.967596671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 17:31:38.249005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234867189.mount: Deactivated successfully. 
Jan 23 17:31:38.646930 containerd[1897]: time="2026-01-23T17:31:38.646873941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:38.649847 containerd[1897]: time="2026-01-23T17:31:38.649779200Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=12960247" Jan 23 17:31:38.651364 containerd[1897]: time="2026-01-23T17:31:38.651276915Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:38.654407 containerd[1897]: time="2026-01-23T17:31:38.654359121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:38.655799 containerd[1897]: time="2026-01-23T17:31:38.655583109Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.687934299s" Jan 23 17:31:38.655799 containerd[1897]: time="2026-01-23T17:31:38.655648056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 23 17:31:38.656253 containerd[1897]: time="2026-01-23T17:31:38.656210299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 17:31:39.245873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169294273.mount: Deactivated successfully. Jan 23 17:31:39.575898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:31:39.579900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:39.960126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:39.974066 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:40.064236 kubelet[2582]: E0123 17:31:40.064154 2582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:40.071005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:40.071495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:40.075187 systemd[1]: kubelet.service: Consumed 327ms CPU time, 106.3M memory peak. 
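
Each kubelet start attempt above also logs "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" (docker logs the same for its DOCKER_* variables). These notices are harmless: systemd expands an undefined variable to an empty string. If extra kubelet flags were wanted on a node like this, one hypothetical way to supply them is a unit drop-in; the file name and the flag below are placeholders, not something configured on this host.

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf  (hypothetical drop-in)
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=172.31.16.139"

A drop-in like this takes effect after systemctl daemon-reload and a kubelet restart.
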
Jan 23 17:31:40.662374 containerd[1897]: time="2026-01-23T17:31:40.662286889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:40.672850 containerd[1897]: time="2026-01-23T17:31:40.672724871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575910" Jan 23 17:31:40.676372 containerd[1897]: time="2026-01-23T17:31:40.676142466Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:40.685005 containerd[1897]: time="2026-01-23T17:31:40.684875585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:40.687584 containerd[1897]: time="2026-01-23T17:31:40.687488957Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.030947636s" Jan 23 17:31:40.687947 containerd[1897]: time="2026-01-23T17:31:40.687883032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 23 17:31:40.689038 containerd[1897]: time="2026-01-23T17:31:40.688767218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 17:31:41.324201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949491043.mount: Deactivated successfully. 
Jan 23 17:31:41.340320 containerd[1897]: time="2026-01-23T17:31:41.339214242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:41.341401 containerd[1897]: time="2026-01-23T17:31:41.341313228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 23 17:31:41.344580 containerd[1897]: time="2026-01-23T17:31:41.344475218Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:41.350684 containerd[1897]: time="2026-01-23T17:31:41.350621927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:41.352220 containerd[1897]: time="2026-01-23T17:31:41.352177477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 663.355411ms" Jan 23 17:31:41.352377 containerd[1897]: time="2026-01-23T17:31:41.352349651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 23 17:31:41.353237 containerd[1897]: time="2026-01-23T17:31:41.353175690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 17:31:41.905621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366067802.mount: Deactivated successfully. Jan 23 17:31:46.194339 containerd[1897]: time="2026-01-23T17:31:46.194250668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:46.196613 containerd[1897]: time="2026-01-23T17:31:46.196039225Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Jan 23 17:31:46.198754 containerd[1897]: time="2026-01-23T17:31:46.198683818Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:46.204596 containerd[1897]: time="2026-01-23T17:31:46.204387769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:46.206950 containerd[1897]: time="2026-01-23T17:31:46.206447054Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.853215233s" Jan 23 17:31:46.206950 containerd[1897]: time="2026-01-23T17:31:46.206501207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 17:31:47.236182 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 23 17:31:50.075858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:31:50.081903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:50.431862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:50.445976 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:50.515971 kubelet[2687]: E0123 17:31:50.515910 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:50.520399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:50.521639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:50.522500 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.6M memory peak. Jan 23 17:31:55.593917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:55.594247 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.6M memory peak. Jan 23 17:31:55.598016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:55.666849 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-6.scope)... Jan 23 17:31:55.666882 systemd[1]: Reloading... Jan 23 17:31:55.931598 zram_generator::config[2757]: No configuration found. Jan 23 17:31:56.399722 systemd[1]: Reloading finished in 732 ms. Jan 23 17:31:56.491616 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:31:56.491987 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:31:56.492808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:56.492885 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95.1M memory peak. Jan 23 17:31:56.496617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:56.862907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:56.885057 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:31:56.953568 kubelet[2811]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:31:56.954045 kubelet[2811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:31:56.954277 kubelet[2811]: I0123 17:31:56.954231 2811 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:31:58.190886 kubelet[2811]: I0123 17:31:58.190678 2811 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:31:58.190886 kubelet[2811]: I0123 17:31:58.190725 2811 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:31:58.193234 kubelet[2811]: I0123 17:31:58.193204 2811 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:31:58.194570 kubelet[2811]: I0123 17:31:58.193343 2811 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:31:58.194570 kubelet[2811]: I0123 17:31:58.193788 2811 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:31:58.206738 kubelet[2811]: E0123 17:31:58.206691 2811 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:31:58.210605 kubelet[2811]: I0123 17:31:58.210542 2811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:31:58.223169 kubelet[2811]: I0123 17:31:58.223137 2811 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:31:58.228998 kubelet[2811]: I0123 17:31:58.228966 2811 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 17:31:58.229594 kubelet[2811]: I0123 17:31:58.229499 2811 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:31:58.229991 kubelet[2811]: I0123 17:31:58.229720 2811 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:31:58.230208 kubelet[2811]: I0123 17:31:58.230187 2811 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:31:58.230306 kubelet[2811]: I0123 17:31:58.230289 2811 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 17:31:58.230564 kubelet[2811]: I0123 17:31:58.230523 2811 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 17:31:58.237864 kubelet[2811]: I0123 17:31:58.237830 2811 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:31:58.240429 kubelet[2811]: I0123 17:31:58.240400 2811 kubelet.go:475] "Attempting to sync node with API server" Jan 23 17:31:58.240683 kubelet[2811]: I0123 17:31:58.240564 2811 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:31:58.241305 kubelet[2811]: E0123 17:31:58.241244 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-139&limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:31:58.241860 kubelet[2811]: I0123 17:31:58.241837 2811 kubelet.go:387] "Adding apiserver pod source" Jan 23 17:31:58.241988 kubelet[2811]: I0123 17:31:58.241969 2811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:31:58.244517 kubelet[2811]: E0123 17:31:58.244367 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:31:58.245180 kubelet[2811]: I0123 17:31:58.245150 2811 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:31:58.246398 kubelet[2811]: I0123 17:31:58.246365 2811 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:31:58.246599 kubelet[2811]: I0123 17:31:58.246577 2811 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 17:31:58.246745 kubelet[2811]: W0123 17:31:58.246726 2811 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:31:58.251162 kubelet[2811]: I0123 17:31:58.251131 2811 server.go:1262] "Started kubelet" Jan 23 17:31:58.255323 kubelet[2811]: I0123 17:31:58.254219 2811 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:31:58.255818 kubelet[2811]: I0123 17:31:58.255752 2811 server.go:310] "Adding debug handlers to kubelet server" Jan 23 17:31:58.257249 kubelet[2811]: I0123 17:31:58.257163 2811 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:31:58.257457 kubelet[2811]: I0123 17:31:58.257432 2811 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:31:58.258040 kubelet[2811]: I0123 17:31:58.258004 2811 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:31:58.260900 kubelet[2811]: E0123 17:31:58.258386 2811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.139:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-139.188d6c7e47b34d4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-139,UID:ip-172-31-16-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-139,},FirstTimestamp:2026-01-23 17:31:58.251085134 +0000 UTC m=+1.359783881,LastTimestamp:2026-01-23 17:31:58.251085134 +0000 UTC m=+1.359783881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-139,}" Jan 23 17:31:58.265593 kubelet[2811]: I0123 17:31:58.264853 2811 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:31:58.265593 kubelet[2811]: I0123 17:31:58.264015 2811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:31:58.268501 kubelet[2811]: I0123 17:31:58.268462 2811 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:31:58.272479 kubelet[2811]: I0123 17:31:58.272442 2811 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:31:58.272745 kubelet[2811]: I0123 17:31:58.272721 2811 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:31:58.273527 kubelet[2811]: E0123 17:31:58.273460 2811 reflector.go:205] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://172.31.16.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:31:58.275682 kubelet[2811]: E0123 17:31:58.274066 2811 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:31:58.275682 kubelet[2811]: E0123 17:31:58.274779 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-139\" not found" Jan 23 17:31:58.275682 kubelet[2811]: E0123 17:31:58.274947 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-139?timeout=10s\": dial tcp 172.31.16.139:6443: connect: connection refused" interval="200ms" Jan 23 17:31:58.276788 kubelet[2811]: I0123 17:31:58.276746 2811 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:31:58.279148 kubelet[2811]: I0123 17:31:58.279113 2811 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:31:58.279330 kubelet[2811]: I0123 17:31:58.279311 2811 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:31:58.316473 kubelet[2811]: I0123 17:31:58.316415 2811 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 17:31:58.318888 kubelet[2811]: I0123 17:31:58.318850 2811 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 17:31:58.319040 kubelet[2811]: I0123 17:31:58.319022 2811 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:31:58.319196 kubelet[2811]: I0123 17:31:58.319177 2811 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:31:58.319368 kubelet[2811]: E0123 17:31:58.319339 2811 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:31:58.330051 kubelet[2811]: E0123 17:31:58.329998 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:31:58.331162 kubelet[2811]: I0123 17:31:58.331125 2811 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:31:58.331337 kubelet[2811]: I0123 17:31:58.331316 2811 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:31:58.331452 kubelet[2811]: I0123 17:31:58.331433 2811 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:31:58.336771 kubelet[2811]: I0123 17:31:58.336741 2811 policy_none.go:49] "None policy: Start" Jan 23 17:31:58.336949 kubelet[2811]: I0123 17:31:58.336930 2811 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:31:58.337057 kubelet[2811]: I0123 17:31:58.337036 2811 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:31:58.341364 kubelet[2811]: I0123 17:31:58.341338 2811 policy_none.go:47] "Start" Jan 23 17:31:58.349959 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:31:58.375599 kubelet[2811]: E0123 17:31:58.374949 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-139\" not found" Jan 23 17:31:58.376212 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:31:58.384364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:31:58.398930 kubelet[2811]: E0123 17:31:58.398223 2811 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:31:58.398930 kubelet[2811]: I0123 17:31:58.398522 2811 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:31:58.398930 kubelet[2811]: I0123 17:31:58.398568 2811 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:31:58.400010 kubelet[2811]: I0123 17:31:58.399983 2811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:31:58.403068 kubelet[2811]: E0123 17:31:58.403019 2811 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:31:58.403221 kubelet[2811]: E0123 17:31:58.403091 2811 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-139\" not found" Jan 23 17:31:58.444223 systemd[1]: Created slice kubepods-burstable-podc00d797e0bff1949ba770a27b3a600dd.slice - libcontainer container kubepods-burstable-podc00d797e0bff1949ba770a27b3a600dd.slice. 
Jan 23 17:31:58.458285 kubelet[2811]: E0123 17:31:58.458181 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:31:58.467136 systemd[1]: Created slice kubepods-burstable-pod0174b967d98f787fb0ed50fb63d6eab3.slice - libcontainer container kubepods-burstable-pod0174b967d98f787fb0ed50fb63d6eab3.slice. Jan 23 17:31:58.471103 kubelet[2811]: E0123 17:31:58.471054 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:31:58.474060 kubelet[2811]: I0123 17:31:58.474008 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:31:58.474191 kubelet[2811]: I0123 17:31:58.474070 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:31:58.474191 kubelet[2811]: I0123 17:31:58.474112 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-ca-certs\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:31:58.474191 kubelet[2811]: I0123 17:31:58.474149 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:31:58.474191 kubelet[2811]: I0123 17:31:58.474182 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:31:58.474402 kubelet[2811]: I0123 17:31:58.474217 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:31:58.474402 kubelet[2811]: I0123 17:31:58.474257 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9cddbb971aeb1a4df3322205de41d5b5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-139\" (UID: \"9cddbb971aeb1a4df3322205de41d5b5\") " pod="kube-system/kube-scheduler-ip-172-31-16-139" Jan 23 17:31:58.474402 
kubelet[2811]: I0123 17:31:58.474291 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:31:58.474402 kubelet[2811]: I0123 17:31:58.474325 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:31:58.476379 systemd[1]: Created slice kubepods-burstable-pod9cddbb971aeb1a4df3322205de41d5b5.slice - libcontainer container kubepods-burstable-pod9cddbb971aeb1a4df3322205de41d5b5.slice. Jan 23 17:31:58.478182 kubelet[2811]: E0123 17:31:58.478123 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-139?timeout=10s\": dial tcp 172.31.16.139:6443: connect: connection refused" interval="400ms" Jan 23 17:31:58.481078 kubelet[2811]: E0123 17:31:58.481036 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:31:58.500923 kubelet[2811]: I0123 17:31:58.500802 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:31:58.501916 kubelet[2811]: E0123 17:31:58.501864 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.139:6443/api/v1/nodes\": dial tcp 172.31.16.139:6443: connect: connection refused" node="ip-172-31-16-139" Jan 23 17:31:58.705531 kubelet[2811]: I0123 17:31:58.705390 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:31:58.707198 kubelet[2811]: E0123 17:31:58.707144 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.139:6443/api/v1/nodes\": dial tcp 172.31.16.139:6443: connect: connection refused" node="ip-172-31-16-139" Jan 23 17:31:58.764586 containerd[1897]: time="2026-01-23T17:31:58.764509020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-139,Uid:c00d797e0bff1949ba770a27b3a600dd,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.777186 containerd[1897]: time="2026-01-23T17:31:58.777124369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-139,Uid:0174b967d98f787fb0ed50fb63d6eab3,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.787607 containerd[1897]: time="2026-01-23T17:31:58.787335617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-139,Uid:9cddbb971aeb1a4df3322205de41d5b5,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.879768 kubelet[2811]: E0123 17:31:58.879710 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-139?timeout=10s\": dial tcp 172.31.16.139:6443: connect: connection refused" interval="800ms" Jan 23 17:31:59.109597 kubelet[2811]: I0123 17:31:59.109434 
2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:31:59.110137 kubelet[2811]: E0123 17:31:59.110089 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.139:6443/api/v1/nodes\": dial tcp 172.31.16.139:6443: connect: connection refused" node="ip-172-31-16-139" Jan 23 17:31:59.128858 kubelet[2811]: E0123 17:31:59.128801 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-139&limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:31:59.262062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328079133.mount: Deactivated successfully. Jan 23 17:31:59.277617 containerd[1897]: time="2026-01-23T17:31:59.277455616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.286309 containerd[1897]: time="2026-01-23T17:31:59.286220927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.288152 containerd[1897]: time="2026-01-23T17:31:59.288079601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.291579 containerd[1897]: time="2026-01-23T17:31:59.290817699Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.294449 containerd[1897]: time="2026-01-23T17:31:59.294404373Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.297443 containerd[1897]: time="2026-01-23T17:31:59.297356444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.299833 containerd[1897]: time="2026-01-23T17:31:59.299635688Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.302087 containerd[1897]: time="2026-01-23T17:31:59.302023202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.304592 containerd[1897]: time="2026-01-23T17:31:59.304294350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.055185ms" Jan 23 17:31:59.308962 containerd[1897]: time="2026-01-23T17:31:59.308886468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", 
repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.630565ms" Jan 23 17:31:59.317068 containerd[1897]: time="2026-01-23T17:31:59.316970832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.769415ms" Jan 23 17:31:59.360329 kubelet[2811]: E0123 17:31:59.358269 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:31:59.386680 containerd[1897]: time="2026-01-23T17:31:59.386608752Z" level=info msg="connecting to shim 97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b" address="unix:///run/containerd/s/10a9c125e634484f81c6d1e45b55024a1f40240b51e7a225aa7db689ebfbd6b9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.395412 containerd[1897]: time="2026-01-23T17:31:59.395328150Z" level=info msg="connecting to shim a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840" address="unix:///run/containerd/s/9cf3c48112d4cc7423a4500d028d8613bcd1ea051840baefd58f23d0758aeabd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.413578 containerd[1897]: time="2026-01-23T17:31:59.413439594Z" level=info msg="connecting to shim a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12" address="unix:///run/containerd/s/572b2cbb78ab9b6c31756b9417f929a112ffb1bb56af73cd919f1ded673b240f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.453417 systemd[1]: Started cri-containerd-97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b.scope - libcontainer container 97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b. Jan 23 17:31:59.470811 systemd[1]: Started cri-containerd-a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840.scope - libcontainer container a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840. Jan 23 17:31:59.500980 systemd[1]: Started cri-containerd-a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12.scope - libcontainer container a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12. 
Jan 23 17:31:59.610207 kubelet[2811]: E0123 17:31:59.610137 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:31:59.613120 containerd[1897]: time="2026-01-23T17:31:59.611603947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-139,Uid:0174b967d98f787fb0ed50fb63d6eab3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840\"" Jan 23 17:31:59.627191 containerd[1897]: time="2026-01-23T17:31:59.626969820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-139,Uid:c00d797e0bff1949ba770a27b3a600dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b\"" Jan 23 17:31:59.636212 containerd[1897]: time="2026-01-23T17:31:59.636135323Z" level=info msg="CreateContainer within sandbox \"a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:31:59.640978 containerd[1897]: time="2026-01-23T17:31:59.640910710Z" level=info msg="CreateContainer within sandbox \"97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:31:59.658388 containerd[1897]: time="2026-01-23T17:31:59.658311509Z" level=info msg="Container 77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.662080 containerd[1897]: time="2026-01-23T17:31:59.661917122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-139,Uid:9cddbb971aeb1a4df3322205de41d5b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12\"" Jan 23 17:31:59.672140 containerd[1897]: time="2026-01-23T17:31:59.672092616Z" level=info msg="CreateContainer within sandbox \"a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:31:59.677034 containerd[1897]: time="2026-01-23T17:31:59.676893190Z" level=info msg="Container 6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.681430 kubelet[2811]: E0123 17:31:59.681366 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-139?timeout=10s\": dial tcp 172.31.16.139:6443: connect: connection refused" interval="1.6s" Jan 23 17:31:59.685214 containerd[1897]: time="2026-01-23T17:31:59.685087371Z" level=info msg="CreateContainer within sandbox \"a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3\"" Jan 23 17:31:59.686625 containerd[1897]: time="2026-01-23T17:31:59.686425817Z" level=info msg="StartContainer for \"77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3\"" Jan 23 17:31:59.690927 containerd[1897]: 
time="2026-01-23T17:31:59.690858956Z" level=info msg="connecting to shim 77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3" address="unix:///run/containerd/s/9cf3c48112d4cc7423a4500d028d8613bcd1ea051840baefd58f23d0758aeabd" protocol=ttrpc version=3 Jan 23 17:31:59.694047 containerd[1897]: time="2026-01-23T17:31:59.693992004Z" level=info msg="CreateContainer within sandbox \"97c5db23d8e348c97a4df59bf3e37a8e56f651736fcfa641e6cc092b0b37852b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc\"" Jan 23 17:31:59.696943 containerd[1897]: time="2026-01-23T17:31:59.696828825Z" level=info msg="StartContainer for \"6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc\"" Jan 23 17:31:59.699037 kubelet[2811]: E0123 17:31:59.698981 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:31:59.702188 containerd[1897]: time="2026-01-23T17:31:59.702134875Z" level=info msg="connecting to shim 6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc" address="unix:///run/containerd/s/10a9c125e634484f81c6d1e45b55024a1f40240b51e7a225aa7db689ebfbd6b9" protocol=ttrpc version=3 Jan 23 17:31:59.704599 containerd[1897]: time="2026-01-23T17:31:59.704323036Z" level=info msg="Container 650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.725210 containerd[1897]: time="2026-01-23T17:31:59.725135205Z" level=info msg="CreateContainer within sandbox \"a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589\"" Jan 23 17:31:59.726624 containerd[1897]: time="2026-01-23T17:31:59.726579619Z" level=info msg="StartContainer for \"650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589\"" Jan 23 17:31:59.738305 containerd[1897]: time="2026-01-23T17:31:59.737845643Z" level=info msg="connecting to shim 650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589" address="unix:///run/containerd/s/572b2cbb78ab9b6c31756b9417f929a112ffb1bb56af73cd919f1ded673b240f" protocol=ttrpc version=3 Jan 23 17:31:59.738235 systemd[1]: Started cri-containerd-77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3.scope - libcontainer container 77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3. Jan 23 17:31:59.765444 systemd[1]: Started cri-containerd-6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc.scope - libcontainer container 6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc. Jan 23 17:31:59.799900 systemd[1]: Started cri-containerd-650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589.scope - libcontainer container 650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589. 
Jan 23 17:31:59.918008 kubelet[2811]: I0123 17:31:59.917429 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:31:59.919725 kubelet[2811]: E0123 17:31:59.919448 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.139:6443/api/v1/nodes\": dial tcp 172.31.16.139:6443: connect: connection refused" node="ip-172-31-16-139" Jan 23 17:31:59.939755 containerd[1897]: time="2026-01-23T17:31:59.939673731Z" level=info msg="StartContainer for \"77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3\" returns successfully" Jan 23 17:31:59.958910 containerd[1897]: time="2026-01-23T17:31:59.958037588Z" level=info msg="StartContainer for \"6b2f50020f1f2ab3581774db1ccf8c7c3c8abfa008aebbb5f2a78f73abee29bc\" returns successfully" Jan 23 17:31:59.993672 containerd[1897]: time="2026-01-23T17:31:59.993622563Z" level=info msg="StartContainer for \"650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589\" returns successfully" Jan 23 17:32:00.356925 kubelet[2811]: E0123 17:32:00.356486 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:00.363129 kubelet[2811]: E0123 17:32:00.363079 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:00.370960 kubelet[2811]: E0123 17:32:00.370913 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:00.697772 update_engine[1866]: I20260123 17:32:00.697602 1866 update_attempter.cc:509] Updating boot flags... 
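Editor's note: the kubelet messages in this journal carry klog's header (severity letter, MMDD, wall time, PID, source file:line, then the message). A small parsing sketch, assuming only the layout visible above; the regex is illustrative rather than an official klog parser:

```python
# Parse a klog-style header such as: E0123 17:32:00.356486 2811 kubelet.go:3215] "..."
# The regex mirrors the format seen in this journal; it is not an official parser.
import re

KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<src>[\w./_-]+:\d+)\]\s*(?P<msg>.*)"
)

line = 'E0123 17:31:58.478123 2811 controller.go:145] "Failed to ensure lease exists, will retry"'
m = KLOG.match(line)
if m:
    print(m.group("sev"), m.group("src"), m.group("msg"))
```

Note that the update_engine entry above uses a four-digit year in the same position (I20260123 ...), so a general-purpose parser would need to accept both header variants.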
Jan 23 17:32:01.386588 kubelet[2811]: E0123 17:32:01.385932 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:01.390243 kubelet[2811]: E0123 17:32:01.388951 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:01.527520 kubelet[2811]: I0123 17:32:01.526825 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:32:02.381252 kubelet[2811]: E0123 17:32:02.381000 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:03.619416 kubelet[2811]: E0123 17:32:03.619278 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:04.012221 kubelet[2811]: E0123 17:32:04.011762 2811 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-139\" not found" node="ip-172-31-16-139" Jan 23 17:32:04.045952 kubelet[2811]: E0123 17:32:04.045808 2811 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-139.188d6c7e47b34d4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-139,UID:ip-172-31-16-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-139,},FirstTimestamp:2026-01-23 17:31:58.251085134 +0000 UTC m=+1.359783881,LastTimestamp:2026-01-23 17:31:58.251085134 +0000 UTC m=+1.359783881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-139,}" Jan 23 17:32:04.062576 kubelet[2811]: I0123 17:32:04.060995 2811 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-139" Jan 23 17:32:04.077572 kubelet[2811]: I0123 17:32:04.075613 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:04.130999 kubelet[2811]: E0123 17:32:04.130785 2811 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-139.188d6c7e491196d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-139,UID:ip-172-31-16-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-16-139,},FirstTimestamp:2026-01-23 17:31:58.274041555 +0000 UTC m=+1.382740302,LastTimestamp:2026-01-23 17:31:58.274041555 +0000 UTC m=+1.382740302,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-139,}" Jan 23 17:32:04.230106 kubelet[2811]: E0123 17:32:04.229896 2811 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-139.188d6c7e4c234cf8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-139,UID:ip-172-31-16-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-16-139 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-16-139,},FirstTimestamp:2026-01-23 17:31:58.325533944 +0000 UTC m=+1.434232667,LastTimestamp:2026-01-23 17:31:58.325533944 +0000 UTC m=+1.434232667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-139,}" Jan 23 17:32:04.244464 kubelet[2811]: E0123 17:32:04.244420 2811 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-139\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:04.244781 kubelet[2811]: I0123 17:32:04.244587 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:04.245840 kubelet[2811]: I0123 17:32:04.245770 2811 apiserver.go:52] "Watching apiserver" Jan 23 17:32:04.258502 kubelet[2811]: E0123 17:32:04.258423 2811 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-139\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:04.258991 kubelet[2811]: I0123 17:32:04.258732 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-139" Jan 23 17:32:04.270420 kubelet[2811]: E0123 17:32:04.270234 2811 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-139\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-139" Jan 23 17:32:04.273406 kubelet[2811]: I0123 17:32:04.273319 2811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:32:05.875258 kubelet[2811]: I0123 17:32:05.875212 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:06.556375 systemd[1]: Reload requested from client PID 3275 ('systemctl') (unit session-6.scope)... Jan 23 17:32:06.556407 systemd[1]: Reloading... Jan 23 17:32:06.754597 zram_generator::config[3325]: No configuration found. Jan 23 17:32:07.293597 systemd[1]: Reloading finished in 736 ms. Jan 23 17:32:07.357633 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:07.376730 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:32:07.377335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:07.377488 systemd[1]: kubelet.service: Consumed 2.198s CPU time, 121.5M memory peak. Jan 23 17:32:07.384026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:07.863025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:07.887710 (kubelet)[3382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:32:08.001176 kubelet[3382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
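Editor's note: the "no PriorityClass with name system-node-critical was found" failures above are transient; the built-in system priority classes are normally created by the API server shortly after it comes up, after which the mirror pods can be created (as the later log shows). A quick way to confirm they exist, sketched with kubectl; this check is not part of the log and assumes kubectl is configured for the cluster:

```python
# Illustrative check that the built-in priority classes exist on this cluster.
import subprocess

for name in ("system-node-critical", "system-cluster-critical"):
    result = subprocess.run(
        ["kubectl", "get", "priorityclass", name, "-o", "name"],
        capture_output=True, text=True,
    )
    status = "present" if result.returncode == 0 else "missing"
    print(f"{name}: {status}")
```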
Jan 23 17:32:08.002143 kubelet[3382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:32:08.002590 kubelet[3382]: I0123 17:32:08.002064 3382 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:32:08.020754 kubelet[3382]: I0123 17:32:08.020686 3382 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:32:08.020754 kubelet[3382]: I0123 17:32:08.020734 3382 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:32:08.020949 kubelet[3382]: I0123 17:32:08.020792 3382 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:32:08.020949 kubelet[3382]: I0123 17:32:08.020806 3382 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:32:08.021251 kubelet[3382]: I0123 17:32:08.021208 3382 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:32:08.024143 kubelet[3382]: I0123 17:32:08.023998 3382 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:32:08.036569 kubelet[3382]: I0123 17:32:08.035453 3382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:32:08.050380 kubelet[3382]: I0123 17:32:08.050333 3382 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:32:08.058183 kubelet[3382]: I0123 17:32:08.058041 3382 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 17:32:08.058915 kubelet[3382]: I0123 17:32:08.058758 3382 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:32:08.059317 kubelet[3382]: I0123 17:32:08.058800 3382 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:32:08.060579 kubelet[3382]: I0123 17:32:08.059515 3382 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:32:08.060579 kubelet[3382]: I0123 17:32:08.059572 3382 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 17:32:08.060579 kubelet[3382]: I0123 17:32:08.059623 3382 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 17:32:08.061989 kubelet[3382]: I0123 17:32:08.061913 3382 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:08.064090 kubelet[3382]: I0123 17:32:08.062180 3382 kubelet.go:475] "Attempting to sync node with API server" Jan 23 17:32:08.064090 kubelet[3382]: I0123 17:32:08.062203 3382 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:32:08.064090 kubelet[3382]: I0123 17:32:08.062249 3382 kubelet.go:387] "Adding apiserver pod source" Jan 23 17:32:08.064090 kubelet[3382]: I0123 17:32:08.062279 3382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:32:08.068595 kubelet[3382]: I0123 17:32:08.067755 3382 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:32:08.068922 kubelet[3382]: I0123 17:32:08.068890 3382 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:32:08.069052 kubelet[3382]: I0123 17:32:08.069033 3382 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 17:32:08.079591 
kubelet[3382]: I0123 17:32:08.079045 3382 server.go:1262] "Started kubelet" Jan 23 17:32:08.085571 kubelet[3382]: I0123 17:32:08.084037 3382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:32:08.099571 kubelet[3382]: I0123 17:32:08.096677 3382 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:32:08.105611 kubelet[3382]: I0123 17:32:08.105578 3382 server.go:310] "Adding debug handlers to kubelet server" Jan 23 17:32:08.119854 kubelet[3382]: I0123 17:32:08.119676 3382 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:32:08.119854 kubelet[3382]: I0123 17:32:08.119786 3382 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:32:08.124755 kubelet[3382]: I0123 17:32:08.124690 3382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:32:08.134954 kubelet[3382]: I0123 17:32:08.134618 3382 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:32:08.135108 kubelet[3382]: E0123 17:32:08.134974 3382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-139\" not found" Jan 23 17:32:08.135792 kubelet[3382]: I0123 17:32:08.135742 3382 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:32:08.136022 kubelet[3382]: I0123 17:32:08.135967 3382 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:32:08.147023 kubelet[3382]: I0123 17:32:08.146320 3382 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:32:08.166473 kubelet[3382]: I0123 17:32:08.166413 3382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 17:32:08.174923 kubelet[3382]: I0123 17:32:08.174823 3382 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:32:08.185585 kubelet[3382]: I0123 17:32:08.184852 3382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 17:32:08.185585 kubelet[3382]: I0123 17:32:08.184893 3382 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:32:08.185585 kubelet[3382]: I0123 17:32:08.184930 3382 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:32:08.185585 kubelet[3382]: E0123 17:32:08.185016 3382 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:32:08.196928 kubelet[3382]: E0123 17:32:08.196855 3382 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:32:08.199734 kubelet[3382]: I0123 17:32:08.199012 3382 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:32:08.199734 kubelet[3382]: I0123 17:32:08.199049 3382 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:32:08.285716 kubelet[3382]: E0123 17:32:08.285670 3382 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.287887 3382 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.287937 3382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.287974 3382 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288302 3382 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288323 3382 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288380 3382 policy_none.go:49] "None policy: Start" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288402 3382 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288422 3382 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288710 3382 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 17:32:08.289091 kubelet[3382]: I0123 17:32:08.288754 3382 policy_none.go:47] "Start" Jan 23 17:32:08.303473 kubelet[3382]: E0123 17:32:08.303408 3382 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:32:08.305380 kubelet[3382]: I0123 17:32:08.305308 3382 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:32:08.305508 kubelet[3382]: I0123 17:32:08.305372 3382 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:32:08.306334 kubelet[3382]: I0123 17:32:08.305901 3382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:32:08.313304 kubelet[3382]: E0123 17:32:08.312474 3382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:32:08.422451 kubelet[3382]: I0123 17:32:08.422311 3382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-139" Jan 23 17:32:08.439629 kubelet[3382]: I0123 17:32:08.439384 3382 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-139" Jan 23 17:32:08.440064 kubelet[3382]: I0123 17:32:08.440006 3382 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-139" Jan 23 17:32:08.489608 kubelet[3382]: I0123 17:32:08.487191 3382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:08.489608 kubelet[3382]: I0123 17:32:08.487317 3382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-139" Jan 23 17:32:08.489608 kubelet[3382]: I0123 17:32:08.487665 3382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:08.503391 kubelet[3382]: E0123 17:32:08.503336 3382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-139\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:08.539590 kubelet[3382]: I0123 17:32:08.539500 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:08.540072 kubelet[3382]: I0123 17:32:08.539623 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9cddbb971aeb1a4df3322205de41d5b5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-139\" (UID: \"9cddbb971aeb1a4df3322205de41d5b5\") " pod="kube-system/kube-scheduler-ip-172-31-16-139" Jan 23 17:32:08.540072 kubelet[3382]: I0123 17:32:08.539677 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:08.540072 kubelet[3382]: I0123 17:32:08.539715 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:08.540072 kubelet[3382]: I0123 17:32:08.539767 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-ca-certs\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:08.540072 kubelet[3382]: I0123 17:32:08.539806 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:08.540358 kubelet[3382]: I0123 17:32:08.539844 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00d797e0bff1949ba770a27b3a600dd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-139\" (UID: \"c00d797e0bff1949ba770a27b3a600dd\") " pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:08.540358 kubelet[3382]: I0123 17:32:08.539883 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:08.540358 kubelet[3382]: I0123 17:32:08.539930 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0174b967d98f787fb0ed50fb63d6eab3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-139\" (UID: \"0174b967d98f787fb0ed50fb63d6eab3\") " pod="kube-system/kube-controller-manager-ip-172-31-16-139" Jan 23 17:32:09.081597 kubelet[3382]: I0123 17:32:09.081482 3382 apiserver.go:52] "Watching apiserver" Jan 23 17:32:09.136643 kubelet[3382]: I0123 17:32:09.136578 3382 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:32:09.205395 kubelet[3382]: I0123 17:32:09.205066 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-139" podStartSLOduration=4.205044767 podStartE2EDuration="4.205044767s" podCreationTimestamp="2026-01-23 17:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:09.186814715 +0000 UTC m=+1.290275461" watchObservedRunningTime="2026-01-23 17:32:09.205044767 +0000 UTC m=+1.308505477" Jan 23 17:32:09.226289 kubelet[3382]: I0123 17:32:09.226215 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-139" podStartSLOduration=1.226194975 podStartE2EDuration="1.226194975s" podCreationTimestamp="2026-01-23 17:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:09.206894902 +0000 UTC m=+1.310355648" watchObservedRunningTime="2026-01-23 17:32:09.226194975 +0000 UTC m=+1.329655685" Jan 23 17:32:09.245341 kubelet[3382]: I0123 17:32:09.244506 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-139" podStartSLOduration=1.24448802 podStartE2EDuration="1.24448802s" podCreationTimestamp="2026-01-23 17:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:09.228113223 +0000 UTC m=+1.331573981" watchObservedRunningTime="2026-01-23 17:32:09.24448802 +0000 UTC m=+1.347948730" Jan 23 17:32:09.255561 kubelet[3382]: I0123 17:32:09.255044 3382 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:09.268141 kubelet[3382]: E0123 17:32:09.268099 3382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-139\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-139" Jan 23 17:32:10.498791 sudo[2213]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:10.582441 sshd[2212]: Connection closed by 4.153.228.146 port 55512 Jan 23 17:32:10.583396 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:10.591909 systemd[1]: sshd@4-172.31.16.139:22-4.153.228.146:55512.service: Deactivated successfully. Jan 23 17:32:10.596819 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:32:10.598141 systemd[1]: session-6.scope: Consumed 11.423s CPU time, 227.9M memory peak. Jan 23 17:32:10.604073 systemd-logind[1862]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:32:10.607039 systemd-logind[1862]: Removed session 6. Jan 23 17:32:12.250446 kubelet[3382]: I0123 17:32:12.250376 3382 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:32:12.252464 containerd[1897]: time="2026-01-23T17:32:12.252416968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:32:12.254771 kubelet[3382]: I0123 17:32:12.253487 3382 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:32:13.140160 kubelet[3382]: E0123 17:32:13.140085 3382 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-16-139\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-139' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Jan 23 17:32:13.140330 kubelet[3382]: E0123 17:32:13.140200 3382 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-8ccml\" is forbidden: User \"system:node:ip-172-31-16-139\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-139' and this object" podUID="b186194f-5fb7-4e34-8f1c-6c77a15c589d" pod="kube-system/kube-proxy-8ccml" Jan 23 17:32:13.140465 kubelet[3382]: E0123 17:32:13.140352 3382 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-16-139\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-139' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 23 17:32:13.145330 systemd[1]: Created slice kubepods-besteffort-podb186194f_5fb7_4e34_8f1c_6c77a15c589d.slice - libcontainer container kubepods-besteffort-podb186194f_5fb7_4e34_8f1c_6c77a15c589d.slice. 
Jan 23 17:32:13.172163 kubelet[3382]: I0123 17:32:13.171708 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b186194f-5fb7-4e34-8f1c-6c77a15c589d-xtables-lock\") pod \"kube-proxy-8ccml\" (UID: \"b186194f-5fb7-4e34-8f1c-6c77a15c589d\") " pod="kube-system/kube-proxy-8ccml" Jan 23 17:32:13.172163 kubelet[3382]: I0123 17:32:13.171761 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b186194f-5fb7-4e34-8f1c-6c77a15c589d-lib-modules\") pod \"kube-proxy-8ccml\" (UID: \"b186194f-5fb7-4e34-8f1c-6c77a15c589d\") " pod="kube-system/kube-proxy-8ccml" Jan 23 17:32:13.172163 kubelet[3382]: I0123 17:32:13.171799 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstfw\" (UniqueName: \"kubernetes.io/projected/b186194f-5fb7-4e34-8f1c-6c77a15c589d-kube-api-access-sstfw\") pod \"kube-proxy-8ccml\" (UID: \"b186194f-5fb7-4e34-8f1c-6c77a15c589d\") " pod="kube-system/kube-proxy-8ccml" Jan 23 17:32:13.172163 kubelet[3382]: I0123 17:32:13.171840 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b186194f-5fb7-4e34-8f1c-6c77a15c589d-kube-proxy\") pod \"kube-proxy-8ccml\" (UID: \"b186194f-5fb7-4e34-8f1c-6c77a15c589d\") " pod="kube-system/kube-proxy-8ccml" Jan 23 17:32:13.183966 systemd[1]: Created slice kubepods-burstable-pod6e1818a7_10a9_474d_88a6_4ba39ed592ad.slice - libcontainer container kubepods-burstable-pod6e1818a7_10a9_474d_88a6_4ba39ed592ad.slice. Jan 23 17:32:13.272127 kubelet[3382]: I0123 17:32:13.272073 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6e1818a7-10a9-474d-88a6-4ba39ed592ad-cni-plugin\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.272127 kubelet[3382]: I0123 17:32:13.272131 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e1818a7-10a9-474d-88a6-4ba39ed592ad-xtables-lock\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.273630 kubelet[3382]: I0123 17:32:13.272232 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6e1818a7-10a9-474d-88a6-4ba39ed592ad-cni\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.273630 kubelet[3382]: I0123 17:32:13.272269 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6e1818a7-10a9-474d-88a6-4ba39ed592ad-flannel-cfg\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.273630 kubelet[3382]: I0123 17:32:13.272303 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwv7\" (UniqueName: 
\"kubernetes.io/projected/6e1818a7-10a9-474d-88a6-4ba39ed592ad-kube-api-access-5mwv7\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.273630 kubelet[3382]: I0123 17:32:13.272377 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e1818a7-10a9-474d-88a6-4ba39ed592ad-run\") pod \"kube-flannel-ds-n7xbb\" (UID: \"6e1818a7-10a9-474d-88a6-4ba39ed592ad\") " pod="kube-flannel/kube-flannel-ds-n7xbb" Jan 23 17:32:13.497095 containerd[1897]: time="2026-01-23T17:32:13.496759689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n7xbb,Uid:6e1818a7-10a9-474d-88a6-4ba39ed592ad,Namespace:kube-flannel,Attempt:0,}" Jan 23 17:32:13.540619 containerd[1897]: time="2026-01-23T17:32:13.539416530Z" level=info msg="connecting to shim 18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f" address="unix:///run/containerd/s/37208f1e13143d732eb1d7081285ddcccaca0ea0cc146d6993ce3121ed734997" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:13.593002 systemd[1]: Started cri-containerd-18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f.scope - libcontainer container 18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f. Jan 23 17:32:13.666827 containerd[1897]: time="2026-01-23T17:32:13.666756889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n7xbb,Uid:6e1818a7-10a9-474d-88a6-4ba39ed592ad,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\"" Jan 23 17:32:13.671155 containerd[1897]: time="2026-01-23T17:32:13.670717247Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 17:32:14.064211 containerd[1897]: time="2026-01-23T17:32:14.063969024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ccml,Uid:b186194f-5fb7-4e34-8f1c-6c77a15c589d,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:14.094515 containerd[1897]: time="2026-01-23T17:32:14.094427691Z" level=info msg="connecting to shim e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b" address="unix:///run/containerd/s/5f2424dd0bbd8ca158c3d8d9ea875b938512f404131908138b5687765f8051ba" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:14.136913 systemd[1]: Started cri-containerd-e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b.scope - libcontainer container e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b. 
Jan 23 17:32:14.201435 containerd[1897]: time="2026-01-23T17:32:14.201378813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ccml,Uid:b186194f-5fb7-4e34-8f1c-6c77a15c589d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b\"" Jan 23 17:32:14.211379 containerd[1897]: time="2026-01-23T17:32:14.211320472Z" level=info msg="CreateContainer within sandbox \"e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:32:14.225400 containerd[1897]: time="2026-01-23T17:32:14.225351893Z" level=info msg="Container 95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:14.236595 containerd[1897]: time="2026-01-23T17:32:14.236423698Z" level=info msg="CreateContainer within sandbox \"e373d71d126f38abcd2f2e0edd1bef59419e1bd907afbe5559e1fa2633c2af1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5\"" Jan 23 17:32:14.237787 containerd[1897]: time="2026-01-23T17:32:14.237708592Z" level=info msg="StartContainer for \"95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5\"" Jan 23 17:32:14.243723 containerd[1897]: time="2026-01-23T17:32:14.243648416Z" level=info msg="connecting to shim 95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5" address="unix:///run/containerd/s/5f2424dd0bbd8ca158c3d8d9ea875b938512f404131908138b5687765f8051ba" protocol=ttrpc version=3 Jan 23 17:32:14.280164 systemd[1]: Started cri-containerd-95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5.scope - libcontainer container 95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5. Jan 23 17:32:14.429917 containerd[1897]: time="2026-01-23T17:32:14.429629024Z" level=info msg="StartContainer for \"95662d3b3e19525f99ea60761a959890af068006ddcb2a2c9c11c42228479ad5\" returns successfully" Jan 23 17:32:15.047660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820930551.mount: Deactivated successfully. 
Jan 23 17:32:15.138577 containerd[1897]: time="2026-01-23T17:32:15.138310228Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:15.142864 containerd[1897]: time="2026-01-23T17:32:15.142595804Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0" Jan 23 17:32:15.144857 containerd[1897]: time="2026-01-23T17:32:15.144800122Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:15.154317 containerd[1897]: time="2026-01-23T17:32:15.154227490Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:15.157781 containerd[1897]: time="2026-01-23T17:32:15.157531237Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.486263825s" Jan 23 17:32:15.157781 containerd[1897]: time="2026-01-23T17:32:15.157637036Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jan 23 17:32:15.168627 containerd[1897]: time="2026-01-23T17:32:15.168018934Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 17:32:15.185099 containerd[1897]: time="2026-01-23T17:32:15.185043926Z" level=info msg="Container bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:15.193893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188442832.mount: Deactivated successfully. Jan 23 17:32:15.203540 containerd[1897]: time="2026-01-23T17:32:15.203452437Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0\"" Jan 23 17:32:15.211238 containerd[1897]: time="2026-01-23T17:32:15.210843512Z" level=info msg="StartContainer for \"bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0\"" Jan 23 17:32:15.221984 containerd[1897]: time="2026-01-23T17:32:15.221872067Z" level=info msg="connecting to shim bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0" address="unix:///run/containerd/s/37208f1e13143d732eb1d7081285ddcccaca0ea0cc146d6993ce3121ed734997" protocol=ttrpc version=3 Jan 23 17:32:15.273970 systemd[1]: Started cri-containerd-bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0.scope - libcontainer container bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0. 
Jan 23 17:32:15.368482 containerd[1897]: time="2026-01-23T17:32:15.368427581Z" level=info msg="StartContainer for \"bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0\" returns successfully" Jan 23 17:32:15.382086 systemd[1]: cri-containerd-bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0.scope: Deactivated successfully. Jan 23 17:32:15.390046 containerd[1897]: time="2026-01-23T17:32:15.389978736Z" level=info msg="received container exit event container_id:\"bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0\" id:\"bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0\" pid:3722 exited_at:{seconds:1769189535 nanos:389078418}" Jan 23 17:32:15.436010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd1332ce05733f05fa55c92a5a390341b4c25a0d2c1b876fe0a766d5ee3367d0-rootfs.mount: Deactivated successfully. Jan 23 17:32:16.340018 containerd[1897]: time="2026-01-23T17:32:16.339896918Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 17:32:16.363127 kubelet[3382]: I0123 17:32:16.362817 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8ccml" podStartSLOduration=3.362795971 podStartE2EDuration="3.362795971s" podCreationTimestamp="2026-01-23 17:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:15.359019463 +0000 UTC m=+7.462480269" watchObservedRunningTime="2026-01-23 17:32:16.362795971 +0000 UTC m=+8.466256693" Jan 23 17:32:18.776527 containerd[1897]: time="2026-01-23T17:32:18.776446313Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.782182 containerd[1897]: time="2026-01-23T17:32:18.782079415Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=12447072" Jan 23 17:32:18.783275 containerd[1897]: time="2026-01-23T17:32:18.783221400Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.795003 containerd[1897]: time="2026-01-23T17:32:18.794932018Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.797799 containerd[1897]: time="2026-01-23T17:32:18.797749648Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 2.457791824s" Jan 23 17:32:18.797975 containerd[1897]: time="2026-01-23T17:32:18.797945942Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jan 23 17:32:18.806924 containerd[1897]: time="2026-01-23T17:32:18.806875415Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:32:18.824375 containerd[1897]: time="2026-01-23T17:32:18.824307878Z" level=info 
msg="Container 471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:18.825766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415875835.mount: Deactivated successfully. Jan 23 17:32:18.831351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210087139.mount: Deactivated successfully. Jan 23 17:32:18.844633 containerd[1897]: time="2026-01-23T17:32:18.844541660Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7\"" Jan 23 17:32:18.846795 containerd[1897]: time="2026-01-23T17:32:18.845854368Z" level=info msg="StartContainer for \"471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7\"" Jan 23 17:32:18.848179 containerd[1897]: time="2026-01-23T17:32:18.847918763Z" level=info msg="connecting to shim 471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7" address="unix:///run/containerd/s/37208f1e13143d732eb1d7081285ddcccaca0ea0cc146d6993ce3121ed734997" protocol=ttrpc version=3 Jan 23 17:32:18.887912 systemd[1]: Started cri-containerd-471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7.scope - libcontainer container 471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7. Jan 23 17:32:18.948331 systemd[1]: cri-containerd-471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7.scope: Deactivated successfully. Jan 23 17:32:18.954047 containerd[1897]: time="2026-01-23T17:32:18.953399385Z" level=info msg="received container exit event container_id:\"471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7\" id:\"471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7\" pid:3803 exited_at:{seconds:1769189538 nanos:952415817}" Jan 23 17:32:18.955418 containerd[1897]: time="2026-01-23T17:32:18.955208188Z" level=info msg="StartContainer for \"471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7\" returns successfully" Jan 23 17:32:18.994350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471142eec0fb9b956c4980c614cb735e753068d8372fd786c8bc40c29bc5b6a7-rootfs.mount: Deactivated successfully. Jan 23 17:32:19.057077 kubelet[3382]: I0123 17:32:19.056922 3382 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 17:32:19.144425 systemd[1]: Created slice kubepods-burstable-pod3fcb5c6f_55d7_4dae_aede_35fad32a4076.slice - libcontainer container kubepods-burstable-pod3fcb5c6f_55d7_4dae_aede_35fad32a4076.slice. Jan 23 17:32:19.163107 systemd[1]: Created slice kubepods-burstable-pod5aeff923_b5d2_4c75_9f32_5c030c980737.slice - libcontainer container kubepods-burstable-pod5aeff923_b5d2_4c75_9f32_5c030c980737.slice. 
Jan 23 17:32:19.214762 kubelet[3382]: I0123 17:32:19.214454 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fcb5c6f-55d7-4dae-aede-35fad32a4076-config-volume\") pod \"coredns-66bc5c9577-zzzmr\" (UID: \"3fcb5c6f-55d7-4dae-aede-35fad32a4076\") " pod="kube-system/coredns-66bc5c9577-zzzmr" Jan 23 17:32:19.214762 kubelet[3382]: I0123 17:32:19.214524 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkps8\" (UniqueName: \"kubernetes.io/projected/3fcb5c6f-55d7-4dae-aede-35fad32a4076-kube-api-access-vkps8\") pod \"coredns-66bc5c9577-zzzmr\" (UID: \"3fcb5c6f-55d7-4dae-aede-35fad32a4076\") " pod="kube-system/coredns-66bc5c9577-zzzmr" Jan 23 17:32:19.214762 kubelet[3382]: I0123 17:32:19.214609 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aeff923-b5d2-4c75-9f32-5c030c980737-config-volume\") pod \"coredns-66bc5c9577-gdp2b\" (UID: \"5aeff923-b5d2-4c75-9f32-5c030c980737\") " pod="kube-system/coredns-66bc5c9577-gdp2b" Jan 23 17:32:19.214762 kubelet[3382]: I0123 17:32:19.214677 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrrk\" (UniqueName: \"kubernetes.io/projected/5aeff923-b5d2-4c75-9f32-5c030c980737-kube-api-access-kwrrk\") pod \"coredns-66bc5c9577-gdp2b\" (UID: \"5aeff923-b5d2-4c75-9f32-5c030c980737\") " pod="kube-system/coredns-66bc5c9577-gdp2b" Jan 23 17:32:19.367441 containerd[1897]: time="2026-01-23T17:32:19.366927462Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 17:32:19.385244 containerd[1897]: time="2026-01-23T17:32:19.385143950Z" level=info msg="Container c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:19.398299 containerd[1897]: time="2026-01-23T17:32:19.398247095Z" level=info msg="CreateContainer within sandbox \"18cd6d63341fc6efb94bee5e42ee313e880dba73bc23995da49ba86a7ecaff1f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46\"" Jan 23 17:32:19.399697 containerd[1897]: time="2026-01-23T17:32:19.399641277Z" level=info msg="StartContainer for \"c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46\"" Jan 23 17:32:19.402457 containerd[1897]: time="2026-01-23T17:32:19.402390002Z" level=info msg="connecting to shim c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46" address="unix:///run/containerd/s/37208f1e13143d732eb1d7081285ddcccaca0ea0cc146d6993ce3121ed734997" protocol=ttrpc version=3 Jan 23 17:32:19.443912 systemd[1]: Started cri-containerd-c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46.scope - libcontainer container c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46. 
Jan 23 17:32:19.461684 containerd[1897]: time="2026-01-23T17:32:19.460756465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zzzmr,Uid:3fcb5c6f-55d7-4dae-aede-35fad32a4076,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:19.483362 containerd[1897]: time="2026-01-23T17:32:19.483296429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdp2b,Uid:5aeff923-b5d2-4c75-9f32-5c030c980737,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:19.539838 containerd[1897]: time="2026-01-23T17:32:19.538770143Z" level=info msg="StartContainer for \"c73efd8cb56007331c4dea3dd61ced7dedfe61eeac938630ba83effd1b1e3f46\" returns successfully" Jan 23 17:32:19.568419 containerd[1897]: time="2026-01-23T17:32:19.567935077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zzzmr,Uid:3fcb5c6f-55d7-4dae-aede-35fad32a4076,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6d4fa984209947052e410d2329cfa757640defaa97266ad81c61252c7f0b1f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:32:19.570320 kubelet[3382]: E0123 17:32:19.568384 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6d4fa984209947052e410d2329cfa757640defaa97266ad81c61252c7f0b1f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:32:19.570320 kubelet[3382]: E0123 17:32:19.568482 3382 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6d4fa984209947052e410d2329cfa757640defaa97266ad81c61252c7f0b1f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-zzzmr" Jan 23 17:32:19.570320 kubelet[3382]: E0123 17:32:19.568515 3382 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6d4fa984209947052e410d2329cfa757640defaa97266ad81c61252c7f0b1f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-zzzmr" Jan 23 17:32:19.570320 kubelet[3382]: E0123 17:32:19.568637 3382 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zzzmr_kube-system(3fcb5c6f-55d7-4dae-aede-35fad32a4076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zzzmr_kube-system(3fcb5c6f-55d7-4dae-aede-35fad32a4076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd6d4fa984209947052e410d2329cfa757640defaa97266ad81c61252c7f0b1f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-zzzmr" podUID="3fcb5c6f-55d7-4dae-aede-35fad32a4076" Jan 23 17:32:19.575789 containerd[1897]: time="2026-01-23T17:32:19.574514590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdp2b,Uid:5aeff923-b5d2-4c75-9f32-5c030c980737,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fca86e8851119ff91d36b8b6e0e466dc766be321ddcaac98567392d6a6d0fa38\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:32:19.576013 kubelet[3382]: E0123 17:32:19.575805 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca86e8851119ff91d36b8b6e0e466dc766be321ddcaac98567392d6a6d0fa38\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:32:19.576013 kubelet[3382]: E0123 17:32:19.575879 3382 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca86e8851119ff91d36b8b6e0e466dc766be321ddcaac98567392d6a6d0fa38\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-gdp2b" Jan 23 17:32:19.576013 kubelet[3382]: E0123 17:32:19.575912 3382 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca86e8851119ff91d36b8b6e0e466dc766be321ddcaac98567392d6a6d0fa38\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-gdp2b" Jan 23 17:32:19.577204 kubelet[3382]: E0123 17:32:19.576014 3382 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gdp2b_kube-system(5aeff923-b5d2-4c75-9f32-5c030c980737)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gdp2b_kube-system(5aeff923-b5d2-4c75-9f32-5c030c980737)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca86e8851119ff91d36b8b6e0e466dc766be321ddcaac98567392d6a6d0fa38\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-gdp2b" podUID="5aeff923-b5d2-4c75-9f32-5c030c980737" Jan 23 17:32:20.382663 kubelet[3382]: I0123 17:32:20.382443 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-n7xbb" podStartSLOduration=2.251349097 podStartE2EDuration="7.381975032s" podCreationTimestamp="2026-01-23 17:32:13 +0000 UTC" firstStartedPulling="2026-01-23 17:32:13.669673338 +0000 UTC m=+5.773134036" lastFinishedPulling="2026-01-23 17:32:18.800299273 +0000 UTC m=+10.903759971" observedRunningTime="2026-01-23 17:32:20.381195962 +0000 UTC m=+12.484656779" watchObservedRunningTime="2026-01-23 17:32:20.381975032 +0000 UTC m=+12.485435754" Jan 23 17:32:20.654272 (udev-worker)[3921]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:32:20.679478 systemd-networkd[1697]: flannel.1: Link UP Jan 23 17:32:20.680115 systemd-networkd[1697]: flannel.1: Gained carrier Jan 23 17:32:22.168781 systemd-networkd[1697]: flannel.1: Gained IPv6LL Jan 23 17:32:24.210364 ntpd[1856]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 23 17:32:24.211004 ntpd[1856]: 23 Jan 17:32:24 ntpd[1856]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 23 17:32:24.211004 ntpd[1856]: 23 Jan 17:32:24 ntpd[1856]: Listen normally on 7 flannel.1 [fe80::e4f9:a8ff:fe5d:a425%4]:123 Jan 23 17:32:24.210443 ntpd[1856]: Listen normally on 7 flannel.1 [fe80::e4f9:a8ff:fe5d:a425%4]:123 Jan 23 17:32:32.192042 containerd[1897]: time="2026-01-23T17:32:32.191972342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdp2b,Uid:5aeff923-b5d2-4c75-9f32-5c030c980737,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:32.196191 containerd[1897]: time="2026-01-23T17:32:32.196142596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zzzmr,Uid:3fcb5c6f-55d7-4dae-aede-35fad32a4076,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:32.233520 systemd-networkd[1697]: cni0: Link UP Jan 23 17:32:32.233534 systemd-networkd[1697]: cni0: Gained carrier Jan 23 17:32:32.246477 (udev-worker)[4039]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:32:32.247589 systemd-networkd[1697]: cni0: Lost carrier Jan 23 17:32:32.253875 systemd-networkd[1697]: vethdb5e2bd5: Link UP Jan 23 17:32:32.261892 kernel: cni0: port 1(vethdb5e2bd5) entered blocking state Jan 23 17:32:32.262035 kernel: cni0: port 1(vethdb5e2bd5) entered disabled state Jan 23 17:32:32.262087 kernel: vethdb5e2bd5: entered allmulticast mode Jan 23 17:32:32.268672 kernel: vethdb5e2bd5: entered promiscuous mode Jan 23 17:32:32.270407 (udev-worker)[4049]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:32:32.270408 (udev-worker)[4051]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:32:32.276228 systemd-networkd[1697]: vethec58bb40: Link UP Jan 23 17:32:32.289233 kernel: cni0: port 2(vethec58bb40) entered blocking state Jan 23 17:32:32.289524 kernel: cni0: port 2(vethec58bb40) entered disabled state Jan 23 17:32:32.297403 kernel: vethec58bb40: entered allmulticast mode Jan 23 17:32:32.299306 kernel: vethec58bb40: entered promiscuous mode Jan 23 17:32:32.305936 kernel: cni0: port 1(vethdb5e2bd5) entered blocking state Jan 23 17:32:32.306057 kernel: cni0: port 1(vethdb5e2bd5) entered forwarding state Jan 23 17:32:32.311356 systemd-networkd[1697]: vethdb5e2bd5: Gained carrier Jan 23 17:32:32.312435 systemd-networkd[1697]: cni0: Gained carrier Jan 23 17:32:32.329107 containerd[1897]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000084950), "name":"cbr0", "type":"bridge"} Jan 23 17:32:32.329107 containerd[1897]: delegateAdd: netconf sent to delegate plugin: Jan 23 17:32:32.338151 kernel: cni0: port 2(vethec58bb40) entered blocking state Jan 23 17:32:32.338283 kernel: cni0: port 2(vethec58bb40) entered forwarding state Jan 23 17:32:32.338724 systemd-networkd[1697]: vethec58bb40: Gained carrier Jan 23 17:32:32.344812 containerd[1897]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"} Jan 23 17:32:32.344812 containerd[1897]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400008c950), "name":"cbr0", "type":"bridge"} Jan 23 17:32:32.344812 containerd[1897]: delegateAdd: netconf sent to delegate plugin: Jan 23 17:32:32.423317 containerd[1897]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T17:32:32.422960723Z" level=info msg="connecting to shim 81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0" address="unix:///run/containerd/s/e26a869f7fdda33f633d954e2d303e4e4b03df995d112a1d594b5351c33e7d98" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:32.429755 containerd[1897]: time="2026-01-23T17:32:32.429668679Z" level=info msg="connecting to shim 192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d" address="unix:///run/containerd/s/fc40b00562ab9e351bfabb3b776de1a732d2bbd5664c7bbae54c55c19295e9a7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:32.480911 systemd[1]: Started 
cri-containerd-81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0.scope - libcontainer container 81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0. Jan 23 17:32:32.507007 systemd[1]: Started cri-containerd-192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d.scope - libcontainer container 192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d. Jan 23 17:32:32.595852 containerd[1897]: time="2026-01-23T17:32:32.595610168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdp2b,Uid:5aeff923-b5d2-4c75-9f32-5c030c980737,Namespace:kube-system,Attempt:0,} returns sandbox id \"81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0\"" Jan 23 17:32:32.608647 containerd[1897]: time="2026-01-23T17:32:32.606869319Z" level=info msg="CreateContainer within sandbox \"81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:32:32.619648 containerd[1897]: time="2026-01-23T17:32:32.619575763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zzzmr,Uid:3fcb5c6f-55d7-4dae-aede-35fad32a4076,Namespace:kube-system,Attempt:0,} returns sandbox id \"192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d\"" Jan 23 17:32:32.630014 containerd[1897]: time="2026-01-23T17:32:32.629937152Z" level=info msg="CreateContainer within sandbox \"192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:32:32.632140 containerd[1897]: time="2026-01-23T17:32:32.632091670Z" level=info msg="Container f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:32.646422 containerd[1897]: time="2026-01-23T17:32:32.646349705Z" level=info msg="Container 43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:32.661766 containerd[1897]: time="2026-01-23T17:32:32.661687105Z" level=info msg="CreateContainer within sandbox \"81e1e926bceb31d8277baf76c4e03d892974d99c27eb6da0cbd0b13feebf2cd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc\"" Jan 23 17:32:32.662647 containerd[1897]: time="2026-01-23T17:32:32.662595806Z" level=info msg="StartContainer for \"f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc\"" Jan 23 17:32:32.666256 containerd[1897]: time="2026-01-23T17:32:32.666005125Z" level=info msg="connecting to shim f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc" address="unix:///run/containerd/s/e26a869f7fdda33f633d954e2d303e4e4b03df995d112a1d594b5351c33e7d98" protocol=ttrpc version=3 Jan 23 17:32:32.667422 containerd[1897]: time="2026-01-23T17:32:32.667330486Z" level=info msg="CreateContainer within sandbox \"192cb44c2d47734d256247161539abe0de95c2fa7d7b10e90ecb2a1e21ccbf2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40\"" Jan 23 17:32:32.669650 containerd[1897]: time="2026-01-23T17:32:32.669590107Z" level=info msg="StartContainer for \"43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40\"" Jan 23 17:32:32.673585 containerd[1897]: time="2026-01-23T17:32:32.673345441Z" level=info msg="connecting to shim 43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40" 
address="unix:///run/containerd/s/fc40b00562ab9e351bfabb3b776de1a732d2bbd5664c7bbae54c55c19295e9a7" protocol=ttrpc version=3 Jan 23 17:32:32.713890 systemd[1]: Started cri-containerd-f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc.scope - libcontainer container f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc. Jan 23 17:32:32.724944 systemd[1]: Started cri-containerd-43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40.scope - libcontainer container 43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40. Jan 23 17:32:32.819396 containerd[1897]: time="2026-01-23T17:32:32.818200254Z" level=info msg="StartContainer for \"f589fed67cd9b00ccd795fbd106236d88d3701a4de000b7e4f2f7e8637024cdc\" returns successfully" Jan 23 17:32:32.819396 containerd[1897]: time="2026-01-23T17:32:32.819221675Z" level=info msg="StartContainer for \"43371496c3dbeb7d1bfc81fff5960d420a1a37d9778334cca057de56050abb40\" returns successfully" Jan 23 17:32:33.427117 kubelet[3382]: I0123 17:32:33.425742 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zzzmr" podStartSLOduration=20.425719984 podStartE2EDuration="20.425719984s" podCreationTimestamp="2026-01-23 17:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:33.425124721 +0000 UTC m=+25.528585443" watchObservedRunningTime="2026-01-23 17:32:33.425719984 +0000 UTC m=+25.529180706" Jan 23 17:32:33.816816 systemd-networkd[1697]: vethec58bb40: Gained IPv6LL Jan 23 17:32:34.073573 systemd-networkd[1697]: cni0: Gained IPv6LL Jan 23 17:32:34.200777 systemd-networkd[1697]: vethdb5e2bd5: Gained IPv6LL Jan 23 17:32:36.210497 ntpd[1856]: Listen normally on 8 cni0 192.168.0.1:123 Jan 23 17:32:36.211322 ntpd[1856]: 23 Jan 17:32:36 ntpd[1856]: Listen normally on 8 cni0 192.168.0.1:123 Jan 23 17:32:36.211322 ntpd[1856]: 23 Jan 17:32:36 ntpd[1856]: Listen normally on 9 cni0 [fe80::6814:9dff:fe5d:8dff%5]:123 Jan 23 17:32:36.211322 ntpd[1856]: 23 Jan 17:32:36 ntpd[1856]: Listen normally on 10 vethdb5e2bd5 [fe80::d8d5:64ff:fe0f:204a%6]:123 Jan 23 17:32:36.211322 ntpd[1856]: 23 Jan 17:32:36 ntpd[1856]: Listen normally on 11 vethec58bb40 [fe80::46c:8dff:fe1b:4814%7]:123 Jan 23 17:32:36.210625 ntpd[1856]: Listen normally on 9 cni0 [fe80::6814:9dff:fe5d:8dff%5]:123 Jan 23 17:32:36.210677 ntpd[1856]: Listen normally on 10 vethdb5e2bd5 [fe80::d8d5:64ff:fe0f:204a%6]:123 Jan 23 17:32:36.210743 ntpd[1856]: Listen normally on 11 vethec58bb40 [fe80::46c:8dff:fe1b:4814%7]:123 Jan 23 17:33:01.905145 systemd[1]: Started sshd@5-172.31.16.139:22-4.153.228.146:42466.service - OpenSSH per-connection server daemon (4.153.228.146:42466). Jan 23 17:33:02.367532 sshd[4374]: Accepted publickey for core from 4.153.228.146 port 42466 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:02.370779 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:02.379537 systemd-logind[1862]: New session 7 of user core. Jan 23 17:33:02.385877 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:33:02.748758 sshd[4378]: Connection closed by 4.153.228.146 port 42466 Jan 23 17:33:02.750046 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:02.758398 systemd[1]: sshd@5-172.31.16.139:22-4.153.228.146:42466.service: Deactivated successfully. 
Jan 23 17:33:02.763978 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:33:02.766845 systemd-logind[1862]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:33:02.771066 systemd-logind[1862]: Removed session 7. Jan 23 17:33:07.841445 systemd[1]: Started sshd@6-172.31.16.139:22-4.153.228.146:41150.service - OpenSSH per-connection server daemon (4.153.228.146:41150). Jan 23 17:33:08.305127 sshd[4414]: Accepted publickey for core from 4.153.228.146 port 41150 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:08.308646 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:08.317217 systemd-logind[1862]: New session 8 of user core. Jan 23 17:33:08.327218 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:33:08.674293 sshd[4420]: Connection closed by 4.153.228.146 port 41150 Jan 23 17:33:08.675162 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:08.683838 systemd[1]: sshd@6-172.31.16.139:22-4.153.228.146:41150.service: Deactivated successfully. Jan 23 17:33:08.688256 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:33:08.690252 systemd-logind[1862]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:33:08.693714 systemd-logind[1862]: Removed session 8. Jan 23 17:33:13.766725 systemd[1]: Started sshd@7-172.31.16.139:22-4.153.228.146:41162.service - OpenSSH per-connection server daemon (4.153.228.146:41162). Jan 23 17:33:14.229427 sshd[4452]: Accepted publickey for core from 4.153.228.146 port 41162 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:14.233332 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:14.245069 systemd-logind[1862]: New session 9 of user core. Jan 23 17:33:14.254945 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:33:14.598676 sshd[4456]: Connection closed by 4.153.228.146 port 41162 Jan 23 17:33:14.599598 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:14.609056 systemd[1]: sshd@7-172.31.16.139:22-4.153.228.146:41162.service: Deactivated successfully. Jan 23 17:33:14.613606 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:33:14.617934 systemd-logind[1862]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:33:14.620735 systemd-logind[1862]: Removed session 9. Jan 23 17:33:14.694202 systemd[1]: Started sshd@8-172.31.16.139:22-4.153.228.146:59598.service - OpenSSH per-connection server daemon (4.153.228.146:59598). Jan 23 17:33:15.151120 sshd[4469]: Accepted publickey for core from 4.153.228.146 port 59598 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:15.156474 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:15.172419 systemd-logind[1862]: New session 10 of user core. Jan 23 17:33:15.183958 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 17:33:15.599371 sshd[4475]: Connection closed by 4.153.228.146 port 59598 Jan 23 17:33:15.600274 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:15.614349 systemd[1]: sshd@8-172.31.16.139:22-4.153.228.146:59598.service: Deactivated successfully. Jan 23 17:33:15.621747 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:33:15.625966 systemd-logind[1862]: Session 10 logged out. Waiting for processes to exit. 
Jan 23 17:33:15.633045 systemd-logind[1862]: Removed session 10. Jan 23 17:33:15.701655 systemd[1]: Started sshd@9-172.31.16.139:22-4.153.228.146:59606.service - OpenSSH per-connection server daemon (4.153.228.146:59606). Jan 23 17:33:16.168628 sshd[4486]: Accepted publickey for core from 4.153.228.146 port 59606 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:16.171818 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:16.184822 systemd-logind[1862]: New session 11 of user core. Jan 23 17:33:16.194669 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 17:33:16.534837 sshd[4503]: Connection closed by 4.153.228.146 port 59606 Jan 23 17:33:16.535174 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:16.543997 systemd[1]: sshd@9-172.31.16.139:22-4.153.228.146:59606.service: Deactivated successfully. Jan 23 17:33:16.549476 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:33:16.551760 systemd-logind[1862]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:33:16.555024 systemd-logind[1862]: Removed session 11. Jan 23 17:33:21.627912 systemd[1]: Started sshd@10-172.31.16.139:22-4.153.228.146:59614.service - OpenSSH per-connection server daemon (4.153.228.146:59614). Jan 23 17:33:22.104485 sshd[4542]: Accepted publickey for core from 4.153.228.146 port 59614 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:22.107439 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:22.116405 systemd-logind[1862]: New session 12 of user core. Jan 23 17:33:22.132847 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 17:33:22.461089 sshd[4546]: Connection closed by 4.153.228.146 port 59614 Jan 23 17:33:22.460841 sshd-session[4542]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:22.471631 systemd[1]: sshd@10-172.31.16.139:22-4.153.228.146:59614.service: Deactivated successfully. Jan 23 17:33:22.478336 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:33:22.482505 systemd-logind[1862]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:33:22.485988 systemd-logind[1862]: Removed session 12. Jan 23 17:33:22.553443 systemd[1]: Started sshd@11-172.31.16.139:22-4.153.228.146:59626.service - OpenSSH per-connection server daemon (4.153.228.146:59626). Jan 23 17:33:23.015789 sshd[4557]: Accepted publickey for core from 4.153.228.146 port 59626 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:23.018894 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:23.028487 systemd-logind[1862]: New session 13 of user core. Jan 23 17:33:23.035866 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:33:23.457438 sshd[4561]: Connection closed by 4.153.228.146 port 59626 Jan 23 17:33:23.458303 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:23.468632 systemd-logind[1862]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:33:23.469334 systemd[1]: sshd@11-172.31.16.139:22-4.153.228.146:59626.service: Deactivated successfully. Jan 23 17:33:23.475849 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:33:23.480676 systemd-logind[1862]: Removed session 13. 
Jan 23 17:33:23.559837 systemd[1]: Started sshd@12-172.31.16.139:22-4.153.228.146:59642.service - OpenSSH per-connection server daemon (4.153.228.146:59642). Jan 23 17:33:24.013305 sshd[4570]: Accepted publickey for core from 4.153.228.146 port 59642 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:24.016324 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:24.025651 systemd-logind[1862]: New session 14 of user core. Jan 23 17:33:24.034852 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:33:25.081227 sshd[4574]: Connection closed by 4.153.228.146 port 59642 Jan 23 17:33:25.081659 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:25.092222 systemd[1]: sshd@12-172.31.16.139:22-4.153.228.146:59642.service: Deactivated successfully. Jan 23 17:33:25.097026 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:33:25.099743 systemd-logind[1862]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:33:25.103767 systemd-logind[1862]: Removed session 14. Jan 23 17:33:25.184684 systemd[1]: Started sshd@13-172.31.16.139:22-4.153.228.146:46536.service - OpenSSH per-connection server daemon (4.153.228.146:46536). Jan 23 17:33:25.671606 sshd[4588]: Accepted publickey for core from 4.153.228.146 port 46536 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:25.674100 sshd-session[4588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:25.683667 systemd-logind[1862]: New session 15 of user core. Jan 23 17:33:25.688889 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:33:26.294971 sshd[4592]: Connection closed by 4.153.228.146 port 46536 Jan 23 17:33:26.295885 sshd-session[4588]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:26.305470 systemd[1]: sshd@13-172.31.16.139:22-4.153.228.146:46536.service: Deactivated successfully. Jan 23 17:33:26.312239 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:33:26.320086 systemd-logind[1862]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:33:26.325252 systemd-logind[1862]: Removed session 15. Jan 23 17:33:26.378479 systemd[1]: Started sshd@14-172.31.16.139:22-4.153.228.146:46548.service - OpenSSH per-connection server daemon (4.153.228.146:46548). Jan 23 17:33:26.835838 sshd[4623]: Accepted publickey for core from 4.153.228.146 port 46548 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:26.838816 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:26.849639 systemd-logind[1862]: New session 16 of user core. Jan 23 17:33:26.860905 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 17:33:27.192193 sshd[4628]: Connection closed by 4.153.228.146 port 46548 Jan 23 17:33:27.191900 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:27.202085 systemd[1]: sshd@14-172.31.16.139:22-4.153.228.146:46548.service: Deactivated successfully. Jan 23 17:33:27.206390 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:33:27.208512 systemd-logind[1862]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:33:27.212308 systemd-logind[1862]: Removed session 16. Jan 23 17:33:32.283425 systemd[1]: Started sshd@15-172.31.16.139:22-4.153.228.146:46564.service - OpenSSH per-connection server daemon (4.153.228.146:46564). 
Jan 23 17:33:32.760412 sshd[4662]: Accepted publickey for core from 4.153.228.146 port 46564 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:32.763345 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:32.773956 systemd-logind[1862]: New session 17 of user core. Jan 23 17:33:32.783962 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:33:33.120131 sshd[4666]: Connection closed by 4.153.228.146 port 46564 Jan 23 17:33:33.122135 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:33.130313 systemd-logind[1862]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:33:33.130865 systemd[1]: sshd@15-172.31.16.139:22-4.153.228.146:46564.service: Deactivated successfully. Jan 23 17:33:33.135975 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:33:33.141473 systemd-logind[1862]: Removed session 17. Jan 23 17:33:38.224298 systemd[1]: Started sshd@16-172.31.16.139:22-4.153.228.146:36772.service - OpenSSH per-connection server daemon (4.153.228.146:36772). Jan 23 17:33:38.719828 sshd[4697]: Accepted publickey for core from 4.153.228.146 port 36772 ssh2: RSA SHA256:QSIKsZ26pWCly3AO0bvsdVvJy6W4iUL8EVbryWeRupw Jan 23 17:33:38.726369 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:33:38.744653 systemd-logind[1862]: New session 18 of user core. Jan 23 17:33:38.751948 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:33:39.116448 sshd[4701]: Connection closed by 4.153.228.146 port 36772 Jan 23 17:33:39.116327 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Jan 23 17:33:39.124335 systemd[1]: sshd@16-172.31.16.139:22-4.153.228.146:36772.service: Deactivated successfully. Jan 23 17:33:39.129003 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:33:39.132383 systemd-logind[1862]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:33:39.135337 systemd-logind[1862]: Removed session 18. Jan 23 17:33:53.518328 systemd[1]: cri-containerd-77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3.scope: Deactivated successfully. Jan 23 17:33:53.521458 systemd[1]: cri-containerd-77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3.scope: Consumed 4.505s CPU time, 56M memory peak. Jan 23 17:33:53.529501 containerd[1897]: time="2026-01-23T17:33:53.529299680Z" level=info msg="received container exit event container_id:\"77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3\" id:\"77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3\" pid:3017 exit_status:1 exited_at:{seconds:1769189633 nanos:528774608}" Jan 23 17:33:53.580613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3-rootfs.mount: Deactivated successfully. 
Jan 23 17:33:53.678512 kubelet[3382]: I0123 17:33:53.678457 3382 scope.go:117] "RemoveContainer" containerID="77cdc1ca11d90727e2bfd56d0f12992706eaf1d45a3cc93371f614a74517f4b3" Jan 23 17:33:53.684337 containerd[1897]: time="2026-01-23T17:33:53.683634705Z" level=info msg="CreateContainer within sandbox \"a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 17:33:53.705974 containerd[1897]: time="2026-01-23T17:33:53.705910857Z" level=info msg="Container 749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:53.726244 containerd[1897]: time="2026-01-23T17:33:53.726172737Z" level=info msg="CreateContainer within sandbox \"a22a18a861d0f7c48493749f19d01b44c1b839815846cec326bb84d096c0b840\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef\"" Jan 23 17:33:53.728035 containerd[1897]: time="2026-01-23T17:33:53.727523529Z" level=info msg="StartContainer for \"749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef\"" Jan 23 17:33:53.737585 containerd[1897]: time="2026-01-23T17:33:53.737465625Z" level=info msg="connecting to shim 749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef" address="unix:///run/containerd/s/9cf3c48112d4cc7423a4500d028d8613bcd1ea051840baefd58f23d0758aeabd" protocol=ttrpc version=3 Jan 23 17:33:53.781987 systemd[1]: Started cri-containerd-749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef.scope - libcontainer container 749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef. Jan 23 17:33:53.881273 containerd[1897]: time="2026-01-23T17:33:53.881134078Z" level=info msg="StartContainer for \"749276dabff34655ab15299cdae972a8f63b0f540a88762df5f494b2414384ef\" returns successfully" Jan 23 17:33:59.051183 systemd[1]: cri-containerd-650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589.scope: Deactivated successfully. Jan 23 17:33:59.052163 systemd[1]: cri-containerd-650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589.scope: Consumed 3.764s CPU time, 20.5M memory peak. Jan 23 17:33:59.057398 containerd[1897]: time="2026-01-23T17:33:59.057175956Z" level=info msg="received container exit event container_id:\"650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589\" id:\"650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589\" pid:3043 exit_status:1 exited_at:{seconds:1769189639 nanos:56282184}" Jan 23 17:33:59.107838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589-rootfs.mount: Deactivated successfully. 
Jan 23 17:33:59.710336 kubelet[3382]: I0123 17:33:59.710199 3382 scope.go:117] "RemoveContainer" containerID="650cbe070497d15700153f18b16a347640c694881a0e1c3d3e0a000b5b58d589" Jan 23 17:33:59.714692 containerd[1897]: time="2026-01-23T17:33:59.714634959Z" level=info msg="CreateContainer within sandbox \"a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 17:33:59.741589 containerd[1897]: time="2026-01-23T17:33:59.738627783Z" level=info msg="Container 589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:59.759306 containerd[1897]: time="2026-01-23T17:33:59.759239763Z" level=info msg="CreateContainer within sandbox \"a4dc5e155ecf19a5e6c9871d4de9311d2b6fa8179281ab2faa4a5557d303ef12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060\"" Jan 23 17:33:59.760377 containerd[1897]: time="2026-01-23T17:33:59.760304367Z" level=info msg="StartContainer for \"589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060\"" Jan 23 17:33:59.762732 containerd[1897]: time="2026-01-23T17:33:59.762662727Z" level=info msg="connecting to shim 589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060" address="unix:///run/containerd/s/572b2cbb78ab9b6c31756b9417f929a112ffb1bb56af73cd919f1ded673b240f" protocol=ttrpc version=3 Jan 23 17:33:59.806926 systemd[1]: Started cri-containerd-589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060.scope - libcontainer container 589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060. Jan 23 17:33:59.887153 containerd[1897]: time="2026-01-23T17:33:59.887069464Z" level=info msg="StartContainer for \"589188da50c2089f527691acbfb47c6d39d571c057ff198c20fcb13d043e8060\" returns successfully" Jan 23 17:34:00.912774 kubelet[3382]: E0123 17:34:00.912699 3382 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-139?timeout=10s\": context deadline exceeded"
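The closing error is the kubelet failing to renew its node Lease: by default it PUTs the Lease named after the node in the kube-node-lease namespace roughly every 10 seconds with a 10-second client timeout (visible as ?timeout=10s in the URL), and here the API server at 172.31.16.139:6443 did not answer within that window, consistent with the control-plane pressure that also saw kube-controller-manager and kube-scheduler exit and restart just above. One way to inspect the lease afterwards (a sketch only; the object below shows typical fields with the default 40-second lease duration, not values captured from this host):

    kubectl -n kube-node-lease get lease ip-172-31-16-139 -o yaml

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: ip-172-31-16-139
      namespace: kube-node-lease
    spec:
      holderIdentity: ip-172-31-16-139
      leaseDurationSeconds: 40
      renewTime: "2026-01-23T17:34:05.000000Z"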