Jan 20 23:53:13.956006 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 20 23:53:13.956051 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 20 22:19:20 -00 2026 Jan 20 23:53:13.956075 kernel: KASLR disabled due to lack of seed Jan 20 23:53:13.956092 kernel: efi: EFI v2.7 by EDK II Jan 20 23:53:13.956108 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 20 23:53:13.956123 kernel: secureboot: Secure boot disabled Jan 20 23:53:13.956141 kernel: ACPI: Early table checksum verification disabled Jan 20 23:53:13.956157 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 20 23:53:13.956173 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 20 23:53:13.956193 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 20 23:53:13.956210 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 20 23:53:13.956225 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 20 23:53:13.956241 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 20 23:53:13.956257 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 20 23:53:13.956279 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 20 23:53:13.956297 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 20 23:53:13.956314 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 20 23:53:13.956330 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 20 23:53:13.956347 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 20 23:53:13.956364 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 20 23:53:13.956380 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 20 23:53:13.956397 kernel: printk: legacy bootconsole [uart0] enabled Jan 20 23:53:13.956414 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 20 23:53:13.956431 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 20 23:53:13.956451 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 20 23:53:13.956468 kernel: Zone ranges: Jan 20 23:53:13.956484 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 20 23:53:13.956501 kernel: DMA32 empty Jan 20 23:53:13.956517 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 20 23:53:13.956551 kernel: Device empty Jan 20 23:53:13.956573 kernel: Movable zone start for each node Jan 20 23:53:13.956590 kernel: Early memory node ranges Jan 20 23:53:13.956607 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 20 23:53:13.956624 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 20 23:53:13.956641 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 20 23:53:13.956658 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 20 23:53:13.956681 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 20 23:53:13.956697 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 20 23:53:13.956714 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 20 23:53:13.956731 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 20 23:53:13.956755 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 20 23:53:13.956777 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 20 23:53:13.956796 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 20 23:53:13.956813 kernel: psci: probing for conduit method from ACPI. Jan 20 23:53:13.956831 kernel: psci: PSCIv1.0 detected in firmware. Jan 20 23:53:13.956849 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 23:53:13.956867 kernel: psci: Trusted OS migration not required Jan 20 23:53:13.956885 kernel: psci: SMC Calling Convention v1.1 Jan 20 23:53:13.958926 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 20 23:53:13.959519 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 20 23:53:13.959746 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 20 23:53:13.959768 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 23:53:13.959786 kernel: Detected PIPT I-cache on CPU0 Jan 20 23:53:13.959804 kernel: CPU features: detected: GIC system register CPU interface Jan 20 23:53:13.959822 kernel: CPU features: detected: Spectre-v2 Jan 20 23:53:13.959840 kernel: CPU features: detected: Spectre-v3a Jan 20 23:53:13.959858 kernel: CPU features: detected: Spectre-BHB Jan 20 23:53:13.959875 kernel: CPU features: detected: ARM erratum 1742098 Jan 20 23:53:13.959893 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 20 23:53:13.959911 kernel: alternatives: applying boot alternatives Jan 20 23:53:13.959931 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:53:13.959955 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 23:53:13.959973 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 23:53:13.959991 kernel: Fallback order for Node 0: 0 Jan 20 23:53:13.960009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 20 23:53:13.960027 kernel: Policy zone: Normal Jan 20 23:53:13.960045 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 23:53:13.960062 kernel: software IO TLB: area num 2. Jan 20 23:53:13.960080 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 20 23:53:13.960098 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 23:53:13.960116 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 23:53:13.960140 kernel: rcu: RCU event tracing is enabled. Jan 20 23:53:13.960158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 23:53:13.960177 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 23:53:13.960195 kernel: Tracing variant of Tasks RCU enabled. Jan 20 23:53:13.960213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 23:53:13.960231 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 23:53:13.960250 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 20 23:53:13.960268 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 23:53:13.960286 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 23:53:13.960304 kernel: GICv3: 96 SPIs implemented Jan 20 23:53:13.960321 kernel: GICv3: 0 Extended SPIs implemented Jan 20 23:53:13.960343 kernel: Root IRQ handler: gic_handle_irq Jan 20 23:53:13.960360 kernel: GICv3: GICv3 features: 16 PPIs Jan 20 23:53:13.960378 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 20 23:53:13.960396 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 20 23:53:13.960413 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 20 23:53:13.960431 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 20 23:53:13.960449 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 20 23:53:13.960467 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 20 23:53:13.960486 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 20 23:53:13.960503 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 20 23:53:13.960521 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 23:53:13.961332 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 20 23:53:13.961359 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 20 23:53:13.961378 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 20 23:53:13.961397 kernel: Console: colour dummy device 80x25 Jan 20 23:53:13.961416 kernel: printk: legacy console [tty1] enabled Jan 20 23:53:13.961435 kernel: ACPI: Core revision 20240827 Jan 20 23:53:13.961454 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 20 23:53:13.961473 kernel: pid_max: default: 32768 minimum: 301 Jan 20 23:53:13.961499 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 23:53:13.961518 kernel: landlock: Up and running. Jan 20 23:53:13.961555 kernel: SELinux: Initializing. Jan 20 23:53:13.961578 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 23:53:13.961597 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 23:53:13.961616 kernel: rcu: Hierarchical SRCU implementation. Jan 20 23:53:13.961635 kernel: rcu: Max phase no-delay instances is 400. Jan 20 23:53:13.961654 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 23:53:13.961678 kernel: Remapping and enabling EFI services. Jan 20 23:53:13.961696 kernel: smp: Bringing up secondary CPUs ... Jan 20 23:53:13.961714 kernel: Detected PIPT I-cache on CPU1 Jan 20 23:53:13.961733 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 20 23:53:13.961751 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 20 23:53:13.961770 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 20 23:53:13.961788 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 23:53:13.961811 kernel: SMP: Total of 2 processors activated. 
Jan 20 23:53:13.961829 kernel: CPU: All CPU(s) started at EL1 Jan 20 23:53:13.961859 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 23:53:13.961882 kernel: CPU features: detected: 32-bit EL1 Support Jan 20 23:53:13.961901 kernel: CPU features: detected: CRC32 instructions Jan 20 23:53:13.961919 kernel: alternatives: applying system-wide alternatives Jan 20 23:53:13.961940 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Jan 20 23:53:13.961959 kernel: devtmpfs: initialized Jan 20 23:53:13.961983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 23:53:13.962002 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 23:53:13.962021 kernel: 23648 pages in range for non-PLT usage Jan 20 23:53:13.962040 kernel: 515168 pages in range for PLT usage Jan 20 23:53:13.962059 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 23:53:13.962082 kernel: SMBIOS 3.0.0 present. Jan 20 23:53:13.962101 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 20 23:53:13.962120 kernel: DMI: Memory slots populated: 0/0 Jan 20 23:53:13.962139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 23:53:13.962158 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 23:53:13.962177 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 23:53:13.962196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 23:53:13.962220 kernel: audit: initializing netlink subsys (disabled) Jan 20 23:53:13.962239 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1 Jan 20 23:53:13.962258 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 23:53:13.962277 kernel: cpuidle: using governor menu Jan 20 23:53:13.962296 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 23:53:13.962315 kernel: ASID allocator initialised with 65536 entries Jan 20 23:53:13.962334 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 23:53:13.962357 kernel: Serial: AMBA PL011 UART driver Jan 20 23:53:13.962376 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 23:53:13.962395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 23:53:13.962414 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 23:53:13.962433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 23:53:13.962452 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 23:53:13.962471 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 23:53:13.962494 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 23:53:13.962514 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 23:53:13.967375 kernel: ACPI: Added _OSI(Module Device) Jan 20 23:53:13.967409 kernel: ACPI: Added _OSI(Processor Device) Jan 20 23:53:13.967428 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 23:53:13.967447 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 23:53:13.967466 kernel: ACPI: Interpreter enabled Jan 20 23:53:13.967495 kernel: ACPI: Using GIC for interrupt routing Jan 20 23:53:13.967515 kernel: ACPI: MCFG table detected, 1 entries Jan 20 23:53:13.967550 kernel: ACPI: CPU0 has been hot-added Jan 20 23:53:13.967574 kernel: ACPI: CPU1 has been hot-added Jan 20 23:53:13.967594 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 20 23:53:13.967950 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 23:53:13.968215 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 20 23:53:13.968478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 20 23:53:13.968768 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 20 23:53:13.969025 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 20 23:53:13.969051 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 20 23:53:13.969070 kernel: acpiphp: Slot [1] registered Jan 20 23:53:13.969089 kernel: acpiphp: Slot [2] registered Jan 20 23:53:13.969115 kernel: acpiphp: Slot [3] registered Jan 20 23:53:13.969134 kernel: acpiphp: Slot [4] registered Jan 20 23:53:13.969152 kernel: acpiphp: Slot [5] registered Jan 20 23:53:13.969171 kernel: acpiphp: Slot [6] registered Jan 20 23:53:13.969190 kernel: acpiphp: Slot [7] registered Jan 20 23:53:13.969209 kernel: acpiphp: Slot [8] registered Jan 20 23:53:13.969227 kernel: acpiphp: Slot [9] registered Jan 20 23:53:13.969250 kernel: acpiphp: Slot [10] registered Jan 20 23:53:13.969270 kernel: acpiphp: Slot [11] registered Jan 20 23:53:13.969289 kernel: acpiphp: Slot [12] registered Jan 20 23:53:13.969308 kernel: acpiphp: Slot [13] registered Jan 20 23:53:13.969327 kernel: acpiphp: Slot [14] registered Jan 20 23:53:13.969346 kernel: acpiphp: Slot [15] registered Jan 20 23:53:13.969364 kernel: acpiphp: Slot [16] registered Jan 20 23:53:13.969383 kernel: acpiphp: Slot [17] registered Jan 20 23:53:13.969407 kernel: acpiphp: Slot [18] registered Jan 20 23:53:13.969425 kernel: acpiphp: Slot [19] registered Jan 20 23:53:13.969444 kernel: acpiphp: Slot [20] registered Jan 20 23:53:13.969463 kernel: acpiphp: Slot [21] registered Jan 20 23:53:13.969482 
kernel: acpiphp: Slot [22] registered Jan 20 23:53:13.969501 kernel: acpiphp: Slot [23] registered Jan 20 23:53:13.969520 kernel: acpiphp: Slot [24] registered Jan 20 23:53:13.970025 kernel: acpiphp: Slot [25] registered Jan 20 23:53:13.970342 kernel: acpiphp: Slot [26] registered Jan 20 23:53:13.970938 kernel: acpiphp: Slot [27] registered Jan 20 23:53:13.971172 kernel: acpiphp: Slot [28] registered Jan 20 23:53:13.971196 kernel: acpiphp: Slot [29] registered Jan 20 23:53:13.971216 kernel: acpiphp: Slot [30] registered Jan 20 23:53:13.971235 kernel: acpiphp: Slot [31] registered Jan 20 23:53:13.971254 kernel: PCI host bridge to bus 0000:00 Jan 20 23:53:13.971672 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 20 23:53:13.971915 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 20 23:53:13.977924 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 20 23:53:13.978251 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 20 23:53:13.978627 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 20 23:53:13.978947 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 20 23:53:13.979217 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 20 23:53:13.979521 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 20 23:53:13.982309 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 20 23:53:13.982707 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 20 23:53:13.983014 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 20 23:53:13.983277 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 20 23:53:13.983583 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 20 23:53:13.983854 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 20 23:53:13.984113 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 20 23:53:13.984354 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 20 23:53:13.984675 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 20 23:53:13.984912 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 20 23:53:13.984937 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 20 23:53:13.984957 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 20 23:53:13.984977 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 20 23:53:13.984996 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 20 23:53:13.985015 kernel: iommu: Default domain type: Translated Jan 20 23:53:13.985040 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 23:53:13.985059 kernel: efivars: Registered efivars operations Jan 20 23:53:13.985078 kernel: vgaarb: loaded Jan 20 23:53:13.985097 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 23:53:13.985116 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 23:53:13.985136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 23:53:13.985154 kernel: pnp: PnP ACPI init Jan 20 23:53:13.985427 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 20 23:53:13.985453 kernel: pnp: PnP ACPI: found 1 devices Jan 20 23:53:13.985472 kernel: NET: Registered PF_INET protocol family Jan 20 23:53:13.985492 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 23:53:13.985511 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 23:53:13.985530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 23:53:13.985703 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 23:53:13.985731 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 23:53:13.985750 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 23:53:13.985769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 23:53:13.985788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 23:53:13.985808 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 23:53:13.985826 kernel: PCI: CLS 0 bytes, default 64 Jan 20 23:53:13.985845 kernel: kvm [1]: HYP mode not available Jan 20 23:53:13.985869 kernel: Initialise system trusted keyrings Jan 20 23:53:13.985888 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 23:53:13.985907 kernel: Key type asymmetric registered Jan 20 23:53:13.985926 kernel: Asymmetric key parser 'x509' registered Jan 20 23:53:13.985945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 20 23:53:13.985964 kernel: io scheduler mq-deadline registered Jan 20 23:53:13.985983 kernel: io scheduler kyber registered Jan 20 23:53:13.986007 kernel: io scheduler bfq registered Jan 20 23:53:13.986318 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 20 23:53:13.986347 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 20 23:53:13.986366 kernel: ACPI: button: Power Button [PWRB] Jan 20 23:53:13.986386 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 20 23:53:13.986405 kernel: ACPI: button: Sleep Button [SLPB] Jan 20 23:53:13.986453 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 23:53:13.986476 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 20 23:53:13.986771 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 20 23:53:13.986798 kernel: printk: legacy console [ttyS0] disabled Jan 20 23:53:13.986842 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 20 23:53:13.986864 kernel: printk: legacy console [ttyS0] enabled Jan 20 23:53:13.986883 kernel: printk: legacy bootconsole [uart0] disabled Jan 20 23:53:13.986908 kernel: thunder_xcv, ver 1.0 Jan 20 23:53:13.986927 kernel: thunder_bgx, ver 1.0 Jan 20 23:53:13.986945 kernel: nicpf, ver 1.0 Jan 20 23:53:13.986964 kernel: nicvf, ver 1.0 Jan 20 23:53:13.987278 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 23:53:13.987525 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T23:53:10 UTC (1768953190) Jan 20 23:53:13.987569 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 23:53:13.987618 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 20 23:53:13.987639 kernel: NET: Registered PF_INET6 protocol family Jan 20 23:53:13.987658 kernel: watchdog: NMI not fully supported Jan 20 23:53:13.987678 kernel: watchdog: Hard watchdog permanently disabled Jan 20 23:53:13.987697 kernel: Segment Routing with IPv6 Jan 20 23:53:13.987716 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 23:53:13.987735 kernel: NET: Registered PF_PACKET protocol family Jan 20 23:53:13.987760 kernel: Key type 
dns_resolver registered Jan 20 23:53:13.987779 kernel: registered taskstats version 1 Jan 20 23:53:13.987797 kernel: Loading compiled-in X.509 certificates Jan 20 23:53:13.987816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ae4cb0460a35d8e9b47e83cc3a018fffd2136c96' Jan 20 23:53:13.987835 kernel: Demotion targets for Node 0: null Jan 20 23:53:13.987854 kernel: Key type .fscrypt registered Jan 20 23:53:13.987873 kernel: Key type fscrypt-provisioning registered Jan 20 23:53:13.987895 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 23:53:13.987915 kernel: ima: Allocated hash algorithm: sha1 Jan 20 23:53:13.987934 kernel: ima: No architecture policies found Jan 20 23:53:13.987953 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 23:53:13.987972 kernel: clk: Disabling unused clocks Jan 20 23:53:13.988016 kernel: PM: genpd: Disabling unused power domains Jan 20 23:53:13.988036 kernel: Freeing unused kernel memory: 12480K Jan 20 23:53:13.988473 kernel: Run /init as init process Jan 20 23:53:13.988503 kernel: with arguments: Jan 20 23:53:13.988522 kernel: /init Jan 20 23:53:13.988559 kernel: with environment: Jan 20 23:53:13.988580 kernel: HOME=/ Jan 20 23:53:13.988599 kernel: TERM=linux Jan 20 23:53:13.988618 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 20 23:53:13.988841 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 20 23:53:13.989039 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 20 23:53:13.989065 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 23:53:13.989084 kernel: GPT:25804799 != 33554431 Jan 20 23:53:13.989103 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 23:53:13.989122 kernel: GPT:25804799 != 33554431 Jan 20 23:53:13.989140 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 23:53:13.989164 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 20 23:53:13.989183 kernel: SCSI subsystem initialized Jan 20 23:53:13.989202 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 23:53:13.989221 kernel: device-mapper: uevent: version 1.0.3 Jan 20 23:53:13.989241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 23:53:13.989260 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 23:53:13.989279 kernel: raid6: neonx8 gen() 6610 MB/s Jan 20 23:53:13.989302 kernel: raid6: neonx4 gen() 6619 MB/s Jan 20 23:53:13.989321 kernel: raid6: neonx2 gen() 5456 MB/s Jan 20 23:53:13.989340 kernel: raid6: neonx1 gen() 3958 MB/s Jan 20 23:53:13.989359 kernel: raid6: int64x8 gen() 3654 MB/s Jan 20 23:53:13.989377 kernel: raid6: int64x4 gen() 3732 MB/s Jan 20 23:53:13.989396 kernel: raid6: int64x2 gen() 3622 MB/s Jan 20 23:53:13.989415 kernel: raid6: int64x1 gen() 2762 MB/s Jan 20 23:53:13.989438 kernel: raid6: using algorithm neonx4 gen() 6619 MB/s Jan 20 23:53:13.989457 kernel: raid6: .... 
xor() 4697 MB/s, rmw enabled Jan 20 23:53:13.989476 kernel: raid6: using neon recovery algorithm Jan 20 23:53:13.989495 kernel: xor: measuring software checksum speed Jan 20 23:53:13.989514 kernel: 8regs : 12917 MB/sec Jan 20 23:53:13.989549 kernel: 32regs : 13017 MB/sec Jan 20 23:53:13.989574 kernel: arm64_neon : 8922 MB/sec Jan 20 23:53:13.989598 kernel: xor: using function: 32regs (13017 MB/sec) Jan 20 23:53:13.989617 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 23:53:13.989637 kernel: BTRFS: device fsid c7d7174b-f392-4c72-bb61-0601db27f9ed devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (222) Jan 20 23:53:13.989656 kernel: BTRFS info (device dm-0): first mount of filesystem c7d7174b-f392-4c72-bb61-0601db27f9ed Jan 20 23:53:13.989676 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:53:13.989695 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 20 23:53:13.989714 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 23:53:13.989737 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 23:53:13.989756 kernel: loop: module loaded Jan 20 23:53:13.989775 kernel: loop0: detected capacity change from 0 to 91840 Jan 20 23:53:13.989794 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 23:53:13.989815 systemd[1]: Successfully made /usr/ read-only. Jan 20 23:53:13.989841 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:53:13.989866 systemd[1]: Detected virtualization amazon. Jan 20 23:53:13.989887 systemd[1]: Detected architecture arm64. Jan 20 23:53:13.989907 systemd[1]: Running in initrd. Jan 20 23:53:13.989927 systemd[1]: No hostname configured, using default hostname. Jan 20 23:53:13.989948 systemd[1]: Hostname set to . Jan 20 23:53:13.989968 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:53:13.989988 systemd[1]: Queued start job for default target initrd.target. Jan 20 23:53:13.990013 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:53:13.990034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:53:13.990054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:53:13.990075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 23:53:13.990097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:53:13.990135 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 23:53:13.990157 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 23:53:13.990179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:53:13.990200 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:53:13.990221 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:53:13.990246 systemd[1]: Reached target paths.target - Path Units. 
Jan 20 23:53:13.990267 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:53:13.990287 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:53:13.990308 systemd[1]: Reached target timers.target - Timer Units. Jan 20 23:53:13.990329 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:53:13.990351 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:53:13.990372 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:53:13.990396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 23:53:13.990418 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 23:53:13.990439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:53:13.990460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:53:13.990481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:53:13.990502 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:53:13.990523 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 23:53:13.990576 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 23:53:13.990599 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:53:13.990621 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 23:53:13.990642 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 23:53:13.990664 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 23:53:13.990685 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:53:13.990706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:53:13.990733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:53:13.990755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 23:53:13.990781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:53:13.990803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 23:53:13.990824 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 23:53:13.990846 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 23:53:13.990867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:53:13.990893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 23:53:13.990957 systemd-journald[359]: Collecting audit messages is enabled. Jan 20 23:53:13.991000 kernel: Bridge firewalling registered Jan 20 23:53:13.991026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:53:13.991048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:53:13.991069 kernel: audit: type=1130 audit(1768953193.959:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:13.991090 kernel: audit: type=1130 audit(1768953193.968:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.991111 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 23:53:13.991131 systemd-journald[359]: Journal started Jan 20 23:53:13.991171 systemd-journald[359]: Runtime Journal (/run/log/journal/ec20ac17bf75beae9c5bf0614dc8e9dc) is 8M, max 75.3M, 67.3M free. Jan 20 23:53:13.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.952652 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 20 23:53:14.003597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:53:14.003665 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 23:53:14.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.016036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:53:14.017242 kernel: audit: type=1130 audit(1768953194.003:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.029564 kernel: audit: type=1130 audit(1768953194.014:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.037633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 23:53:14.059208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:53:14.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.072453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 23:53:14.078777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:53:14.088732 kernel: audit: type=1130 audit(1768953194.064:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.088773 kernel: audit: type=1130 audit(1768953194.077:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:14.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.092319 kernel: audit: type=1334 audit(1768953194.089:8): prog-id=6 op=LOAD Jan 20 23:53:14.089000 audit: BPF prog-id=6 op=LOAD Jan 20 23:53:14.091120 systemd-tmpfiles[387]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 23:53:14.101770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:53:14.114746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:53:14.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.128570 kernel: audit: type=1130 audit(1768953194.113:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.145661 dracut-cmdline[393]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:53:14.267117 systemd-resolved[395]: Positive Trust Anchors: Jan 20 23:53:14.267639 systemd-resolved[395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:53:14.267649 systemd-resolved[395]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:53:14.267713 systemd-resolved[395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:53:14.457587 kernel: Loading iSCSI transport class v2.0-870. Jan 20 23:53:14.511585 kernel: iscsi: registered transport (tcp) Jan 20 23:53:14.547607 kernel: random: crng init done Jan 20 23:53:14.557197 systemd-resolved[395]: Defaulting to hostname 'linux'. Jan 20 23:53:14.559258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:53:14.576219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:53:14.590963 kernel: audit: type=1130 audit(1768953194.574:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 23:53:14.593284 kernel: iscsi: registered transport (qla4xxx) Jan 20 23:53:14.593342 kernel: QLogic iSCSI HBA Driver Jan 20 23:53:14.635116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 23:53:14.673833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:53:14.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.687365 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:53:14.692444 kernel: audit: type=1130 audit(1768953194.680:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.766890 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 23:53:14.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.776038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 23:53:14.782460 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 23:53:14.845007 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:53:14.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.854000 audit: BPF prog-id=7 op=LOAD Jan 20 23:53:14.854000 audit: BPF prog-id=8 op=LOAD Jan 20 23:53:14.857823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:53:14.919638 systemd-udevd[636]: Using default interface naming scheme 'v257'. Jan 20 23:53:14.942069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:53:14.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:14.951119 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 23:53:15.002767 dracut-pre-trigger[701]: rd.md=0: removing MD RAID activation Jan 20 23:53:15.020149 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:53:15.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.025000 audit: BPF prog-id=9 op=LOAD Jan 20 23:53:15.032991 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:53:15.077031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:53:15.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.084897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 23:53:15.141083 systemd-networkd[753]: lo: Link UP Jan 20 23:53:15.141102 systemd-networkd[753]: lo: Gained carrier Jan 20 23:53:15.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.143116 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:53:15.148526 systemd[1]: Reached target network.target - Network. Jan 20 23:53:15.251902 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:53:15.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.265606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 23:53:15.459583 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 20 23:53:15.459684 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 20 23:53:15.464386 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 20 23:53:15.464869 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 20 23:53:15.477074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:53:15.479614 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:12:db:e9:de:bb Jan 20 23:53:15.479564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:53:15.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.482347 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:53:15.491815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:53:15.493492 (udev-worker)[780]: Network interface NamePolicy= disabled on kernel command line. Jan 20 23:53:15.514374 systemd-networkd[753]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:53:15.514396 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:53:15.532517 systemd-networkd[753]: eth0: Link UP Jan 20 23:53:15.532932 systemd-networkd[753]: eth0: Gained carrier Jan 20 23:53:15.532955 systemd-networkd[753]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:53:15.549426 kernel: nvme nvme0: using unchecked data buffer Jan 20 23:53:15.554644 systemd-networkd[753]: eth0: DHCPv4 address 172.31.29.43/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 20 23:53:15.577299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:53:15.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.724054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 20 23:53:15.756172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jan 20 23:53:15.812492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 23:53:15.854926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 20 23:53:15.863110 disk-uuid[901]: Primary Header is updated. Jan 20 23:53:15.863110 disk-uuid[901]: Secondary Entries is updated. Jan 20 23:53:15.863110 disk-uuid[901]: Secondary Header is updated. Jan 20 23:53:15.897188 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 20 23:53:15.904324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 23:53:15.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:15.967658 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 23:53:15.978244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:53:15.986488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:53:15.998702 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 23:53:16.056225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:53:16.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:16.617679 systemd-networkd[753]: eth0: Gained IPv6LL Jan 20 23:53:16.981141 disk-uuid[902]: Warning: The kernel is still using the old partition table. Jan 20 23:53:16.981141 disk-uuid[902]: The new table will be used at the next reboot or after you Jan 20 23:53:16.981141 disk-uuid[902]: run partprobe(8) or kpartx(8) Jan 20 23:53:16.981141 disk-uuid[902]: The operation has completed successfully. Jan 20 23:53:16.997450 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 23:53:16.999620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 23:53:17.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:17.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:17.004444 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 23:53:17.072588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1090) Jan 20 23:53:17.077256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:53:17.077311 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:53:17.084543 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 23:53:17.084630 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 23:53:17.094580 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:53:17.095762 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 20 23:53:17.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:17.102564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 23:53:18.371009 ignition[1109]: Ignition 2.24.0 Jan 20 23:53:18.371031 ignition[1109]: Stage: fetch-offline Jan 20 23:53:18.371426 ignition[1109]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:18.371452 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:18.373111 ignition[1109]: Ignition finished successfully Jan 20 23:53:18.383743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:53:18.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:18.388136 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 23:53:18.430823 ignition[1116]: Ignition 2.24.0 Jan 20 23:53:18.431309 ignition[1116]: Stage: fetch Jan 20 23:53:18.431731 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:18.431761 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:18.431888 ignition[1116]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:18.449403 ignition[1116]: PUT result: OK Jan 20 23:53:18.453501 ignition[1116]: parsed url from cmdline: "" Jan 20 23:53:18.453681 ignition[1116]: no config URL provided Jan 20 23:53:18.453706 ignition[1116]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 23:53:18.453911 ignition[1116]: no config at "/usr/lib/ignition/user.ign" Jan 20 23:53:18.453949 ignition[1116]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:18.462394 ignition[1116]: PUT result: OK Jan 20 23:53:18.463065 ignition[1116]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 20 23:53:18.466980 ignition[1116]: GET result: OK Jan 20 23:53:18.467356 ignition[1116]: parsing config with SHA512: ba00614ba4e905146889d0a47c57b93576b486138a48bc8a82337b00b4e22aef081d37dbe1fda4db28e657a7efa96703f4fa8a94c418eeed92634208cc3b8987 Jan 20 23:53:18.482104 unknown[1116]: fetched base config from "system" Jan 20 23:53:18.482423 unknown[1116]: fetched base config from "system" Jan 20 23:53:18.483137 ignition[1116]: fetch: fetch complete Jan 20 23:53:18.482437 unknown[1116]: fetched user config from "aws" Jan 20 23:53:18.483149 ignition[1116]: fetch: fetch passed Jan 20 23:53:18.483243 ignition[1116]: Ignition finished successfully Jan 20 23:53:18.496118 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 23:53:18.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:18.503522 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 20 23:53:18.548593 ignition[1122]: Ignition 2.24.0 Jan 20 23:53:18.549099 ignition[1122]: Stage: kargs Jan 20 23:53:18.549502 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:18.549524 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:18.549688 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:18.559101 ignition[1122]: PUT result: OK Jan 20 23:53:18.568097 ignition[1122]: kargs: kargs passed Jan 20 23:53:18.568218 ignition[1122]: Ignition finished successfully Jan 20 23:53:18.574409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 23:53:18.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:18.580756 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 23:53:18.631301 ignition[1128]: Ignition 2.24.0 Jan 20 23:53:18.631883 ignition[1128]: Stage: disks Jan 20 23:53:18.632287 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:18.632311 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:18.632449 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:18.642187 ignition[1128]: PUT result: OK Jan 20 23:53:18.650637 ignition[1128]: disks: disks passed Jan 20 23:53:18.650992 ignition[1128]: Ignition finished successfully Jan 20 23:53:18.657295 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 23:53:18.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:18.663074 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 23:53:18.668155 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 23:53:18.673644 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:53:18.676059 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 23:53:18.682665 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:53:18.690883 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 23:53:18.855116 systemd-fsck[1137]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 23:53:18.859853 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 23:53:18.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:18.867438 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 23:53:19.135580 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 81ddf123-ac73-4435-a963-542e3692f093 r/w with ordered data mode. Quota mode: none. Jan 20 23:53:19.136773 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 23:53:19.141228 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 23:53:19.195690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 23:53:19.201842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 23:53:19.210352 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 20 23:53:19.210447 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 23:53:19.216384 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:53:19.234549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 23:53:19.240771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 23:53:19.261208 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1156) Jan 20 23:53:19.261270 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:53:19.263288 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:53:19.273298 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 23:53:19.273378 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 23:53:19.276099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 23:53:21.464635 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 23:53:21.473351 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 20 23:53:21.473402 kernel: audit: type=1130 audit(1768953201.465:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:21.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:21.470693 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 23:53:21.489447 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 23:53:21.510573 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:53:21.510701 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 23:53:21.554670 ignition[1254]: INFO : Ignition 2.24.0 Jan 20 23:53:21.554670 ignition[1254]: INFO : Stage: mount Jan 20 23:53:21.562637 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:21.562637 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:21.562637 ignition[1254]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:21.570629 ignition[1254]: INFO : PUT result: OK Jan 20 23:53:21.578790 ignition[1254]: INFO : mount: mount passed Jan 20 23:53:21.580824 ignition[1254]: INFO : Ignition finished successfully Jan 20 23:53:21.582844 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 23:53:21.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:21.590075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 23:53:21.601183 kernel: audit: type=1130 audit(1768953201.587:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:21.601227 kernel: audit: type=1130 audit(1768953201.594:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 23:53:21.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:21.598773 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 23:53:21.639675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 23:53:21.687580 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1266) Jan 20 23:53:21.692789 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:53:21.692849 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:53:21.700299 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 23:53:21.700372 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 23:53:21.703524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 23:53:21.747963 ignition[1283]: INFO : Ignition 2.24.0 Jan 20 23:53:21.747963 ignition[1283]: INFO : Stage: files Jan 20 23:53:21.752043 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:21.752043 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:21.752043 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:21.752043 ignition[1283]: INFO : PUT result: OK Jan 20 23:53:21.765672 ignition[1283]: DEBUG : files: compiled without relabeling support, skipping Jan 20 23:53:21.770374 ignition[1283]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 23:53:21.770374 ignition[1283]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 23:53:21.865104 ignition[1283]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 23:53:21.868362 ignition[1283]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 23:53:21.871374 ignition[1283]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 23:53:21.869787 unknown[1283]: wrote ssh authorized keys file for user: core Jan 20 23:53:21.877599 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 23:53:21.877599 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 20 23:53:21.969642 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 23:53:22.100156 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 23:53:22.100156 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:53:22.108391 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:53:22.135759 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:53:22.135759 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:53:22.135759 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 23:53:22.149632 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 23:53:22.155999 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 23:53:22.155999 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 20 23:53:22.455661 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 23:53:22.827200 ignition[1283]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 23:53:22.831977 ignition[1283]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 23:53:22.831977 ignition[1283]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:53:22.842557 ignition[1283]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:53:22.842557 ignition[1283]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 23:53:22.842557 ignition[1283]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 23:53:22.853937 ignition[1283]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 23:53:22.853937 ignition[1283]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:53:22.853937 ignition[1283]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:53:22.853937 ignition[1283]: INFO : files: files passed Jan 20 23:53:22.853937 ignition[1283]: INFO : Ignition finished successfully Jan 20 23:53:22.871840 kernel: audit: type=1130 audit(1768953202.859:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:22.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.858607 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 23:53:22.863502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 23:53:22.892906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 23:53:22.907289 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 23:53:22.912154 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 23:53:22.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.927102 kernel: audit: type=1130 audit(1768953202.914:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.927180 kernel: audit: type=1131 audit(1768953202.914:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.938900 initrd-setup-root-after-ignition[1314]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:53:22.938900 initrd-setup-root-after-ignition[1314]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:53:22.949379 initrd-setup-root-after-ignition[1318]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:53:22.952978 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:53:22.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.961445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 23:53:22.966674 kernel: audit: type=1130 audit(1768953202.959:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:22.971281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 23:53:23.080403 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 23:53:23.097868 kernel: audit: type=1130 audit(1768953203.085:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.100200 kernel: audit: type=1131 audit(1768953203.085:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:23.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.082599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 23:53:23.087339 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 23:53:23.097835 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 23:53:23.103135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 23:53:23.107393 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 23:53:23.161391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:53:23.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.172132 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 23:53:23.178662 kernel: audit: type=1130 audit(1768953203.161:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.224932 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:53:23.226011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:53:23.230550 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:53:23.234220 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 23:53:23.240882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 23:53:23.241142 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:53:23.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.249165 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 23:53:23.256795 systemd[1]: Stopped target basic.target - Basic System. Jan 20 23:53:23.259081 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 23:53:23.262158 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:53:23.266933 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 23:53:23.268824 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:53:23.273684 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 23:53:23.281310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 23:53:23.283203 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 23:53:23.287977 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 23:53:23.295580 systemd[1]: Stopped target swap.target - Swaps. 
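The files stage earlier (the op(3) and op(a) entries) downloaded the helm tarball and the kubernetes sysext image over HTTPS, logging each try as "attempt #N" until a "GET result: OK". A hedged Python sketch of that retry-until-OK pattern; the URL and destination are copied from the journal above, but the attempt limit and linear backoff are assumptions made for the illustration, not Ignition's actual policy:

```python
import shutil
import time
import urllib.request
from pathlib import Path

def fetch_with_retries(url, dest, attempts=5, backoff=2.0):
    """Download url to dest, printing each try like the 'attempt #N' journal lines."""
    dest = Path(dest)
    dest.parent.mkdir(parents=True, exist_ok=True)
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, dest.open("wb") as out:
                shutil.copyfileobj(resp, out)
            print("GET result: OK")
            return
        except OSError as err:
            print(f"GET result: {err}")
            time.sleep(backoff * attempt)   # simple linear backoff (assumption)
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

# Paths mirror the journal above; "/sysroot" is the target root still mounted in the initrd.
fetch_with_retries(
    "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz",
    "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz",
)
```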
Jan 20 23:53:23.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.297207 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 23:53:23.297465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:53:23.305409 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:53:23.307722 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:53:23.312248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 23:53:23.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.312878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:53:23.319974 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 23:53:23.320195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 23:53:23.325396 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 23:53:23.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.325694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:53:23.333836 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 23:53:23.334042 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 23:53:23.338471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 23:53:23.358865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 23:53:23.359346 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:53:23.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.378693 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 23:53:23.382649 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 23:53:23.387223 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:53:23.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.394164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 23:53:23.395420 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:53:23.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.401871 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
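Each unit transition in the teardown above is mirrored by a kernel audit record of type SERVICE_START or SERVICE_STOP carrying a unit= field. A small sketch that tallies those records from journal text in the format shown here, for example piped in via `journalctl -b -o short`:

```python
import re
import sys
from collections import Counter

# Matches the audit records shown above, e.g.
#   audit[1]: SERVICE_STOP pid=1 ... msg='unit=dracut-pre-mount comm="systemd" ...'
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@.-]+)")

counts = Counter()
units = {"SERVICE_START": set(), "SERVICE_STOP": set()}

for line in sys.stdin:
    for action, unit in AUDIT_RE.findall(line):
        counts[action] += 1
        units[action].add(unit)

print(counts)
print("stopped but never started in this capture:",
      sorted(units["SERVICE_STOP"] - units["SERVICE_START"]))
```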
Jan 20 23:53:23.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.402109 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:53:23.429904 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 23:53:23.432144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 23:53:23.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.444421 ignition[1338]: INFO : Ignition 2.24.0 Jan 20 23:53:23.444421 ignition[1338]: INFO : Stage: umount Jan 20 23:53:23.450874 ignition[1338]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:53:23.450874 ignition[1338]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 23:53:23.456343 ignition[1338]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 23:53:23.459267 ignition[1338]: INFO : PUT result: OK Jan 20 23:53:23.463870 ignition[1338]: INFO : umount: umount passed Jan 20 23:53:23.466062 ignition[1338]: INFO : Ignition finished successfully Jan 20 23:53:23.473836 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 23:53:23.476869 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 23:53:23.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.486989 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 23:53:23.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.487096 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 23:53:23.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.491224 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 23:53:23.491322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 23:53:23.496839 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 23:53:23.496958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 23:53:23.501313 systemd[1]: Stopped target network.target - Network. Jan 20 23:53:23.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.513146 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 23:53:23.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:23.513282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:53:23.517284 systemd[1]: Stopped target paths.target - Path Units. Jan 20 23:53:23.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.519680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 23:53:23.524824 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:53:23.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.527570 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 23:53:23.578000 audit: BPF prog-id=6 op=UNLOAD Jan 20 23:53:23.527900 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 23:53:23.528158 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 23:53:23.587000 audit: BPF prog-id=9 op=UNLOAD Jan 20 23:53:23.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.528234 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:53:23.528494 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 23:53:23.528574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:53:23.528862 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 23:53:23.528913 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:53:23.529217 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 23:53:23.529313 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 23:53:23.529677 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 23:53:23.529753 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 23:53:23.530115 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 23:53:23.530455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 23:53:23.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:23.532615 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 23:53:23.533485 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 23:53:23.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.536066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 23:53:23.545109 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 23:53:23.545242 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 23:53:23.560759 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 23:53:23.560999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 23:53:23.581976 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 23:53:23.582238 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 23:53:23.594347 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 23:53:23.600529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 23:53:23.601593 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:53:23.613990 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 23:53:23.631076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 23:53:23.631219 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:53:23.634222 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 23:53:23.634313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:53:23.637492 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 23:53:23.637615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 23:53:23.644864 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:53:23.693574 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 23:53:23.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.693894 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:53:23.720753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 23:53:23.720909 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 23:53:23.723918 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 23:53:23.723993 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:53:23.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:23.727841 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 23:53:23.727945 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:53:23.735317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 23:53:23.735422 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 23:53:23.736427 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 23:53:23.736506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:53:23.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.756567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 23:53:23.759066 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 23:53:23.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.759183 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:53:23.762720 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 23:53:23.762827 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:53:23.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.772217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:53:23.772327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:53:23.796260 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 23:53:23.799381 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 23:53:23.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.816668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 23:53:23.817043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 23:53:23.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:23.825307 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 23:53:23.831467 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 23:53:23.876816 systemd[1]: Switching root. Jan 20 23:53:23.990018 systemd-journald[359]: Journal stopped Jan 20 23:53:27.856586 systemd-journald[359]: Received SIGTERM from PID 1 (systemd). 
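Before the switch root above, Ignition recorded its outcome in /sysroot/etc/.ignition-result.json (the createResultFile op(e) entry in the files stage); once the system is running from the real root, the same file sits at /etc/.ignition-result.json. The log does not show its schema, so this sketch simply reads the file back and pretty-prints whatever JSON it holds:

```python
import json
from pathlib import Path

# Written by Ignition under /sysroot during the files stage; visible at this
# path only on an Ignition-provisioned host after the root switch.
result = json.loads(Path("/etc/.ignition-result.json").read_text())
print(json.dumps(result, indent=2))
```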
Jan 20 23:53:27.859859 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 23:53:27.859922 kernel: SELinux: policy capability open_perms=1 Jan 20 23:53:27.859958 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 23:53:27.859998 kernel: SELinux: policy capability always_check_network=0 Jan 20 23:53:27.860032 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 23:53:27.860066 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 23:53:27.860099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 23:53:27.860134 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 23:53:27.860173 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 23:53:27.860206 systemd[1]: Successfully loaded SELinux policy in 120.822ms. Jan 20 23:53:27.860264 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.782ms. Jan 20 23:53:27.860302 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:53:27.860334 systemd[1]: Detected virtualization amazon. Jan 20 23:53:27.860371 systemd[1]: Detected architecture arm64. Jan 20 23:53:27.860407 systemd[1]: Detected first boot. Jan 20 23:53:27.860440 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:53:27.860473 zram_generator::config[1383]: No configuration found. Jan 20 23:53:27.860517 kernel: NET: Registered PF_VSOCK protocol family Jan 20 23:53:27.860635 systemd[1]: Populated /etc with preset unit settings. Jan 20 23:53:27.860679 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 20 23:53:27.860717 kernel: audit: type=1334 audit(1768953207.053:87): prog-id=12 op=LOAD Jan 20 23:53:27.860746 kernel: audit: type=1334 audit(1768953207.056:88): prog-id=3 op=UNLOAD Jan 20 23:53:27.860775 kernel: audit: type=1334 audit(1768953207.056:89): prog-id=13 op=LOAD Jan 20 23:53:27.860804 kernel: audit: type=1334 audit(1768953207.056:90): prog-id=14 op=LOAD Jan 20 23:53:27.860834 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 23:53:27.860864 kernel: audit: type=1334 audit(1768953207.056:91): prog-id=4 op=UNLOAD Jan 20 23:53:27.860898 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 23:53:27.860933 kernel: audit: type=1334 audit(1768953207.056:92): prog-id=5 op=UNLOAD Jan 20 23:53:27.860965 kernel: audit: type=1131 audit(1768953207.060:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.860997 kernel: audit: type=1334 audit(1768953207.077:94): prog-id=12 op=UNLOAD Jan 20 23:53:27.861026 kernel: audit: type=1130 audit(1768953207.081:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.861058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
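The lines above record the facts systemd gathers at early boot: the virtualization type ("amazon"), the architecture (arm64), first-boot status, and a machine ID initialized from the SMBIOS/DMI UUID. A sketch that reads the same facts back on a running systemd host (the DMI UUID is root-only, and systemd-detect-virt must be on PATH):

```python
import platform
import subprocess
from pathlib import Path

print("architecture  :", platform.machine())                       # e.g. aarch64
print("machine-id    :", Path("/etc/machine-id").read_text().strip())

# systemd-detect-virt prints the detected hypervisor name ("amazon" here).
virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
print("virtualization:", virt.stdout.strip() or "none")

# On first boot the machine ID is initialized from this DMI UUID (root-only read).
dmi_uuid = Path("/sys/class/dmi/id/product_uuid")
if dmi_uuid.exists():
    try:
        print("dmi product_uuid:", dmi_uuid.read_text().strip())
    except PermissionError:
        print("dmi product_uuid: (needs root)")
```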
Jan 20 23:53:27.861090 kernel: audit: type=1131 audit(1768953207.081:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.861135 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 23:53:27.861166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 23:53:27.861195 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 23:53:27.861224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 23:53:27.861254 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 23:53:27.861322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 23:53:27.861357 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 23:53:27.861391 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 23:53:27.861421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:53:27.861450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:53:27.861481 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 23:53:27.861510 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 23:53:27.861581 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 23:53:27.861624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:53:27.861654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 23:53:27.861684 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:53:27.861716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:53:27.861745 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 23:53:27.861775 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 23:53:27.861807 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 23:53:27.861843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 23:53:27.861872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:53:27.861904 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:53:27.861937 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 23:53:27.861965 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:53:27.861996 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:53:27.862025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 23:53:27.862059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 23:53:27.862088 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 23:53:27.862117 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:53:27.862147 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
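The slice and device names above, such as system-addon\x2dconfig.slice and dev-disk-by\x2dlabel-OEM.device, use systemd's unit-name escaping: a literal "-" inside a component becomes \x2d so it is not mistaken for a hierarchy separator (the systemd-escape tool performs the forward direction). A tiny decoder sketch for reading such names back:

```python
import re

def unescape_unit(name: str) -> str:
    """Decode systemd's \\xNN escapes, e.g. 'addon\\x2dconfig' -> 'addon-config'."""
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  name)

# Names copied from the journal above (doubled backslashes so the Python
# source holds a literal backslash, as the journal does).
for unit in ("system-addon\\x2dconfig.slice",
             "dev-disk-by\\x2dlabel-OEM.device",
             "system-serial\\x2dgetty.slice"):
    print(unit, "->", unescape_unit(unit))
```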
Jan 20 23:53:27.862176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:53:27.862207 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 23:53:27.862238 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 23:53:27.862271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:53:27.862300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:53:27.862331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 23:53:27.862362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 23:53:27.862391 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 23:53:27.862420 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 23:53:27.862451 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 23:53:27.862508 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 23:53:27.862577 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 23:53:27.862616 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 23:53:27.862647 systemd[1]: Reached target machines.target - Containers. Jan 20 23:53:27.862679 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 23:53:27.862712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:53:27.862742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:53:27.862776 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 23:53:27.862805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:53:27.862834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:53:27.862863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:53:27.862893 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 23:53:27.862925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 23:53:27.862958 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 23:53:27.862994 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 23:53:27.863024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 23:53:27.863057 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 23:53:27.863090 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 23:53:27.863133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:53:27.863169 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:53:27.863198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:53:27.863228 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 20 23:53:27.863260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 23:53:27.863290 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 23:53:27.863325 kernel: fuse: init (API version 7.41) Jan 20 23:53:27.863358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 23:53:27.863389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 23:53:27.863418 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 23:53:27.863447 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 23:53:27.863476 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 23:53:27.863505 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 23:53:27.863554 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 23:53:27.863588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:53:27.863620 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 23:53:27.863653 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 23:53:27.863682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:53:27.863712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:53:27.863746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:53:27.863778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:53:27.863807 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 23:53:27.863840 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 23:53:27.863873 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:53:27.863909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:53:27.863943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 23:53:27.863973 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 23:53:27.864005 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 23:53:27.864039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 23:53:27.864069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 23:53:27.864099 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:53:27.864131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 23:53:27.864160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:53:27.864190 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:53:27.864220 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 23:53:27.864254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:53:27.864284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
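The modprobe@<module>.service instances being started and finished above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each load one kernel module; once a module is loaded (and in most built-in cases too) it appears under /sys/module/<name>. A quick sketch that checks for them on a running system:

```python
from pathlib import Path

# Modules named by the modprobe@ instances in the journal above.
for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    state = Path(f"/sys/module/{module}")
    print(f"{module:10s} {'present' if state.exists() else 'absent'}")
```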
Jan 20 23:53:27.864314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:53:27.864344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 23:53:27.864375 kernel: ACPI: bus type drm_connector registered Jan 20 23:53:27.864403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:53:27.864460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:53:27.864501 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 23:53:27.864606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 23:53:27.864642 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:53:27.864706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:53:27.864764 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 23:53:27.864801 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:53:27.864881 systemd-journald[1457]: Collecting audit messages is enabled. Jan 20 23:53:27.864937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:53:27.864970 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 23:53:27.865002 systemd-journald[1457]: Journal started Jan 20 23:53:27.865048 systemd-journald[1457]: Runtime Journal (/run/log/journal/ec20ac17bf75beae9c5bf0614dc8e9dc) is 8M, max 75.3M, 67.3M free. Jan 20 23:53:27.272000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 23:53:27.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.501000 audit: BPF prog-id=14 op=UNLOAD Jan 20 23:53:27.501000 audit: BPF prog-id=13 op=UNLOAD Jan 20 23:53:27.505000 audit: BPF prog-id=15 op=LOAD Jan 20 23:53:27.505000 audit: BPF prog-id=16 op=LOAD Jan 20 23:53:27.505000 audit: BPF prog-id=17 op=LOAD Jan 20 23:53:27.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.870599 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 23:53:27.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:27.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:27.850000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 23:53:27.850000 audit[1457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff29c64f0 a2=4000 a3=0 items=0 ppid=1 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:27.850000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 23:53:27.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:27.042187 systemd[1]: Queued start job for default target multi-user.target. Jan 20 23:53:27.058924 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 20 23:53:27.061257 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 23:53:27.875199 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 23:53:27.881424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 23:53:27.890880 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 23:53:27.909586 kernel: loop1: detected capacity change from 0 to 200800 Jan 20 23:53:27.924037 systemd-journald[1457]: Time spent on flushing to /var/log/journal/ec20ac17bf75beae9c5bf0614dc8e9dc is 49.840ms for 1054 entries. Jan 20 23:53:27.924037 systemd-journald[1457]: System Journal (/var/log/journal/ec20ac17bf75beae9c5bf0614dc8e9dc) is 8M, max 588.1M, 580.1M free. Jan 20 23:53:27.989207 systemd-journald[1457]: Received client request to flush runtime journal. Jan 20 23:53:27.994143 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 23:53:27.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.043680 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 23:53:28.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.066672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 23:53:28.113248 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:53:28.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.125145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
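systemd-journald above reports its runtime and system journal sizes and the time spent flushing /run to /var/log/journal. A rough sketch that sums the on-disk journal files much as `journalctl --disk-usage` summarizes them (run as root or as a member of systemd-journal to see everything):

```python
from pathlib import Path

def journal_usage(root):
    # Sum the sizes of *.journal files under one journal root, roughly what the
    # "Runtime Journal" / "System Journal" size lines above describe.
    root = Path(root)
    if not root.exists():
        return 0
    return sum(f.stat().st_size for f in root.rglob("*.journal"))

for root in ("/run/log/journal", "/var/log/journal"):
    print(f"{root}: {journal_usage(root) / 2**20:.1f} MiB")
```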
Jan 20 23:53:28.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.236756 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 23:53:28.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.246803 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 23:53:28.259610 kernel: loop2: detected capacity change from 0 to 100192 Jan 20 23:53:28.318787 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 23:53:28.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.323000 audit: BPF prog-id=18 op=LOAD Jan 20 23:53:28.323000 audit: BPF prog-id=19 op=LOAD Jan 20 23:53:28.323000 audit: BPF prog-id=20 op=LOAD Jan 20 23:53:28.326268 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 23:53:28.329000 audit: BPF prog-id=21 op=LOAD Jan 20 23:53:28.332146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:53:28.340007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:53:28.358000 audit: BPF prog-id=22 op=LOAD Jan 20 23:53:28.359000 audit: BPF prog-id=23 op=LOAD Jan 20 23:53:28.359000 audit: BPF prog-id=24 op=LOAD Jan 20 23:53:28.362920 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 23:53:28.366000 audit: BPF prog-id=25 op=LOAD Jan 20 23:53:28.366000 audit: BPF prog-id=26 op=LOAD Jan 20 23:53:28.367000 audit: BPF prog-id=27 op=LOAD Jan 20 23:53:28.378723 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 23:53:28.429591 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Jan 20 23:53:28.429630 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Jan 20 23:53:28.442770 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:53:28.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.490391 systemd-nsresourced[1539]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 23:53:28.496964 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 23:53:28.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.540108 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 23:53:28.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:28.660578 kernel: loop3: detected capacity change from 0 to 45344 Jan 20 23:53:28.689900 systemd-oomd[1536]: No swap; memory pressure usage will be degraded Jan 20 23:53:28.690865 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 23:53:28.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.721979 systemd-resolved[1537]: Positive Trust Anchors: Jan 20 23:53:28.722014 systemd-resolved[1537]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:53:28.722024 systemd-resolved[1537]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:53:28.722085 systemd-resolved[1537]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:53:28.736224 systemd-resolved[1537]: Defaulting to hostname 'linux'. Jan 20 23:53:28.738604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:53:28.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:28.744069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:53:28.932610 kernel: loop4: detected capacity change from 0 to 61504 Jan 20 23:53:28.978627 kernel: loop5: detected capacity change from 0 to 200800 Jan 20 23:53:29.008633 kernel: loop6: detected capacity change from 0 to 100192 Jan 20 23:53:29.022572 kernel: loop7: detected capacity change from 0 to 45344 Jan 20 23:53:29.035585 kernel: loop1: detected capacity change from 0 to 61504 Jan 20 23:53:29.059419 (sd-merge)[1561]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 20 23:53:29.069706 (sd-merge)[1561]: Merged extensions into '/usr'. Jan 20 23:53:29.079524 systemd[1]: Reload requested from client PID 1475 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 23:53:29.080082 systemd[1]: Reloading... Jan 20 23:53:29.209575 zram_generator::config[1587]: No configuration found. Jan 20 23:53:29.615434 systemd[1]: Reloading finished in 534 ms. Jan 20 23:53:29.636506 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 23:53:29.640239 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 23:53:29.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:29.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 23:53:29.658664 systemd[1]: Starting ensure-sysext.service... Jan 20 23:53:29.664853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 23:53:29.666000 audit: BPF prog-id=8 op=UNLOAD Jan 20 23:53:29.666000 audit: BPF prog-id=7 op=UNLOAD Jan 20 23:53:29.667000 audit: BPF prog-id=28 op=LOAD Jan 20 23:53:29.667000 audit: BPF prog-id=29 op=LOAD Jan 20 23:53:29.671662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:53:29.676000 audit: BPF prog-id=30 op=LOAD Jan 20 23:53:29.679000 audit: BPF prog-id=22 op=UNLOAD Jan 20 23:53:29.679000 audit: BPF prog-id=31 op=LOAD Jan 20 23:53:29.679000 audit: BPF prog-id=32 op=LOAD Jan 20 23:53:29.679000 audit: BPF prog-id=23 op=UNLOAD Jan 20 23:53:29.679000 audit: BPF prog-id=24 op=UNLOAD Jan 20 23:53:29.680000 audit: BPF prog-id=33 op=LOAD Jan 20 23:53:29.680000 audit: BPF prog-id=25 op=UNLOAD Jan 20 23:53:29.681000 audit: BPF prog-id=34 op=LOAD Jan 20 23:53:29.681000 audit: BPF prog-id=35 op=LOAD Jan 20 23:53:29.681000 audit: BPF prog-id=26 op=UNLOAD Jan 20 23:53:29.681000 audit: BPF prog-id=27 op=UNLOAD Jan 20 23:53:29.684000 audit: BPF prog-id=36 op=LOAD Jan 20 23:53:29.684000 audit: BPF prog-id=21 op=UNLOAD Jan 20 23:53:29.689000 audit: BPF prog-id=37 op=LOAD Jan 20 23:53:29.689000 audit: BPF prog-id=15 op=UNLOAD Jan 20 23:53:29.689000 audit: BPF prog-id=38 op=LOAD Jan 20 23:53:29.689000 audit: BPF prog-id=39 op=LOAD Jan 20 23:53:29.689000 audit: BPF prog-id=16 op=UNLOAD Jan 20 23:53:29.689000 audit: BPF prog-id=17 op=UNLOAD Jan 20 23:53:29.690000 audit: BPF prog-id=40 op=LOAD Jan 20 23:53:29.690000 audit: BPF prog-id=18 op=UNLOAD Jan 20 23:53:29.692000 audit: BPF prog-id=41 op=LOAD Jan 20 23:53:29.692000 audit: BPF prog-id=42 op=LOAD Jan 20 23:53:29.692000 audit: BPF prog-id=19 op=UNLOAD Jan 20 23:53:29.692000 audit: BPF prog-id=20 op=UNLOAD Jan 20 23:53:29.715046 systemd[1]: Reload requested from client PID 1643 ('systemctl') (unit ensure-sysext.service)... Jan 20 23:53:29.715078 systemd[1]: Reloading... Jan 20 23:53:29.732496 systemd-tmpfiles[1644]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 23:53:29.735641 systemd-tmpfiles[1644]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 23:53:29.736469 systemd-tmpfiles[1644]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 23:53:29.742723 systemd-tmpfiles[1644]: ACLs are not supported, ignoring. Jan 20 23:53:29.742878 systemd-tmpfiles[1644]: ACLs are not supported, ignoring. Jan 20 23:53:29.758371 systemd-tmpfiles[1644]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:53:29.758631 systemd-tmpfiles[1644]: Skipping /boot Jan 20 23:53:29.783075 systemd-tmpfiles[1644]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:53:29.783274 systemd-tmpfiles[1644]: Skipping /boot Jan 20 23:53:29.807679 systemd-udevd[1645]: Using default interface naming scheme 'v257'. Jan 20 23:53:29.889769 zram_generator::config[1680]: No configuration found. Jan 20 23:53:30.122380 (udev-worker)[1724]: Network interface NamePolicy= disabled on kernel command line. Jan 20 23:53:30.504802 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 23:53:30.506205 systemd[1]: Reloading finished in 790 ms. 
Jan 20 23:53:30.552865 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:53:30.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.562000 audit: BPF prog-id=43 op=LOAD Jan 20 23:53:30.562000 audit: BPF prog-id=40 op=UNLOAD Jan 20 23:53:30.562000 audit: BPF prog-id=44 op=LOAD Jan 20 23:53:30.562000 audit: BPF prog-id=45 op=LOAD Jan 20 23:53:30.562000 audit: BPF prog-id=41 op=UNLOAD Jan 20 23:53:30.562000 audit: BPF prog-id=42 op=UNLOAD Jan 20 23:53:30.565000 audit: BPF prog-id=46 op=LOAD Jan 20 23:53:30.565000 audit: BPF prog-id=33 op=UNLOAD Jan 20 23:53:30.565000 audit: BPF prog-id=47 op=LOAD Jan 20 23:53:30.567000 audit: BPF prog-id=48 op=LOAD Jan 20 23:53:30.567000 audit: BPF prog-id=34 op=UNLOAD Jan 20 23:53:30.567000 audit: BPF prog-id=35 op=UNLOAD Jan 20 23:53:30.569000 audit: BPF prog-id=49 op=LOAD Jan 20 23:53:30.569000 audit: BPF prog-id=36 op=UNLOAD Jan 20 23:53:30.571000 audit: BPF prog-id=50 op=LOAD Jan 20 23:53:30.571000 audit: BPF prog-id=51 op=LOAD Jan 20 23:53:30.571000 audit: BPF prog-id=28 op=UNLOAD Jan 20 23:53:30.571000 audit: BPF prog-id=29 op=UNLOAD Jan 20 23:53:30.574000 audit: BPF prog-id=52 op=LOAD Jan 20 23:53:30.574000 audit: BPF prog-id=37 op=UNLOAD Jan 20 23:53:30.574000 audit: BPF prog-id=53 op=LOAD Jan 20 23:53:30.576000 audit: BPF prog-id=54 op=LOAD Jan 20 23:53:30.576000 audit: BPF prog-id=38 op=UNLOAD Jan 20 23:53:30.576000 audit: BPF prog-id=39 op=UNLOAD Jan 20 23:53:30.579000 audit: BPF prog-id=55 op=LOAD Jan 20 23:53:30.579000 audit: BPF prog-id=30 op=UNLOAD Jan 20 23:53:30.580000 audit: BPF prog-id=56 op=LOAD Jan 20 23:53:30.580000 audit: BPF prog-id=57 op=LOAD Jan 20 23:53:30.580000 audit: BPF prog-id=31 op=UNLOAD Jan 20 23:53:30.580000 audit: BPF prog-id=32 op=UNLOAD Jan 20 23:53:30.620933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:53:30.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.658094 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 23:53:30.667037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 23:53:30.675012 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 23:53:30.685020 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 23:53:30.689000 audit: BPF prog-id=58 op=LOAD Jan 20 23:53:30.698051 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:53:30.703894 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 23:53:30.722412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:53:30.729368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:53:30.738908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:53:30.749764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 23:53:30.752680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:53:30.753159 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:53:30.753465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:53:30.765726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:53:30.766236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:53:30.766692 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:53:30.766987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:53:30.779054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:53:30.783786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:53:30.786569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:53:30.787039 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:53:30.787353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:53:30.788985 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 23:53:30.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.803692 systemd[1]: Finished ensure-sysext.service. Jan 20 23:53:30.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.825100 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 20 23:53:30.825688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:53:30.829102 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:53:30.830607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:53:30.893000 audit[1781]: SYSTEM_BOOT pid=1781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:30.955676 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 23:53:30.974430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:53:30.980652 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 23:53:30.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.006798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:53:31.008066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:53:31.011882 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:53:31.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.021492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:53:31.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.023054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:53:31.040337 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 23:53:31.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:31.047072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 20 23:53:31.047682 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 23:53:31.091000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 23:53:31.091000 audit[1826]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe3f5f8c0 a2=420 a3=0 items=0 ppid=1776 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:31.091000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:53:31.093287 augenrules[1826]: No rules Jan 20 23:53:31.097791 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 23:53:31.100169 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 23:53:31.373693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 20 23:53:31.384796 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 23:53:31.389889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:53:31.424108 systemd-networkd[1780]: lo: Link UP Jan 20 23:53:31.424652 systemd-networkd[1780]: lo: Gained carrier Jan 20 23:53:31.427804 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:53:31.428335 systemd[1]: Reached target network.target - Network. Jan 20 23:53:31.432113 systemd-networkd[1780]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:53:31.432640 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 23:53:31.435627 systemd-networkd[1780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:53:31.438018 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 23:53:31.443618 systemd-networkd[1780]: eth0: Link UP Jan 20 23:53:31.444035 systemd-networkd[1780]: eth0: Gained carrier Jan 20 23:53:31.444074 systemd-networkd[1780]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:53:31.460708 systemd-networkd[1780]: eth0: DHCPv4 address 172.31.29.43/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 20 23:53:31.484646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 23:53:31.488339 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 23:53:32.809692 systemd-networkd[1780]: eth0: Gained IPv6LL Jan 20 23:53:32.814760 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 23:53:32.819043 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 23:53:33.866329 ldconfig[1778]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 23:53:33.880637 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
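The systemd-networkd lines above show eth0 acquiring 172.31.29.43/20 via DHCPv4 with gateway 172.31.16.1. As a quick sanity check (illustrative only, not something networkd itself runs), the /20 prefix places both addresses in the 172.31.16.0/20 network:

# Confirm the DHCP address and gateway from the log share the /20 network.
# Pure illustration of the subnet arithmetic; not part of systemd-networkd.
import ipaddress

iface = ipaddress.ip_interface("172.31.29.43/20")
print(iface.network)                                          # 172.31.16.0/20
print(ipaddress.ip_address("172.31.16.1") in iface.network)   # True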
Jan 20 23:53:33.886067 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 23:53:33.915111 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 23:53:33.918124 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 23:53:33.920806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 23:53:33.923712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 23:53:33.926917 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 23:53:33.929686 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 23:53:33.932618 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 23:53:33.935867 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 23:53:33.938614 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 23:53:33.941476 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 23:53:33.941557 systemd[1]: Reached target paths.target - Path Units. Jan 20 23:53:33.943624 systemd[1]: Reached target timers.target - Timer Units. Jan 20 23:53:33.947100 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 23:53:33.952096 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 23:53:33.958879 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 23:53:33.962237 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 23:53:33.965639 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 23:53:33.975863 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 23:53:33.978822 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 23:53:33.982639 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 23:53:33.985192 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:53:33.987693 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:53:33.989966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:53:33.990142 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:53:33.992099 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 23:53:33.996955 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 23:53:34.005474 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 23:53:34.014584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 23:53:34.024945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 23:53:34.033988 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 23:53:34.036992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 23:53:34.070723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 23:53:34.079008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 23:53:34.087108 systemd[1]: Started ntpd.service - Network Time Service. Jan 20 23:53:34.092721 jq[1924]: false Jan 20 23:53:34.094998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 23:53:34.099882 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 23:53:34.107127 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 20 23:53:34.116979 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 23:53:34.127990 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 23:53:34.139740 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 23:53:34.142134 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 23:53:34.143064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 23:53:34.145258 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 23:53:34.149855 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 23:53:34.156763 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 23:53:34.160240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 23:53:34.160752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 23:53:34.178614 extend-filesystems[1925]: Found /dev/nvme0n1p6 Jan 20 23:53:34.199570 extend-filesystems[1925]: Found /dev/nvme0n1p9 Jan 20 23:53:34.201504 extend-filesystems[1925]: Checking size of /dev/nvme0n1p9 Jan 20 23:53:34.291702 extend-filesystems[1925]: Resized partition /dev/nvme0n1p9 Jan 20 23:53:34.313725 jq[1939]: true Jan 20 23:53:34.338885 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 23:53:34.339424 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 23:53:34.346466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 23:53:34.346981 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 23:53:34.358580 extend-filesystems[1975]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 23:53:34.371295 ntpd[1931]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:48 UTC 2026 (1): Starting Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:48 UTC 2026 (1): Starting Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: ---------------------------------------------------- Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: ntp-4 is maintained by Network Time Foundation, Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: corporation. 
Support and training for ntp-4 are Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: available at https://www.nwtime.org/support Jan 20 23:53:34.373111 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: ---------------------------------------------------- Jan 20 23:53:34.371403 ntpd[1931]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 20 23:53:34.371422 ntpd[1931]: ---------------------------------------------------- Jan 20 23:53:34.371439 ntpd[1931]: ntp-4 is maintained by Network Time Foundation, Jan 20 23:53:34.371455 ntpd[1931]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 20 23:53:34.371471 ntpd[1931]: corporation. Support and training for ntp-4 are Jan 20 23:53:34.371488 ntpd[1931]: available at https://www.nwtime.org/support Jan 20 23:53:34.371504 ntpd[1931]: ---------------------------------------------------- Jan 20 23:53:34.378033 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 20 23:53:34.381004 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 23:53:34.387489 ntpd[1931]: proto: precision = 0.096 usec (-23) Jan 20 23:53:34.388124 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: proto: precision = 0.096 usec (-23) Jan 20 23:53:34.388640 ntpd[1931]: basedate set to 2026-01-08 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: basedate set to 2026-01-08 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: gps base set to 2026-01-11 (week 2401) Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen and drop on 0 v6wildcard [::]:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen normally on 2 lo 127.0.0.1:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen normally on 3 eth0 172.31.29.43:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen normally on 4 lo [::1]:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listen normally on 5 eth0 [fe80::412:dbff:fee9:debb%2]:123 Jan 20 23:53:34.390807 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: Listening on routing socket on fd #22 for interface updates Jan 20 23:53:34.388680 ntpd[1931]: gps base set to 2026-01-11 (week 2401) Jan 20 23:53:34.388873 ntpd[1931]: Listen and drop on 0 v6wildcard [::]:123 Jan 20 23:53:34.388918 ntpd[1931]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 20 23:53:34.389221 ntpd[1931]: Listen normally on 2 lo 127.0.0.1:123 Jan 20 23:53:34.389265 ntpd[1931]: Listen normally on 3 eth0 172.31.29.43:123 Jan 20 23:53:34.389310 ntpd[1931]: Listen normally on 4 lo [::1]:123 Jan 20 23:53:34.389354 ntpd[1931]: Listen normally on 5 eth0 [fe80::412:dbff:fee9:debb%2]:123 Jan 20 23:53:34.389395 ntpd[1931]: Listening on routing socket on fd #22 for interface updates Jan 20 23:53:34.395317 dbus-daemon[1922]: [system] SELinux support is enabled Jan 20 23:53:34.395802 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 23:53:34.404517 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 23:53:34.405643 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 20 23:53:34.408690 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 23:53:34.408731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 23:53:34.419665 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 20 23:53:34.421813 ntpd[1931]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 23:53:34.437758 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 23:53:34.437758 ntpd[1931]: 20 Jan 23:53:34 ntpd[1931]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 23:53:34.421864 ntpd[1931]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 23:53:34.443293 extend-filesystems[1975]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 20 23:53:34.443293 extend-filesystems[1975]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 20 23:53:34.443293 extend-filesystems[1975]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 20 23:53:34.456362 extend-filesystems[1925]: Resized filesystem in /dev/nvme0n1p9 Jan 20 23:53:34.449213 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 23:53:34.464349 dbus-daemon[1922]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1780 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 23:53:34.465926 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 23:53:34.479221 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 23:53:34.482595 jq[1982]: true Jan 20 23:53:34.501602 tar[1949]: linux-arm64/LICENSE Jan 20 23:53:34.501602 tar[1949]: linux-arm64/helm Jan 20 23:53:34.502220 update_engine[1938]: I20260120 23:53:34.494163 1938 main.cc:92] Flatcar Update Engine starting Jan 20 23:53:34.512059 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 23:53:34.521956 systemd[1]: Started update-engine.service - Update Engine. Jan 20 23:53:34.529032 update_engine[1938]: I20260120 23:53:34.526394 1938 update_check_scheduler.cc:74] Next update check in 6m14s Jan 20 23:53:34.624038 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 23:53:34.627742 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 20 23:53:34.637884 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 20 23:53:34.801009 bash[2028]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:53:34.813223 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 23:53:34.820802 systemd[1]: Starting sshkeys.service... 
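The EXT4-fs kernel lines and the extend-filesystems/resize2fs output above record an online grow of the root filesystem from 1617920 to 2604027 blocks. With the 4 KiB block size resize2fs reports, that is roughly 6.2 GiB growing to about 9.9 GiB; the short check below is only an illustration of that arithmetic, not output from any Flatcar tool:

# Rough size check for the online resize recorded above (4 KiB ext4 blocks).
# Illustrative arithmetic only; not part of extend-filesystems or resize2fs.
BLOCK_BYTES = 4096
old_blocks = 1_617_920
new_blocks = 2_604_027

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_BYTES / 2**30

print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
# prints: 6.17 GiB -> 9.93 GiB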
Jan 20 23:53:34.826289 coreos-metadata[1921]: Jan 20 23:53:34.824 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 20 23:53:34.841344 coreos-metadata[1921]: Jan 20 23:53:34.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 20 23:53:34.841344 coreos-metadata[1921]: Jan 20 23:53:34.839 INFO Fetch successful Jan 20 23:53:34.841344 coreos-metadata[1921]: Jan 20 23:53:34.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 20 23:53:34.857626 coreos-metadata[1921]: Jan 20 23:53:34.848 INFO Fetch successful Jan 20 23:53:34.857626 coreos-metadata[1921]: Jan 20 23:53:34.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetch successful Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetch successful Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetch failed with 404: resource not found Jan 20 23:53:34.864559 coreos-metadata[1921]: Jan 20 23:53:34.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 20 23:53:34.874573 coreos-metadata[1921]: Jan 20 23:53:34.874 INFO Fetch successful Jan 20 23:53:34.874573 coreos-metadata[1921]: Jan 20 23:53:34.874 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 20 23:53:34.876040 coreos-metadata[1921]: Jan 20 23:53:34.875 INFO Fetch successful Jan 20 23:53:34.876040 coreos-metadata[1921]: Jan 20 23:53:34.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 20 23:53:34.888485 coreos-metadata[1921]: Jan 20 23:53:34.884 INFO Fetch successful Jan 20 23:53:34.888485 coreos-metadata[1921]: Jan 20 23:53:34.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 20 23:53:34.888485 coreos-metadata[1921]: Jan 20 23:53:34.888 INFO Fetch successful Jan 20 23:53:34.888485 coreos-metadata[1921]: Jan 20 23:53:34.888 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 20 23:53:34.895077 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 23:53:34.900752 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 23:53:34.920182 coreos-metadata[1921]: Jan 20 23:53:34.916 INFO Fetch successful Jan 20 23:53:34.958550 systemd-logind[1937]: Watching system buttons on /dev/input/event0 (Power Button) Jan 20 23:53:34.958604 systemd-logind[1937]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 20 23:53:34.962968 systemd-logind[1937]: New seat seat0. Jan 20 23:53:34.970734 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 23:53:35.118934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 23:53:35.122639 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
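The coreos-metadata entries above follow the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then metadata GETs under /2021-01-03/meta-data/ that present that token. A minimal sketch of the same exchange (illustrative only; coreos-metadata implements this internally, and the helper name imds_get is made up here) could look like:

# Minimal IMDSv2 sketch: fetch a session token, then read instance metadata.
# Assumes it runs on an EC2 instance where 169.254.169.254 is reachable.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_get(path: str) -> str:
    # Step 1: PUT /latest/api/token with a TTL header to obtain a session token.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=2) as resp:
        token = resp.read().decode()

    # Step 2: GET the metadata path, presenting the token.
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Same paths the log shows coreos-metadata fetching.
    print(imds_get("/2021-01-03/meta-data/instance-id"))
    print(imds_get("/2021-01-03/meta-data/placement/availability-zone"))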
Jan 20 23:53:35.165833 amazon-ssm-agent[2026]: Initializing new seelog logger Jan 20 23:53:35.169904 amazon-ssm-agent[2026]: New Seelog Logger Creation Complete Jan 20 23:53:35.169904 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.169904 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.172763 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 processing appconfig overrides Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 processing appconfig overrides Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.178565 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 processing appconfig overrides Jan 20 23:53:35.181751 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1759 INFO Proxy environment variables: Jan 20 23:53:35.186388 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.187101 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:35.187333 amazon-ssm-agent[2026]: 2026/01/20 23:53:35 processing appconfig overrides Jan 20 23:53:35.284040 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1759 INFO https_proxy: Jan 20 23:53:35.379002 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 23:53:35.385747 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 23:53:35.391713 dbus-daemon[1922]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2003 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 23:53:35.400508 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1759 INFO http_proxy: Jan 20 23:53:35.401307 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 20 23:53:35.442619 coreos-metadata[2069]: Jan 20 23:53:35.437 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 20 23:53:35.450824 coreos-metadata[2069]: Jan 20 23:53:35.450 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 20 23:53:35.452216 coreos-metadata[2069]: Jan 20 23:53:35.452 INFO Fetch successful Jan 20 23:53:35.452216 coreos-metadata[2069]: Jan 20 23:53:35.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 23:53:35.456962 coreos-metadata[2069]: Jan 20 23:53:35.456 INFO Fetch successful Jan 20 23:53:35.468030 unknown[2069]: wrote ssh authorized keys file for user: core Jan 20 23:53:35.498339 sshd_keygen[1996]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 23:53:35.501575 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1759 INFO no_proxy: Jan 20 23:53:35.504071 containerd[1967]: time="2026-01-20T23:53:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 23:53:35.516943 containerd[1967]: time="2026-01-20T23:53:35.513170660Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 23:53:35.595191 update-ssh-keys[2129]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:53:35.597975 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 23:53:35.605935 systemd[1]: Finished sshkeys.service. Jan 20 23:53:35.613560 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1761 INFO Checking if agent identity type OnPrem can be assumed Jan 20 23:53:35.632489 containerd[1967]: time="2026-01-20T23:53:35.632410316Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.996µs" Jan 20 23:53:35.632489 containerd[1967]: time="2026-01-20T23:53:35.632471768Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 23:53:35.632667 containerd[1967]: time="2026-01-20T23:53:35.632559872Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 23:53:35.632667 containerd[1967]: time="2026-01-20T23:53:35.632592140Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 23:53:35.633361 containerd[1967]: time="2026-01-20T23:53:35.632899592Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 23:53:35.633361 containerd[1967]: time="2026-01-20T23:53:35.632948180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:53:35.633361 containerd[1967]: time="2026-01-20T23:53:35.633074576Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:53:35.633361 containerd[1967]: time="2026-01-20T23:53:35.633100028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.635741264Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.635802812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.635840780Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.635864360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.636206840Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.636234164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 23:53:35.638563 containerd[1967]: time="2026-01-20T23:53:35.636396836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.644482 containerd[1967]: time="2026-01-20T23:53:35.643432100Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.644482 containerd[1967]: time="2026-01-20T23:53:35.643530896Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:53:35.644482 containerd[1967]: time="2026-01-20T23:53:35.643580756Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 23:53:35.644482 containerd[1967]: time="2026-01-20T23:53:35.643666532Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 23:53:35.647667 containerd[1967]: time="2026-01-20T23:53:35.645916304Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 23:53:35.647667 containerd[1967]: time="2026-01-20T23:53:35.646136000Z" level=info msg="metadata content store policy set" policy=shared Jan 20 23:53:35.651843 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.658075388Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.658174772Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.658327796Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.658355276Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661097696Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661140308Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661207940Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661271576Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661302632Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661360520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661421912Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661455620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661504136Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 23:53:35.665484 containerd[1967]: time="2026-01-20T23:53:35.661575272Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 23:53:35.658954 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.661908452Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.661993712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.662057024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.662119220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.662155100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664582700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664664516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664695140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664746704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664776836Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664825772Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.664915076Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.665011268Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.665042276Z" level=info msg="Start snapshots syncer" Jan 20 23:53:35.666291 containerd[1967]: time="2026-01-20T23:53:35.665129684Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 23:53:35.667935 containerd[1967]: time="2026-01-20T23:53:35.667832289Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 23:53:35.668166 containerd[1967]: time="2026-01-20T23:53:35.667964649Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 23:53:35.668166 containerd[1967]: time="2026-01-20T23:53:35.668050953Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668293485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668353821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668384925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668412477Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668443101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668469573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 23:53:35.669518 containerd[1967]: time="2026-01-20T23:53:35.668496549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.668523501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674377125Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674564973Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674602173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674625837Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674680029Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674713185Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.674783925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.675603897Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.677758077Z" level=info msg="runtime interface created" Jan 20 23:53:35.677888 containerd[1967]: time="2026-01-20T23:53:35.677791233Z" level=info msg="created NRI interface" Jan 20 23:53:35.678392 containerd[1967]: time="2026-01-20T23:53:35.677872305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 23:53:35.678392 containerd[1967]: time="2026-01-20T23:53:35.677942985Z" level=info msg="Connect containerd service" Jan 20 23:53:35.678392 containerd[1967]: time="2026-01-20T23:53:35.678025329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 23:53:35.688430 containerd[1967]: time="2026-01-20T23:53:35.688360329Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 23:53:35.713635 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.1762 INFO Checking if agent identity type EC2 can be assumed Jan 20 23:53:35.753198 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 23:53:35.754808 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 23:53:35.755727 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 23:53:35.768016 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 23:53:35.822610 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5419 INFO Agent will take identity from EC2 Jan 20 23:53:35.872816 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 23:53:35.894098 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 23:53:35.906586 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
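Note on the CNI warning above: containerd's CRI plugin reports "no network config found in /etc/cni/net.d" because, at this point in boot, no CNI add-on has written a network config yet; the config dump earlier shows confDir "/etc/cni/net.d" and binDirs ["/opt/cni/bin"]. The warning clears once a conflist appears there, and host-network static pods can still start in the meantime. A minimal sketch of the same readiness check, assuming only the default paths taken from the log:

# Sketch: report whether a CNI network config is present, mirroring the
# directories shown in the containerd CRI config dump above (assumes defaults).
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"   # confDir from the CRI config in the log
CNI_BIN_DIR = "/opt/cni/bin"      # binDirs from the same config

def cni_ready() -> bool:
    confs = []
    for pattern in ("*.conf", "*.conflist", "*.json"):
        confs.extend(glob.glob(os.path.join(CNI_CONF_DIR, pattern)))
    has_bins = os.path.isdir(CNI_BIN_DIR) and bool(os.listdir(CNI_BIN_DIR))
    print(f"configs: {confs or 'none'}; plugin binaries present: {has_bins}")
    return bool(confs) and has_bins

if __name__ == "__main__":
    cni_ready()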
Jan 20 23:53:35.909506 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 23:53:35.934607 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5671 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 20 23:53:36.000520 polkitd[2123]: Started polkitd version 126 Jan 20 23:53:36.028942 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5671 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 20 23:53:36.037070 polkitd[2123]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 23:53:36.038258 polkitd[2123]: Loading rules from directory /run/polkit-1/rules.d Jan 20 23:53:36.038354 polkitd[2123]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 23:53:36.040583 polkitd[2123]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 20 23:53:36.040658 polkitd[2123]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 23:53:36.040745 polkitd[2123]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 23:53:36.045874 polkitd[2123]: Finished loading, compiling and executing 2 rules Jan 20 23:53:36.046516 systemd[1]: Started polkit.service - Authorization Manager. Jan 20 23:53:36.053080 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 23:53:36.055415 polkitd[2123]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 23:53:36.097723 systemd-hostnamed[2003]: Hostname set to (transient) Jan 20 23:53:36.098273 systemd-resolved[1537]: System hostname changed to 'ip-172-31-29-43'. Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.119712595Z" level=info msg="Start subscribing containerd event" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.119877571Z" level=info msg="Start recovering state" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.120159175Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.120261991Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121697575Z" level=info msg="Start event monitor" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121765195Z" level=info msg="Start cni network conf syncer for default" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121803271Z" level=info msg="Start streaming server" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121825171Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121878031Z" level=info msg="runtime interface starting up..." Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121896331Z" level=info msg="starting plugins..." Jan 20 23:53:36.122568 containerd[1967]: time="2026-01-20T23:53:36.121952623Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 23:53:36.122931 systemd[1]: Started containerd.service - containerd container runtime. 
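Note: containerd is now serving on the endpoints it logged ("/run/containerd/containerd.sock" and its .ttrpc counterpart) and systemd has marked containerd.service started. A quick liveness check without any client library, assuming only the socket path from the log, is to confirm the path exists and is a unix socket:

# Sketch: verify the containerd socket reported in the "serving..." lines exists.
import os
import stat

SOCK = "/run/containerd/containerd.sock"  # address from the log above

try:
    mode = os.stat(SOCK).st_mode
    print(f"{SOCK}: is_socket={stat.S_ISSOCK(mode)}")
except FileNotFoundError:
    print(f"{SOCK}: not present (containerd not running, or a non-default address)")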
Jan 20 23:53:36.127876 containerd[1967]: time="2026-01-20T23:53:36.127835683Z" level=info msg="containerd successfully booted in 0.628912s" Jan 20 23:53:36.128304 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5671 INFO [amazon-ssm-agent] Starting Core Agent Jan 20 23:53:36.228473 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5671 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 20 23:53:36.291056 amazon-ssm-agent[2026]: 2026/01/20 23:53:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:36.291056 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 23:53:36.291228 amazon-ssm-agent[2026]: 2026/01/20 23:53:36 processing appconfig overrides Jan 20 23:53:36.317314 tar[1949]: linux-arm64/README.md Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5671 INFO [Registrar] Starting registrar module Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5798 INFO [EC2Identity] Checking disk for registration info Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5799 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:35.5799 INFO [EC2Identity] Generating registration keypair Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2449 INFO [EC2Identity] Checking write access before registering Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2457 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2904 INFO [EC2Identity] EC2 registration was successful. Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2905 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2906 INFO [CredentialRefresher] credentialRefresher has started Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.2906 INFO [CredentialRefresher] Starting credentials refresher loop Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.3197 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 20 23:53:36.320399 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.3200 INFO [CredentialRefresher] Credentials ready Jan 20 23:53:36.329176 amazon-ssm-agent[2026]: 2026-01-20 23:53:36.3203 INFO [CredentialRefresher] Next credential rotation will be in 29.9999908788 minutes Jan 20 23:53:36.339749 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 23:53:37.349466 amazon-ssm-agent[2026]: 2026-01-20 23:53:37.3490 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 20 23:53:37.449718 amazon-ssm-agent[2026]: 2026-01-20 23:53:37.3525 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2190) started Jan 20 23:53:37.550096 amazon-ssm-agent[2026]: 2026-01-20 23:53:37.3526 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 20 23:53:39.219766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:53:39.223464 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 23:53:39.230648 systemd[1]: Startup finished in 4.049s (kernel) + 11.872s (initrd) + 14.593s (userspace) = 30.516s. 
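Note on the "Startup finished" line: it splits total boot time into kernel, initrd and userspace spans. The components are printed rounded to milliseconds, so their sum (4.049 + 11.872 + 14.593 = 30.514) may differ by a few milliseconds from the reported total (30.516s), which systemd derives from the raw timestamps. A small illustrative parser for that line format:

# Sketch: parse a systemd "Startup finished" line and sum the printed spans.
import re

line = ("Startup finished in 4.049s (kernel) + 11.872s (initrd) "
        "+ 14.593s (userspace) = 30.516s.")

spans = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))
# Rounding of the displayed components explains the small gap to the total.
print(spans, "sum =", round(sum(spans.values()), 3), "reported total =", total)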
Jan 20 23:53:39.246253 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:53:40.990683 kubelet[2206]: E0120 23:53:40.990625 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:53:40.995775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:53:40.996454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:53:40.997641 systemd[1]: kubelet.service: Consumed 1.352s CPU time, 249.9M memory peak. Jan 20 23:53:41.853862 systemd-resolved[1537]: Clock change detected. Flushing caches. Jan 20 23:53:43.376333 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 23:53:43.379260 systemd[1]: Started sshd@0-172.31.29.43:22-68.220.241.50:36438.service - OpenSSH per-connection server daemon (68.220.241.50:36438). Jan 20 23:53:44.017262 sshd[2218]: Accepted publickey for core from 68.220.241.50 port 36438 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:53:44.021147 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:44.033983 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 23:53:44.036265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 23:53:44.050177 systemd-logind[1937]: New session 1 of user core. Jan 20 23:53:44.073446 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 23:53:44.079446 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 23:53:44.104567 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:44.110442 systemd-logind[1937]: New session 2 of user core. Jan 20 23:53:44.399336 systemd[2224]: Queued start job for default target default.target. Jan 20 23:53:44.411039 systemd[2224]: Created slice app.slice - User Application Slice. Jan 20 23:53:44.411292 systemd[2224]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 23:53:44.411455 systemd[2224]: Reached target paths.target - Paths. Jan 20 23:53:44.411549 systemd[2224]: Reached target timers.target - Timers. Jan 20 23:53:44.414305 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 23:53:44.418079 systemd[2224]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 23:53:44.447116 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 23:53:44.447307 systemd[2224]: Reached target sockets.target - Sockets. Jan 20 23:53:44.450549 systemd[2224]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 23:53:44.452782 systemd[2224]: Reached target basic.target - Basic System. Jan 20 23:53:44.452969 systemd[2224]: Reached target default.target - Main User Target. Jan 20 23:53:44.453031 systemd[2224]: Startup finished in 331ms. Jan 20 23:53:44.453446 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 23:53:44.465064 systemd[1]: Started session-1.scope - Session 1 of User core. 
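Note on the kubelet failure above: exiting with status=1 because /var/lib/kubelet/config.yaml is missing is the expected state on a node where kubeadm has not yet run. systemd will keep scheduling restarts (the "restart counter is at 1/2/3" entries later) until kubeadm writes that file, after which the kubelet stays up. A trivial pre-flight check, assuming only the path named in the error message:

# Sketch: check the precondition the kubelet error message points at.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error in the log

if KUBELET_CONFIG.is_file():
    print("kubelet config present; the service should stay up after the next restart")
else:
    print("kubelet config missing; expected until kubeadm init/join has run")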
Jan 20 23:53:44.729126 systemd[1]: Started sshd@1-172.31.29.43:22-68.220.241.50:36446.service - OpenSSH per-connection server daemon (68.220.241.50:36446). Jan 20 23:53:45.219863 sshd[2238]: Accepted publickey for core from 68.220.241.50 port 36446 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:53:45.222061 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:45.230851 systemd-logind[1937]: New session 3 of user core. Jan 20 23:53:45.243036 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 23:53:45.475868 sshd[2242]: Connection closed by 68.220.241.50 port 36446 Jan 20 23:53:45.477122 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Jan 20 23:53:45.483641 systemd[1]: sshd@1-172.31.29.43:22-68.220.241.50:36446.service: Deactivated successfully. Jan 20 23:53:45.487103 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 23:53:45.492371 systemd-logind[1937]: Session 3 logged out. Waiting for processes to exit. Jan 20 23:53:45.494424 systemd-logind[1937]: Removed session 3. Jan 20 23:53:45.564685 systemd[1]: Started sshd@2-172.31.29.43:22-68.220.241.50:36448.service - OpenSSH per-connection server daemon (68.220.241.50:36448). Jan 20 23:53:46.037420 sshd[2248]: Accepted publickey for core from 68.220.241.50 port 36448 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:53:46.040021 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:46.047974 systemd-logind[1937]: New session 4 of user core. Jan 20 23:53:46.055991 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 23:53:46.272134 sshd[2252]: Connection closed by 68.220.241.50 port 36448 Jan 20 23:53:46.272965 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Jan 20 23:53:46.282352 systemd-logind[1937]: Session 4 logged out. Waiting for processes to exit. Jan 20 23:53:46.283599 systemd[1]: sshd@2-172.31.29.43:22-68.220.241.50:36448.service: Deactivated successfully. Jan 20 23:53:46.289768 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 23:53:46.294138 systemd-logind[1937]: Removed session 4. Jan 20 23:53:46.378185 systemd[1]: Started sshd@3-172.31.29.43:22-68.220.241.50:36454.service - OpenSSH per-connection server daemon (68.220.241.50:36454). Jan 20 23:53:46.876769 sshd[2258]: Accepted publickey for core from 68.220.241.50 port 36454 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:53:46.878700 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:46.888775 systemd-logind[1937]: New session 5 of user core. Jan 20 23:53:46.898018 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 23:53:47.132265 sshd[2262]: Connection closed by 68.220.241.50 port 36454 Jan 20 23:53:47.133080 sshd-session[2258]: pam_unix(sshd:session): session closed for user core Jan 20 23:53:47.142202 systemd-logind[1937]: Session 5 logged out. Waiting for processes to exit. Jan 20 23:53:47.142836 systemd[1]: sshd@3-172.31.29.43:22-68.220.241.50:36454.service: Deactivated successfully. Jan 20 23:53:47.145977 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 23:53:47.149610 systemd-logind[1937]: Removed session 5. Jan 20 23:53:47.215810 systemd[1]: Started sshd@4-172.31.29.43:22-68.220.241.50:36456.service - OpenSSH per-connection server daemon (68.220.241.50:36456). 
Jan 20 23:53:47.668824 sshd[2268]: Accepted publickey for core from 68.220.241.50 port 36456 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:53:47.671273 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:53:47.679308 systemd-logind[1937]: New session 6 of user core. Jan 20 23:53:47.693990 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 23:53:47.846401 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 23:53:47.847592 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:53:48.983258 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 23:53:49.005161 (dockerd)[2291]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 23:53:50.068141 dockerd[2291]: time="2026-01-20T23:53:50.068047528Z" level=info msg="Starting up" Jan 20 23:53:50.073497 dockerd[2291]: time="2026-01-20T23:53:50.073428652Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 23:53:50.093197 dockerd[2291]: time="2026-01-20T23:53:50.093076024Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 23:53:50.133117 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1136449247-merged.mount: Deactivated successfully. Jan 20 23:53:50.186194 dockerd[2291]: time="2026-01-20T23:53:50.186007949Z" level=info msg="Loading containers: start." Jan 20 23:53:50.200768 kernel: Initializing XFRM netlink socket Jan 20 23:53:50.666914 (udev-worker)[2312]: Network interface NamePolicy= disabled on kernel command line. Jan 20 23:53:50.745627 systemd-networkd[1780]: docker0: Link UP Jan 20 23:53:50.756763 dockerd[2291]: time="2026-01-20T23:53:50.756612452Z" level=info msg="Loading containers: done." Jan 20 23:53:50.790785 dockerd[2291]: time="2026-01-20T23:53:50.790578692Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 23:53:50.790785 dockerd[2291]: time="2026-01-20T23:53:50.790750304Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 23:53:50.791100 dockerd[2291]: time="2026-01-20T23:53:50.791049872Z" level=info msg="Initializing buildkit" Jan 20 23:53:50.841453 dockerd[2291]: time="2026-01-20T23:53:50.841350332Z" level=info msg="Completed buildkit initialization" Jan 20 23:53:50.855867 dockerd[2291]: time="2026-01-20T23:53:50.855772316Z" level=info msg="Daemon has completed initialization" Jan 20 23:53:50.856147 dockerd[2291]: time="2026-01-20T23:53:50.855922196Z" level=info msg="API listen on /run/docker.sock" Jan 20 23:53:50.857215 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 23:53:51.126006 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3003570791-merged.mount: Deactivated successfully. Jan 20 23:53:51.727941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 23:53:51.731416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
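Note: dockerd above reports "API listen on /run/docker.sock", so the Engine API is reachable over that unix socket (subject to the usual root/docker-group permissions). As a minimal sketch using only the standard library and the socket path from the log, the version endpoint can be queried with a raw HTTP/1.0 request; everything beyond the path is illustrative:

# Sketch: query the Docker Engine API version over the unix socket from the log.
import socket

DOCKER_SOCK = "/run/docker.sock"  # "API listen on /run/docker.sock" above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(DOCKER_SOCK)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)

reply = b"".join(chunks).decode("utf-8", errors="replace")
print(reply.split("\r\n\r\n", 1)[-1])  # JSON body with Version, ApiVersion, etc.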
Jan 20 23:53:52.008882 containerd[1967]: time="2026-01-20T23:53:52.008655606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 23:53:52.148943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:53:52.164227 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:53:52.244273 kubelet[2508]: E0120 23:53:52.244201 2508 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:53:52.251649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:53:52.252031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:53:52.252873 systemd[1]: kubelet.service: Consumed 329ms CPU time, 106.9M memory peak. Jan 20 23:53:52.735820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795515514.mount: Deactivated successfully. Jan 20 23:53:54.061391 containerd[1967]: time="2026-01-20T23:53:54.061294460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:54.063779 containerd[1967]: time="2026-01-20T23:53:54.063339344Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974973" Jan 20 23:53:54.066093 containerd[1967]: time="2026-01-20T23:53:54.066011204Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:54.071538 containerd[1967]: time="2026-01-20T23:53:54.071448116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:54.074909 containerd[1967]: time="2026-01-20T23:53:54.074829896Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.06611459s" Jan 20 23:53:54.074909 containerd[1967]: time="2026-01-20T23:53:54.074900012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 20 23:53:54.075659 containerd[1967]: time="2026-01-20T23:53:54.075597512Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 23:53:55.547753 containerd[1967]: time="2026-01-20T23:53:55.547651788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:55.551320 containerd[1967]: time="2026-01-20T23:53:55.551264124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 20 23:53:55.553507 containerd[1967]: time="2026-01-20T23:53:55.553435092Z" level=info 
msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:55.560886 containerd[1967]: time="2026-01-20T23:53:55.560806992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:55.563776 containerd[1967]: time="2026-01-20T23:53:55.562659744Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.487004236s" Jan 20 23:53:55.563776 containerd[1967]: time="2026-01-20T23:53:55.562733868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 20 23:53:55.564695 containerd[1967]: time="2026-01-20T23:53:55.564625104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 23:53:56.713763 containerd[1967]: time="2026-01-20T23:53:56.713663449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:56.716261 containerd[1967]: time="2026-01-20T23:53:56.715829905Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Jan 20 23:53:56.718434 containerd[1967]: time="2026-01-20T23:53:56.718380673Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:56.723901 containerd[1967]: time="2026-01-20T23:53:56.723823585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:56.725889 containerd[1967]: time="2026-01-20T23:53:56.725833501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.161143825s" Jan 20 23:53:56.726000 containerd[1967]: time="2026-01-20T23:53:56.725887573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 20 23:53:56.726646 containerd[1967]: time="2026-01-20T23:53:56.726586765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 23:53:58.004112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049863931.mount: Deactivated successfully. 
Jan 20 23:53:58.397224 containerd[1967]: time="2026-01-20T23:53:58.396030566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:58.398607 containerd[1967]: time="2026-01-20T23:53:58.398539370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=12960247" Jan 20 23:53:58.400776 containerd[1967]: time="2026-01-20T23:53:58.400707290Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:58.406919 containerd[1967]: time="2026-01-20T23:53:58.406866350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:58.407741 containerd[1967]: time="2026-01-20T23:53:58.407678786Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.681033725s" Jan 20 23:53:58.407903 containerd[1967]: time="2026-01-20T23:53:58.407873330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 20 23:53:58.408894 containerd[1967]: time="2026-01-20T23:53:58.408857186Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 23:53:59.003625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256414730.mount: Deactivated successfully. 
Jan 20 23:54:00.155706 containerd[1967]: time="2026-01-20T23:54:00.155623694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.159480 containerd[1967]: time="2026-01-20T23:54:00.158972570Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575910" Jan 20 23:54:00.161735 containerd[1967]: time="2026-01-20T23:54:00.161656070Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.169312 containerd[1967]: time="2026-01-20T23:54:00.169263230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.171831 containerd[1967]: time="2026-01-20T23:54:00.171784490Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.762693268s" Jan 20 23:54:00.171998 containerd[1967]: time="2026-01-20T23:54:00.171970442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 20 23:54:00.173269 containerd[1967]: time="2026-01-20T23:54:00.172971686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 23:54:00.660009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232571671.mount: Deactivated successfully. 
Jan 20 23:54:00.674070 containerd[1967]: time="2026-01-20T23:54:00.673966769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.676334 containerd[1967]: time="2026-01-20T23:54:00.676258937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 20 23:54:00.678711 containerd[1967]: time="2026-01-20T23:54:00.678647165Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.683752 containerd[1967]: time="2026-01-20T23:54:00.683313473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:00.685016 containerd[1967]: time="2026-01-20T23:54:00.684966353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 511.945359ms" Jan 20 23:54:00.685195 containerd[1967]: time="2026-01-20T23:54:00.685158209Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 20 23:54:00.686495 containerd[1967]: time="2026-01-20T23:54:00.686436965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 23:54:01.322789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230251844.mount: Deactivated successfully. Jan 20 23:54:02.502618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 23:54:02.506383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:02.888634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:02.907142 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:54:02.991436 kubelet[2701]: E0120 23:54:02.991373 2701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:54:02.996232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:54:02.996568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:54:02.997582 systemd[1]: kubelet.service: Consumed 322ms CPU time, 104.8M memory peak. 
Jan 20 23:54:05.141899 containerd[1967]: time="2026-01-20T23:54:05.141816091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:05.144198 containerd[1967]: time="2026-01-20T23:54:05.144105127Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Jan 20 23:54:05.145891 containerd[1967]: time="2026-01-20T23:54:05.145820731Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:05.151776 containerd[1967]: time="2026-01-20T23:54:05.151057351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:05.153968 containerd[1967]: time="2026-01-20T23:54:05.153401395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.466794198s" Jan 20 23:54:05.153968 containerd[1967]: time="2026-01-20T23:54:05.153459979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 20 23:54:06.617160 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 20 23:54:13.179685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 23:54:13.184861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:13.552111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:13.570440 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:54:13.651305 kubelet[2744]: E0120 23:54:13.651236 2744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:54:13.655674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:54:13.657047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:54:13.657649 systemd[1]: kubelet.service: Consumed 301ms CPU time, 106.5M memory peak. Jan 20 23:54:16.280053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:16.281161 systemd[1]: kubelet.service: Consumed 301ms CPU time, 106.5M memory peak. Jan 20 23:54:16.285233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:16.342434 systemd[1]: Reload requested from client PID 2758 ('systemctl') (unit session-6.scope)... Jan 20 23:54:16.342467 systemd[1]: Reloading... Jan 20 23:54:16.589862 zram_generator::config[2808]: No configuration found. Jan 20 23:54:17.070249 systemd[1]: Reloading finished in 727 ms. 
Jan 20 23:54:17.175153 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 23:54:17.175354 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 23:54:17.176119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:17.176224 systemd[1]: kubelet.service: Consumed 229ms CPU time, 95.1M memory peak. Jan 20 23:54:17.179499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:17.630811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:17.647199 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:54:17.722744 kubelet[2869]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 23:54:17.722744 kubelet[2869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:54:17.723241 kubelet[2869]: I0120 23:54:17.722884 2869 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:54:18.545628 kubelet[2869]: I0120 23:54:18.545558 2869 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 23:54:18.545628 kubelet[2869]: I0120 23:54:18.545606 2869 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:54:18.545867 kubelet[2869]: I0120 23:54:18.545659 2869 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 23:54:18.545867 kubelet[2869]: I0120 23:54:18.545675 2869 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 23:54:18.546148 kubelet[2869]: I0120 23:54:18.546113 2869 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 23:54:18.558290 kubelet[2869]: E0120 23:54:18.558237 2869 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 23:54:18.561028 kubelet[2869]: I0120 23:54:18.560966 2869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:54:18.568853 kubelet[2869]: I0120 23:54:18.568820 2869 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:54:18.574667 kubelet[2869]: I0120 23:54:18.574538 2869 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 23:54:18.575757 kubelet[2869]: I0120 23:54:18.575195 2869 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:54:18.575757 kubelet[2869]: I0120 23:54:18.575239 2869 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:54:18.575757 kubelet[2869]: I0120 23:54:18.575521 2869 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:54:18.575757 kubelet[2869]: I0120 23:54:18.575538 2869 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 23:54:18.576131 kubelet[2869]: I0120 23:54:18.575679 2869 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 23:54:18.583014 kubelet[2869]: I0120 23:54:18.582982 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:54:18.585744 kubelet[2869]: I0120 23:54:18.585690 2869 kubelet.go:475] "Attempting to sync node with API server" Jan 20 23:54:18.585922 kubelet[2869]: I0120 23:54:18.585901 2869 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:54:18.586044 kubelet[2869]: I0120 23:54:18.586026 2869 kubelet.go:387] "Adding apiserver pod source" Jan 20 23:54:18.586358 kubelet[2869]: I0120 23:54:18.586146 2869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:54:18.588348 kubelet[2869]: E0120 23:54:18.588295 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-43&limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 23:54:18.589231 kubelet[2869]: E0120 23:54:18.589156 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 23:54:18.589526 kubelet[2869]: I0120 23:54:18.589500 2869 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:54:18.590710 kubelet[2869]: I0120 23:54:18.590679 2869 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 23:54:18.590710 kubelet[2869]: I0120 23:54:18.590799 2869 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 23:54:18.590710 kubelet[2869]: W0120 23:54:18.590870 2869 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 23:54:18.597286 kubelet[2869]: I0120 23:54:18.597250 2869 server.go:1262] "Started kubelet" Jan 20 23:54:18.604355 kubelet[2869]: I0120 23:54:18.604318 2869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:54:18.608747 kubelet[2869]: E0120 23:54:18.605489 2869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.43:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-43.188c959dcaf4b8da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-43,UID:ip-172-31-29-43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-43,},FirstTimestamp:2026-01-20 23:54:18.597202138 +0000 UTC m=+0.943320066,LastTimestamp:2026-01-20 23:54:18.597202138 +0000 UTC m=+0.943320066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-43,}" Jan 20 23:54:18.612343 kubelet[2869]: I0120 23:54:18.611920 2869 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:54:18.615077 kubelet[2869]: I0120 23:54:18.615042 2869 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 23:54:18.615635 kubelet[2869]: I0120 23:54:18.615569 2869 server.go:310] "Adding debug handlers to kubelet server" Jan 20 23:54:18.616971 kubelet[2869]: E0120 23:54:18.616934 2869 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-43\" not found" Jan 20 23:54:18.621172 kubelet[2869]: I0120 23:54:18.621070 2869 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 23:54:18.621312 kubelet[2869]: I0120 23:54:18.621190 2869 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 23:54:18.621557 kubelet[2869]: I0120 23:54:18.621525 2869 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:54:18.622046 kubelet[2869]: I0120 23:54:18.622010 2869 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:54:18.625083 kubelet[2869]: I0120 23:54:18.624998 2869 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 23:54:18.625222 kubelet[2869]: I0120 
23:54:18.625125 2869 reconciler.go:29] "Reconciler: start to sync state" Jan 20 23:54:18.626798 kubelet[2869]: E0120 23:54:18.626701 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 23:54:18.627051 kubelet[2869]: E0120 23:54:18.626955 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": dial tcp 172.31.29.43:6443: connect: connection refused" interval="200ms" Jan 20 23:54:18.628488 kubelet[2869]: I0120 23:54:18.627885 2869 factory.go:223] Registration of the systemd container factory successfully Jan 20 23:54:18.628488 kubelet[2869]: I0120 23:54:18.628131 2869 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:54:18.630284 kubelet[2869]: E0120 23:54:18.630215 2869 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 23:54:18.630659 kubelet[2869]: I0120 23:54:18.630624 2869 factory.go:223] Registration of the containerd container factory successfully Jan 20 23:54:18.658545 kubelet[2869]: I0120 23:54:18.658498 2869 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:54:18.658545 kubelet[2869]: I0120 23:54:18.658532 2869 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:54:18.659180 kubelet[2869]: I0120 23:54:18.658565 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:54:18.663480 kubelet[2869]: I0120 23:54:18.663423 2869 policy_none.go:49] "None policy: Start" Jan 20 23:54:18.663480 kubelet[2869]: I0120 23:54:18.663466 2869 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 23:54:18.663679 kubelet[2869]: I0120 23:54:18.663491 2869 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 23:54:18.667891 kubelet[2869]: I0120 23:54:18.667849 2869 policy_none.go:47] "Start" Jan 20 23:54:18.677685 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 23:54:18.683897 kubelet[2869]: I0120 23:54:18.683836 2869 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 23:54:18.690755 kubelet[2869]: I0120 23:54:18.689403 2869 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 23:54:18.690755 kubelet[2869]: I0120 23:54:18.689450 2869 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 23:54:18.690755 kubelet[2869]: I0120 23:54:18.689487 2869 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 23:54:18.690755 kubelet[2869]: E0120 23:54:18.689550 2869 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:54:18.694579 kubelet[2869]: E0120 23:54:18.694511 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 23:54:18.705168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 23:54:18.713530 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 23:54:18.717464 kubelet[2869]: E0120 23:54:18.717420 2869 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-43\" not found" Jan 20 23:54:18.735055 kubelet[2869]: E0120 23:54:18.734779 2869 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 23:54:18.735615 kubelet[2869]: I0120 23:54:18.735080 2869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:54:18.735615 kubelet[2869]: I0120 23:54:18.735100 2869 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:54:18.738843 kubelet[2869]: I0120 23:54:18.738608 2869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:54:18.740125 kubelet[2869]: E0120 23:54:18.740028 2869 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 23:54:18.740125 kubelet[2869]: E0120 23:54:18.740090 2869 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-43\" not found" Jan 20 23:54:18.813412 systemd[1]: Created slice kubepods-burstable-podcbb6e54197e9230f3c5d7d8c70812cd1.slice - libcontainer container kubepods-burstable-podcbb6e54197e9230f3c5d7d8c70812cd1.slice. 
Jan 20 23:54:18.826450 kubelet[2869]: I0120 23:54:18.826407 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-ca-certs\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:18.826702 kubelet[2869]: I0120 23:54:18.826669 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:18.827288 kubelet[2869]: I0120 23:54:18.827248 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:18.827465 kubelet[2869]: I0120 23:54:18.827439 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:18.827601 kubelet[2869]: I0120 23:54:18.827578 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:18.827771 kubelet[2869]: I0120 23:54:18.827745 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:18.827928 kubelet[2869]: I0120 23:54:18.827900 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:18.828069 kubelet[2869]: I0120 23:54:18.828043 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26f72f5014e6c4c079156a41f7073671-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-43\" (UID: \"26f72f5014e6c4c079156a41f7073671\") " pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:18.828231 kubelet[2869]: I0120 23:54:18.828172 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:18.828596 kubelet[2869]: E0120 23:54:18.828531 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": dial tcp 172.31.29.43:6443: connect: connection refused" interval="400ms" Jan 20 23:54:18.829366 kubelet[2869]: E0120 23:54:18.829035 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:18.835226 systemd[1]: Created slice kubepods-burstable-pod2053c755696a10871a1d523518fb5db6.slice - libcontainer container kubepods-burstable-pod2053c755696a10871a1d523518fb5db6.slice. Jan 20 23:54:18.842049 kubelet[2869]: I0120 23:54:18.842013 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-43" Jan 20 23:54:18.844652 kubelet[2869]: E0120 23:54:18.843525 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.43:6443/api/v1/nodes\": dial tcp 172.31.29.43:6443: connect: connection refused" node="ip-172-31-29-43" Jan 20 23:54:18.844652 kubelet[2869]: E0120 23:54:18.844098 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:18.848568 systemd[1]: Created slice kubepods-burstable-pod26f72f5014e6c4c079156a41f7073671.slice - libcontainer container kubepods-burstable-pod26f72f5014e6c4c079156a41f7073671.slice. Jan 20 23:54:18.852576 kubelet[2869]: E0120 23:54:18.852532 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:19.046685 kubelet[2869]: I0120 23:54:19.046624 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-43" Jan 20 23:54:19.047143 kubelet[2869]: E0120 23:54:19.047093 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.43:6443/api/v1/nodes\": dial tcp 172.31.29.43:6443: connect: connection refused" node="ip-172-31-29-43" Jan 20 23:54:19.136352 containerd[1967]: time="2026-01-20T23:54:19.136200129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-43,Uid:cbb6e54197e9230f3c5d7d8c70812cd1,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:19.149987 containerd[1967]: time="2026-01-20T23:54:19.149912925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-43,Uid:2053c755696a10871a1d523518fb5db6,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:19.158271 containerd[1967]: time="2026-01-20T23:54:19.158166261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-43,Uid:26f72f5014e6c4c079156a41f7073671,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:19.229452 kubelet[2869]: E0120 23:54:19.229385 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": dial tcp 172.31.29.43:6443: connect: connection refused" interval="800ms" Jan 20 23:54:19.450411 kubelet[2869]: I0120 23:54:19.450356 2869 kubelet_node_status.go:75] "Attempting to 
register node" node="ip-172-31-29-43" Jan 20 23:54:19.450906 kubelet[2869]: E0120 23:54:19.450862 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.43:6443/api/v1/nodes\": dial tcp 172.31.29.43:6443: connect: connection refused" node="ip-172-31-29-43" Jan 20 23:54:19.495879 kubelet[2869]: E0120 23:54:19.495802 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 23:54:19.511313 kubelet[2869]: E0120 23:54:19.511256 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 23:54:19.636191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445409293.mount: Deactivated successfully. Jan 20 23:54:19.654681 containerd[1967]: time="2026-01-20T23:54:19.654605003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:54:19.663527 containerd[1967]: time="2026-01-20T23:54:19.663441011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 23:54:19.665531 containerd[1967]: time="2026-01-20T23:54:19.665465447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:54:19.668750 containerd[1967]: time="2026-01-20T23:54:19.668124587Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:54:19.671647 containerd[1967]: time="2026-01-20T23:54:19.671604575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:54:19.673447 containerd[1967]: time="2026-01-20T23:54:19.673393067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 23:54:19.675499 containerd[1967]: time="2026-01-20T23:54:19.675428459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 23:54:19.677904 containerd[1967]: time="2026-01-20T23:54:19.677848139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:54:19.679378 containerd[1967]: time="2026-01-20T23:54:19.679336331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.402842ms" Jan 20 23:54:19.688808 containerd[1967]: time="2026-01-20T23:54:19.688744799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.40685ms" Jan 20 23:54:19.689198 containerd[1967]: time="2026-01-20T23:54:19.689138843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.190262ms" Jan 20 23:54:19.742841 containerd[1967]: time="2026-01-20T23:54:19.738427584Z" level=info msg="connecting to shim 1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c" address="unix:///run/containerd/s/03d11f2a08503d2cd2e043c629bf27ab633ab46c7e3902b07eea6fd3914791de" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:19.772460 containerd[1967]: time="2026-01-20T23:54:19.772388448Z" level=info msg="connecting to shim 01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987" address="unix:///run/containerd/s/754d361ef18c2ff5829c3bdb4341715c1e179b8c50c50f9bb3ba38c4ad2bfcdb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:19.781153 containerd[1967]: time="2026-01-20T23:54:19.781082100Z" level=info msg="connecting to shim b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa" address="unix:///run/containerd/s/f3c48fe03ebbfd56a727b2203a3e516334df76a1cbb832f2cf3a8a63cb987a80" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:19.814131 systemd[1]: Started cri-containerd-1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c.scope - libcontainer container 1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c. Jan 20 23:54:19.862062 systemd[1]: Started cri-containerd-01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987.scope - libcontainer container 01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987. Jan 20 23:54:19.872355 systemd[1]: Started cri-containerd-b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa.scope - libcontainer container b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa. 
Jan 20 23:54:19.894961 kubelet[2869]: E0120 23:54:19.894685 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 23:54:19.978542 containerd[1967]: time="2026-01-20T23:54:19.978492337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-43,Uid:cbb6e54197e9230f3c5d7d8c70812cd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c\"" Jan 20 23:54:19.980072 kubelet[2869]: E0120 23:54:19.980024 2869 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-43&limit=500&resourceVersion=0\": dial tcp 172.31.29.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 23:54:19.998647 containerd[1967]: time="2026-01-20T23:54:19.997830673Z" level=info msg="CreateContainer within sandbox \"1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 23:54:20.029131 containerd[1967]: time="2026-01-20T23:54:20.029077365Z" level=info msg="Container 331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:20.030187 kubelet[2869]: E0120 23:54:20.030105 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": dial tcp 172.31.29.43:6443: connect: connection refused" interval="1.6s" Jan 20 23:54:20.034384 containerd[1967]: time="2026-01-20T23:54:20.034334361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-43,Uid:2053c755696a10871a1d523518fb5db6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa\"" Jan 20 23:54:20.045926 containerd[1967]: time="2026-01-20T23:54:20.045257397Z" level=info msg="CreateContainer within sandbox \"b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 23:54:20.064323 containerd[1967]: time="2026-01-20T23:54:20.064264641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-43,Uid:26f72f5014e6c4c079156a41f7073671,Namespace:kube-system,Attempt:0,} returns sandbox id \"01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987\"" Jan 20 23:54:20.067646 containerd[1967]: time="2026-01-20T23:54:20.067581345Z" level=info msg="CreateContainer within sandbox \"1e99a2bfb4389eddfe8cf4296a55c98523d4eda27105c0bdab01a689cc4c5d8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c\"" Jan 20 23:54:20.069017 containerd[1967]: time="2026-01-20T23:54:20.068749437Z" level=info msg="StartContainer for \"331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c\"" Jan 20 23:54:20.071629 containerd[1967]: time="2026-01-20T23:54:20.071577573Z" level=info msg="connecting to shim 
331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c" address="unix:///run/containerd/s/03d11f2a08503d2cd2e043c629bf27ab633ab46c7e3902b07eea6fd3914791de" protocol=ttrpc version=3 Jan 20 23:54:20.075767 containerd[1967]: time="2026-01-20T23:54:20.075650361Z" level=info msg="Container 398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:20.076748 containerd[1967]: time="2026-01-20T23:54:20.076170333Z" level=info msg="CreateContainer within sandbox \"01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 23:54:20.107423 containerd[1967]: time="2026-01-20T23:54:20.107357278Z" level=info msg="Container 0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:20.110801 containerd[1967]: time="2026-01-20T23:54:20.109579738Z" level=info msg="CreateContainer within sandbox \"b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656\"" Jan 20 23:54:20.119096 systemd[1]: Started cri-containerd-331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c.scope - libcontainer container 331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c. Jan 20 23:54:20.122909 containerd[1967]: time="2026-01-20T23:54:20.122834806Z" level=info msg="StartContainer for \"398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656\"" Jan 20 23:54:20.129416 containerd[1967]: time="2026-01-20T23:54:20.128951518Z" level=info msg="connecting to shim 398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656" address="unix:///run/containerd/s/f3c48fe03ebbfd56a727b2203a3e516334df76a1cbb832f2cf3a8a63cb987a80" protocol=ttrpc version=3 Jan 20 23:54:20.139540 containerd[1967]: time="2026-01-20T23:54:20.139482874Z" level=info msg="CreateContainer within sandbox \"01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac\"" Jan 20 23:54:20.144545 containerd[1967]: time="2026-01-20T23:54:20.144462562Z" level=info msg="StartContainer for \"0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac\"" Jan 20 23:54:20.150873 containerd[1967]: time="2026-01-20T23:54:20.150703474Z" level=info msg="connecting to shim 0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac" address="unix:///run/containerd/s/754d361ef18c2ff5829c3bdb4341715c1e179b8c50c50f9bb3ba38c4ad2bfcdb" protocol=ttrpc version=3 Jan 20 23:54:20.177404 systemd[1]: Started cri-containerd-398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656.scope - libcontainer container 398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656. Jan 20 23:54:20.196508 update_engine[1938]: I20260120 23:54:20.195761 1938 update_attempter.cc:509] Updating boot flags... Jan 20 23:54:20.207120 systemd[1]: Started cri-containerd-0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac.scope - libcontainer container 0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac. 
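[Editor's note] The lease controller's retries above back off as the API server at 172.31.29.43:6443 keeps refusing connections: interval="400ms", then "800ms", then "1.6s". A minimal sketch of that doubling-with-a-cap retry pattern — an illustration of the behaviour visible in these entries, not the kubelet's actual implementation; the 7s cap below is an arbitrary choice:

```python
import time

def backoff_intervals(initial=0.4, factor=2.0, cap=7.0):
    """Yield retry delays that double each attempt, capped, like the
    400ms -> 800ms -> 1.6s progression reported by the lease controller above."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, cap)

def retry(op, attempts=5):
    """Run op() until it succeeds or attempts are exhausted, sleeping between tries."""
    for attempt, delay in zip(range(attempts), backoff_intervals()):
        try:
            return op()
        except ConnectionRefusedError as err:
            print(f"attempt {attempt + 1} failed ({err}); retrying in {delay}s")
            time.sleep(delay)
    return op()  # final attempt; let any error propagate
```

Each failure doubles the wait, so a briefly unreachable API server is retried quickly while a longer outage does not turn into a connection storm.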
Jan 20 23:54:20.257128 kubelet[2869]: I0120 23:54:20.256425 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-43" Jan 20 23:54:20.257128 kubelet[2869]: E0120 23:54:20.256949 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.43:6443/api/v1/nodes\": dial tcp 172.31.29.43:6443: connect: connection refused" node="ip-172-31-29-43" Jan 20 23:54:20.315235 containerd[1967]: time="2026-01-20T23:54:20.315153899Z" level=info msg="StartContainer for \"331e96d9ab50f3897c515ab58277d8384a69abafa2c2122432747d1dff99726c\" returns successfully" Jan 20 23:54:20.489555 containerd[1967]: time="2026-01-20T23:54:20.489343187Z" level=info msg="StartContainer for \"398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656\" returns successfully" Jan 20 23:54:20.492576 containerd[1967]: time="2026-01-20T23:54:20.492420215Z" level=info msg="StartContainer for \"0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac\" returns successfully" Jan 20 23:54:20.739646 kubelet[2869]: E0120 23:54:20.739302 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:20.754809 kubelet[2869]: E0120 23:54:20.752036 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:20.780543 kubelet[2869]: E0120 23:54:20.780433 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:21.776877 kubelet[2869]: E0120 23:54:21.776827 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:21.777823 kubelet[2869]: E0120 23:54:21.777455 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:21.863088 kubelet[2869]: I0120 23:54:21.863044 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-43" Jan 20 23:54:22.609748 kubelet[2869]: E0120 23:54:22.609672 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:22.890585 kubelet[2869]: E0120 23:54:22.890450 2869 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:25.127917 kubelet[2869]: E0120 23:54:25.127853 2869 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-43\" not found" node="ip-172-31-29-43" Jan 20 23:54:25.226568 kubelet[2869]: I0120 23:54:25.226501 2869 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-43" Jan 20 23:54:25.226568 kubelet[2869]: E0120 23:54:25.226565 2869 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-43\": node \"ip-172-31-29-43\" not found" Jan 20 23:54:25.281936 kubelet[2869]: E0120 23:54:25.281787 2869 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-29-43.188c959dcaf4b8da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-43,UID:ip-172-31-29-43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-43,},FirstTimestamp:2026-01-20 23:54:18.597202138 +0000 UTC m=+0.943320066,LastTimestamp:2026-01-20 23:54:18.597202138 +0000 UTC m=+0.943320066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-43,}" Jan 20 23:54:25.325682 kubelet[2869]: I0120 23:54:25.325626 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:25.381644 kubelet[2869]: E0120 23:54:25.381471 2869 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:25.381644 kubelet[2869]: I0120 23:54:25.381560 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:25.394887 kubelet[2869]: E0120 23:54:25.394825 2869 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:25.395045 kubelet[2869]: I0120 23:54:25.394899 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:25.401042 kubelet[2869]: E0120 23:54:25.400979 2869 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-43\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:25.593753 kubelet[2869]: I0120 23:54:25.591393 2869 apiserver.go:52] "Watching apiserver" Jan 20 23:54:25.625570 kubelet[2869]: I0120 23:54:25.625517 2869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 23:54:27.937517 systemd[1]: Reload requested from client PID 3335 ('systemctl') (unit session-6.scope)... Jan 20 23:54:27.937543 systemd[1]: Reloading... Jan 20 23:54:28.232856 zram_generator::config[3391]: No configuration found. Jan 20 23:54:28.754880 systemd[1]: Reloading finished in 816 ms. Jan 20 23:54:28.809616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:28.834477 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 23:54:28.835033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:28.835215 systemd[1]: kubelet.service: Consumed 1.805s CPU time, 121.5M memory peak. Jan 20 23:54:28.839446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:29.215548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:29.233288 (kubelet)[3444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:54:29.327066 kubelet[3444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 20 23:54:29.327846 kubelet[3444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:54:29.328188 kubelet[3444]: I0120 23:54:29.328133 3444 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:54:29.349198 kubelet[3444]: I0120 23:54:29.349150 3444 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 23:54:29.349388 kubelet[3444]: I0120 23:54:29.349370 3444 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:54:29.349518 kubelet[3444]: I0120 23:54:29.349501 3444 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 23:54:29.349620 kubelet[3444]: I0120 23:54:29.349598 3444 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 23:54:29.350157 kubelet[3444]: I0120 23:54:29.350133 3444 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 23:54:29.352830 kubelet[3444]: I0120 23:54:29.352793 3444 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 23:54:29.358756 kubelet[3444]: I0120 23:54:29.358498 3444 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:54:29.370343 kubelet[3444]: I0120 23:54:29.369640 3444 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:54:29.381099 kubelet[3444]: I0120 23:54:29.381044 3444 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 23:54:29.381903 kubelet[3444]: I0120 23:54:29.381415 3444 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:54:29.381903 kubelet[3444]: I0120 23:54:29.381472 3444 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:54:29.381903 kubelet[3444]: I0120 23:54:29.381768 3444 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:54:29.381903 kubelet[3444]: I0120 23:54:29.381787 3444 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 23:54:29.382242 kubelet[3444]: I0120 23:54:29.381826 3444 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 23:54:29.390642 kubelet[3444]: I0120 23:54:29.390563 3444 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:54:29.392421 kubelet[3444]: I0120 23:54:29.391980 3444 kubelet.go:475] "Attempting to sync node with API server" Jan 20 23:54:29.392421 kubelet[3444]: I0120 23:54:29.392024 3444 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:54:29.392421 kubelet[3444]: I0120 23:54:29.392094 3444 kubelet.go:387] "Adding apiserver pod source" Jan 20 23:54:29.392421 kubelet[3444]: I0120 23:54:29.392115 3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:54:29.406850 kubelet[3444]: I0120 23:54:29.406810 3444 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:54:29.407961 kubelet[3444]: I0120 23:54:29.407934 3444 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 23:54:29.408120 kubelet[3444]: I0120 23:54:29.408101 3444 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 23:54:29.414841 
kubelet[3444]: I0120 23:54:29.414524 3444 server.go:1262] "Started kubelet" Jan 20 23:54:29.415859 kubelet[3444]: I0120 23:54:29.415829 3444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:54:29.419696 kubelet[3444]: I0120 23:54:29.419613 3444 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:54:29.424701 kubelet[3444]: I0120 23:54:29.424661 3444 server.go:310] "Adding debug handlers to kubelet server" Jan 20 23:54:29.434485 kubelet[3444]: I0120 23:54:29.434452 3444 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 23:54:29.435200 kubelet[3444]: I0120 23:54:29.435113 3444 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 23:54:29.435432 kubelet[3444]: I0120 23:54:29.435215 3444 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 23:54:29.435681 kubelet[3444]: I0120 23:54:29.435608 3444 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:54:29.436036 kubelet[3444]: I0120 23:54:29.435997 3444 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:54:29.436463 kubelet[3444]: E0120 23:54:29.436435 3444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-43\" not found" Jan 20 23:54:29.437793 kubelet[3444]: I0120 23:54:29.437710 3444 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 23:54:29.442470 kubelet[3444]: I0120 23:54:29.442353 3444 reconciler.go:29] "Reconciler: start to sync state" Jan 20 23:54:29.455219 kubelet[3444]: I0120 23:54:29.455158 3444 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 23:54:29.461148 kubelet[3444]: I0120 23:54:29.461108 3444 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 23:54:29.461975 kubelet[3444]: I0120 23:54:29.461346 3444 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 23:54:29.461975 kubelet[3444]: I0120 23:54:29.461392 3444 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 23:54:29.461975 kubelet[3444]: E0120 23:54:29.461491 3444 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:54:29.479544 kubelet[3444]: I0120 23:54:29.478680 3444 factory.go:223] Registration of the systemd container factory successfully Jan 20 23:54:29.479544 kubelet[3444]: I0120 23:54:29.478924 3444 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:54:29.493490 kubelet[3444]: I0120 23:54:29.493197 3444 factory.go:223] Registration of the containerd container factory successfully Jan 20 23:54:29.536677 kubelet[3444]: E0120 23:54:29.536556 3444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-43\" not found" Jan 20 23:54:29.563046 kubelet[3444]: E0120 23:54:29.562958 3444 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.710978 3444 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711014 3444 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711052 3444 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711270 3444 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711293 3444 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711323 3444 policy_none.go:49] "None policy: Start" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711339 3444 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711378 3444 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711563 3444 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 23:54:29.712106 kubelet[3444]: I0120 23:54:29.711580 3444 policy_none.go:47] "Start" Jan 20 23:54:29.725787 kubelet[3444]: E0120 23:54:29.725679 3444 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 23:54:29.726170 kubelet[3444]: I0120 23:54:29.726132 3444 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:54:29.726341 kubelet[3444]: I0120 23:54:29.726168 3444 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:54:29.729074 kubelet[3444]: I0120 23:54:29.729026 3444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:54:29.745119 kubelet[3444]: E0120 23:54:29.742267 3444 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 23:54:29.764385 kubelet[3444]: I0120 23:54:29.764348 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:29.766333 kubelet[3444]: I0120 23:54:29.765306 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:29.766474 kubelet[3444]: I0120 23:54:29.765598 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.845051 kubelet[3444]: I0120 23:54:29.844994 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:29.845394 kubelet[3444]: I0120 23:54:29.845317 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.845928 kubelet[3444]: I0120 23:54:29.845768 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26f72f5014e6c4c079156a41f7073671-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-43\" (UID: \"26f72f5014e6c4c079156a41f7073671\") " pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:29.845928 kubelet[3444]: I0120 23:54:29.845825 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-ca-certs\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:29.845928 kubelet[3444]: I0120 23:54:29.845861 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbb6e54197e9230f3c5d7d8c70812cd1-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-43\" (UID: \"cbb6e54197e9230f3c5d7d8c70812cd1\") " pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:29.846472 kubelet[3444]: I0120 23:54:29.846109 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.846472 kubelet[3444]: I0120 23:54:29.846158 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.846472 kubelet[3444]: I0120 23:54:29.846195 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.846472 kubelet[3444]: I0120 23:54:29.846234 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2053c755696a10871a1d523518fb5db6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-43\" (UID: \"2053c755696a10871a1d523518fb5db6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:29.849615 kubelet[3444]: I0120 23:54:29.849258 3444 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-43" Jan 20 23:54:29.867635 kubelet[3444]: I0120 23:54:29.866620 3444 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-43" Jan 20 23:54:29.867635 kubelet[3444]: I0120 23:54:29.866808 3444 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-43" Jan 20 23:54:30.393883 kubelet[3444]: I0120 23:54:30.393799 3444 apiserver.go:52] "Watching apiserver" Jan 20 23:54:30.438662 kubelet[3444]: I0120 23:54:30.438594 3444 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 23:54:30.630865 kubelet[3444]: I0120 23:54:30.630591 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:30.633186 kubelet[3444]: I0120 23:54:30.633105 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:30.635202 kubelet[3444]: I0120 23:54:30.634533 3444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:30.645945 kubelet[3444]: E0120 23:54:30.645796 3444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-43\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-43" Jan 20 23:54:30.651749 kubelet[3444]: E0120 23:54:30.649985 3444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-43\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-43" Jan 20 23:54:30.653856 kubelet[3444]: E0120 23:54:30.653797 3444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-43\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-43" Jan 20 23:54:30.705576 kubelet[3444]: I0120 23:54:30.705472 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-43" podStartSLOduration=1.705448546 podStartE2EDuration="1.705448546s" podCreationTimestamp="2026-01-20 23:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:30.687037222 +0000 UTC m=+1.446565928" watchObservedRunningTime="2026-01-20 23:54:30.705448546 +0000 UTC m=+1.464977240" Jan 20 23:54:30.726457 kubelet[3444]: I0120 23:54:30.726335 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-43" podStartSLOduration=1.72631195 podStartE2EDuration="1.72631195s" podCreationTimestamp="2026-01-20 23:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:30.70674049 +0000 UTC m=+1.466269196" watchObservedRunningTime="2026-01-20 23:54:30.72631195 +0000 UTC m=+1.485840656" Jan 20 23:54:31.081340 kubelet[3444]: I0120 23:54:31.081080 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-43" podStartSLOduration=2.08105552 podStartE2EDuration="2.08105552s" podCreationTimestamp="2026-01-20 23:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:30.73088749 +0000 UTC m=+1.490416196" watchObservedRunningTime="2026-01-20 23:54:31.08105552 +0000 UTC m=+1.840584214" Jan 20 23:54:31.153533 sudo[2273]: pam_unix(sudo:session): session closed for user root Jan 20 23:54:31.231122 sshd[2272]: Connection closed by 68.220.241.50 port 36456 Jan 20 23:54:31.233996 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Jan 20 23:54:31.242873 systemd[1]: sshd@4-172.31.29.43:22-68.220.241.50:36456.service: Deactivated successfully. Jan 20 23:54:31.249373 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 23:54:31.251148 systemd[1]: session-6.scope: Consumed 13.162s CPU time, 223.5M memory peak. Jan 20 23:54:31.254926 systemd-logind[1937]: Session 6 logged out. Waiting for processes to exit. Jan 20 23:54:31.258494 systemd-logind[1937]: Removed session 6. Jan 20 23:54:33.154313 kubelet[3444]: I0120 23:54:33.154231 3444 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 23:54:33.155202 containerd[1967]: time="2026-01-20T23:54:33.154976830Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 23:54:33.156103 kubelet[3444]: I0120 23:54:33.155781 3444 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 23:54:33.940567 systemd[1]: Created slice kubepods-besteffort-pod43c303be_2e14_4312_b675_313b041a5014.slice - libcontainer container kubepods-besteffort-pod43c303be_2e14_4312_b675_313b041a5014.slice. 
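[Editor's note] The nodeConfig dump in the kubelet start-up above (23:54:29.381) lists the default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A rough sketch of evaluating such signals against observed node stats; the types and field names below are hypothetical, not the kubelet's own:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threshold:
    signal: str
    quantity: Optional[int]      # absolute floor in bytes/inodes, e.g. 100Mi for memory.available
    percentage: Optional[float]  # fractional floor of capacity, e.g. 0.10 for nodefs.available

# Mirrors the HardEvictionThresholds shown in the nodeConfig entry above.
HARD_EVICTION = [
    Threshold("memory.available",   100 * 1024 * 1024, None),
    Threshold("nodefs.available",   None, 0.10),
    Threshold("nodefs.inodesFree",  None, 0.05),
    Threshold("imagefs.available",  None, 0.15),
    Threshold("imagefs.inodesFree", None, 0.05),
]

def breached(t: Threshold, available: float, capacity: float) -> bool:
    """True when the observed 'available' amount falls below the configured floor."""
    floor = t.quantity if t.quantity is not None else t.percentage * capacity
    return available < floor

# Example: 2 GiB free on a 4 GiB root filesystem is comfortably above the 10% nodefs floor.
print(breached(HARD_EVICTION[1], available=2 * 1024**3, capacity=4 * 1024**3))  # False
```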
Jan 20 23:54:33.974062 kubelet[3444]: I0120 23:54:33.973924 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43c303be-2e14-4312-b675-313b041a5014-xtables-lock\") pod \"kube-proxy-dt9kg\" (UID: \"43c303be-2e14-4312-b675-313b041a5014\") " pod="kube-system/kube-proxy-dt9kg" Jan 20 23:54:33.974062 kubelet[3444]: I0120 23:54:33.973996 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43c303be-2e14-4312-b675-313b041a5014-lib-modules\") pod \"kube-proxy-dt9kg\" (UID: \"43c303be-2e14-4312-b675-313b041a5014\") " pod="kube-system/kube-proxy-dt9kg" Jan 20 23:54:33.974062 kubelet[3444]: I0120 23:54:33.974037 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43c303be-2e14-4312-b675-313b041a5014-kube-proxy\") pod \"kube-proxy-dt9kg\" (UID: \"43c303be-2e14-4312-b675-313b041a5014\") " pod="kube-system/kube-proxy-dt9kg" Jan 20 23:54:33.974347 kubelet[3444]: I0120 23:54:33.974072 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptmv\" (UniqueName: \"kubernetes.io/projected/43c303be-2e14-4312-b675-313b041a5014-kube-api-access-lptmv\") pod \"kube-proxy-dt9kg\" (UID: \"43c303be-2e14-4312-b675-313b041a5014\") " pod="kube-system/kube-proxy-dt9kg" Jan 20 23:54:33.988654 systemd[1]: Created slice kubepods-burstable-poda750ce47_f09d_4250_b095_1911c332f55f.slice - libcontainer container kubepods-burstable-poda750ce47_f09d_4250_b095_1911c332f55f.slice. Jan 20 23:54:34.074750 kubelet[3444]: I0120 23:54:34.074621 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a750ce47-f09d-4250-b095-1911c332f55f-cni\") pod \"kube-flannel-ds-9cwsp\" (UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.074750 kubelet[3444]: I0120 23:54:34.074685 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a750ce47-f09d-4250-b095-1911c332f55f-xtables-lock\") pod \"kube-flannel-ds-9cwsp\" (UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.076914 kubelet[3444]: I0120 23:54:34.076798 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a750ce47-f09d-4250-b095-1911c332f55f-flannel-cfg\") pod \"kube-flannel-ds-9cwsp\" (UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.077108 kubelet[3444]: I0120 23:54:34.077033 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a750ce47-f09d-4250-b095-1911c332f55f-cni-plugin\") pod \"kube-flannel-ds-9cwsp\" (UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.077292 kubelet[3444]: I0120 23:54:34.077082 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a750ce47-f09d-4250-b095-1911c332f55f-run\") pod \"kube-flannel-ds-9cwsp\" 
(UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.077292 kubelet[3444]: I0120 23:54:34.077249 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmnn5\" (UniqueName: \"kubernetes.io/projected/a750ce47-f09d-4250-b095-1911c332f55f-kube-api-access-nmnn5\") pod \"kube-flannel-ds-9cwsp\" (UID: \"a750ce47-f09d-4250-b095-1911c332f55f\") " pod="kube-flannel/kube-flannel-ds-9cwsp" Jan 20 23:54:34.088028 kubelet[3444]: E0120 23:54:34.087938 3444 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 23:54:34.088028 kubelet[3444]: E0120 23:54:34.088021 3444 projected.go:196] Error preparing data for projected volume kube-api-access-lptmv for pod kube-system/kube-proxy-dt9kg: configmap "kube-root-ca.crt" not found Jan 20 23:54:34.088362 kubelet[3444]: E0120 23:54:34.088201 3444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43c303be-2e14-4312-b675-313b041a5014-kube-api-access-lptmv podName:43c303be-2e14-4312-b675-313b041a5014 nodeName:}" failed. No retries permitted until 2026-01-20 23:54:34.588163339 +0000 UTC m=+5.347692021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lptmv" (UniqueName: "kubernetes.io/projected/43c303be-2e14-4312-b675-313b041a5014-kube-api-access-lptmv") pod "kube-proxy-dt9kg" (UID: "43c303be-2e14-4312-b675-313b041a5014") : configmap "kube-root-ca.crt" not found Jan 20 23:54:34.305561 containerd[1967]: time="2026-01-20T23:54:34.305399196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9cwsp,Uid:a750ce47-f09d-4250-b095-1911c332f55f,Namespace:kube-flannel,Attempt:0,}" Jan 20 23:54:34.364691 containerd[1967]: time="2026-01-20T23:54:34.362890104Z" level=info msg="connecting to shim 4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3" address="unix:///run/containerd/s/b8243ddbfd35274407d0d0d886e09578cb1891497fceb69568fb2d455dc66407" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:34.414110 systemd[1]: Started cri-containerd-4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3.scope - libcontainer container 4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3. 
Jan 20 23:54:34.490047 containerd[1967]: time="2026-01-20T23:54:34.489975949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9cwsp,Uid:a750ce47-f09d-4250-b095-1911c332f55f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\"" Jan 20 23:54:34.494100 containerd[1967]: time="2026-01-20T23:54:34.493994005Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 20 23:54:34.859953 containerd[1967]: time="2026-01-20T23:54:34.859890075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dt9kg,Uid:43c303be-2e14-4312-b675-313b041a5014,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:34.902543 containerd[1967]: time="2026-01-20T23:54:34.901979115Z" level=info msg="connecting to shim f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028" address="unix:///run/containerd/s/cd92d705c569d53448853be01ce581759235ad810a397305e86ba5cbf889c86e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:34.944099 systemd[1]: Started cri-containerd-f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028.scope - libcontainer container f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028. Jan 20 23:54:34.997325 containerd[1967]: time="2026-01-20T23:54:34.997192851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dt9kg,Uid:43c303be-2e14-4312-b675-313b041a5014,Namespace:kube-system,Attempt:0,} returns sandbox id \"f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028\"" Jan 20 23:54:35.013755 containerd[1967]: time="2026-01-20T23:54:35.013241364Z" level=info msg="CreateContainer within sandbox \"f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 23:54:35.034754 containerd[1967]: time="2026-01-20T23:54:35.034669056Z" level=info msg="Container 713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:35.050757 containerd[1967]: time="2026-01-20T23:54:35.050666796Z" level=info msg="CreateContainer within sandbox \"f88be1b1c85c466881efa25d78e3442c4d24f93d3979ffe8ecef7ba4e60bd028\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b\"" Jan 20 23:54:35.053794 containerd[1967]: time="2026-01-20T23:54:35.053076900Z" level=info msg="StartContainer for \"713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b\"" Jan 20 23:54:35.057573 containerd[1967]: time="2026-01-20T23:54:35.057476868Z" level=info msg="connecting to shim 713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b" address="unix:///run/containerd/s/cd92d705c569d53448853be01ce581759235ad810a397305e86ba5cbf889c86e" protocol=ttrpc version=3 Jan 20 23:54:35.093108 systemd[1]: Started cri-containerd-713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b.scope - libcontainer container 713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b. Jan 20 23:54:35.222465 containerd[1967]: time="2026-01-20T23:54:35.222265261Z" level=info msg="StartContainer for \"713391dea6c35f1020f88909ce2a690e185c090254062aa3047baaceabb7050b\" returns successfully" Jan 20 23:54:35.956325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166024685.mount: Deactivated successfully. 
Jan 20 23:54:36.070126 containerd[1967]: time="2026-01-20T23:54:36.070012813Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:36.073681 containerd[1967]: time="2026-01-20T23:54:36.073170277Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0" Jan 20 23:54:36.076763 containerd[1967]: time="2026-01-20T23:54:36.075899605Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:36.083113 containerd[1967]: time="2026-01-20T23:54:36.083058133Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:36.085792 containerd[1967]: time="2026-01-20T23:54:36.085552801Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.591491416s" Jan 20 23:54:36.086133 containerd[1967]: time="2026-01-20T23:54:36.086095513Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jan 20 23:54:36.097280 containerd[1967]: time="2026-01-20T23:54:36.097218877Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 23:54:36.118763 containerd[1967]: time="2026-01-20T23:54:36.117934585Z" level=info msg="Container ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:36.132111 containerd[1967]: time="2026-01-20T23:54:36.132043777Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015\"" Jan 20 23:54:36.133367 containerd[1967]: time="2026-01-20T23:54:36.133322701Z" level=info msg="StartContainer for \"ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015\"" Jan 20 23:54:36.136291 containerd[1967]: time="2026-01-20T23:54:36.136163053Z" level=info msg="connecting to shim ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015" address="unix:///run/containerd/s/b8243ddbfd35274407d0d0d886e09578cb1891497fceb69568fb2d455dc66407" protocol=ttrpc version=3 Jan 20 23:54:36.169077 systemd[1]: Started cri-containerd-ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015.scope - libcontainer container ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015. 
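[Editor's note] Note that the install-cni-plugin container ee3e64a5… above connects to the same shim socket, unix:///run/containerd/s/b8243ddb…, that was opened for the kube-flannel-ds-9cwsp sandbox 4e04bc86… at 23:54:34 — containers run under their pod sandbox's shim. A sketch that groups the journal's "connecting to shim" entries by socket address to recover that sandbox-to-container association (assumes one journal entry per line, i.e. without the wrapping seen in this dump):

```python
import re
from collections import defaultdict

# Matches containerd's 'connecting to shim <id>" address="unix://..."' entries above.
SHIM = re.compile(r'connecting to shim (?P<id>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"')

def group_by_shim(journal_lines):
    """Map each shim socket address to the sandbox/container IDs that connected to it."""
    groups = defaultdict(list)
    for line in journal_lines:
        m = SHIM.search(line)
        if m is not None:
            groups[m["addr"]].append(m["id"])
    return dict(groups)
```

Applied to the entries above, both 4e04bc86… (the sandbox) and ee3e64a5… (its install-cni-plugin container) land under /run/containerd/s/b8243ddb….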
Jan 20 23:54:36.236666 containerd[1967]: time="2026-01-20T23:54:36.236532566Z" level=info msg="StartContainer for \"ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015\" returns successfully" Jan 20 23:54:36.238499 systemd[1]: cri-containerd-ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015.scope: Deactivated successfully. Jan 20 23:54:36.247160 containerd[1967]: time="2026-01-20T23:54:36.246838382Z" level=info msg="received container exit event container_id:\"ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015\" id:\"ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015\" pid:3788 exited_at:{seconds:1768953276 nanos:246266798}" Jan 20 23:54:36.287758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3e64a5c026f30ac16446ed6e29a0a487a9251541c4a1e89e37da2c4037f015-rootfs.mount: Deactivated successfully. Jan 20 23:54:36.677513 containerd[1967]: time="2026-01-20T23:54:36.676945624Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 23:54:36.696497 kubelet[3444]: I0120 23:54:36.696057 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dt9kg" podStartSLOduration=3.695982004 podStartE2EDuration="3.695982004s" podCreationTimestamp="2026-01-20 23:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:35.685297647 +0000 UTC m=+6.444826365" watchObservedRunningTime="2026-01-20 23:54:36.695982004 +0000 UTC m=+7.455510686" Jan 20 23:54:39.202790 containerd[1967]: time="2026-01-20T23:54:39.202420144Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:39.205149 containerd[1967]: time="2026-01-20T23:54:39.204708868Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=3544" Jan 20 23:54:39.207331 containerd[1967]: time="2026-01-20T23:54:39.207265252Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:39.213802 containerd[1967]: time="2026-01-20T23:54:39.213738220Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:54:39.218356 containerd[1967]: time="2026-01-20T23:54:39.217805080Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 2.540654124s" Jan 20 23:54:39.218356 containerd[1967]: time="2026-01-20T23:54:39.217866820Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jan 20 23:54:39.228347 containerd[1967]: time="2026-01-20T23:54:39.228279316Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 23:54:39.242165 containerd[1967]: time="2026-01-20T23:54:39.242114681Z" level=info 
msg="Container a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:39.256834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737440455.mount: Deactivated successfully. Jan 20 23:54:39.261288 containerd[1967]: time="2026-01-20T23:54:39.261231965Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b\"" Jan 20 23:54:39.263525 containerd[1967]: time="2026-01-20T23:54:39.263466677Z" level=info msg="StartContainer for \"a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b\"" Jan 20 23:54:39.266479 containerd[1967]: time="2026-01-20T23:54:39.266425673Z" level=info msg="connecting to shim a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b" address="unix:///run/containerd/s/b8243ddbfd35274407d0d0d886e09578cb1891497fceb69568fb2d455dc66407" protocol=ttrpc version=3 Jan 20 23:54:39.307068 systemd[1]: Started cri-containerd-a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b.scope - libcontainer container a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b. Jan 20 23:54:39.370052 systemd[1]: cri-containerd-a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b.scope: Deactivated successfully. Jan 20 23:54:39.375424 containerd[1967]: time="2026-01-20T23:54:39.375354353Z" level=info msg="received container exit event container_id:\"a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b\" id:\"a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b\" pid:3863 exited_at:{seconds:1768953279 nanos:375025553}" Jan 20 23:54:39.376644 containerd[1967]: time="2026-01-20T23:54:39.376578305Z" level=info msg="StartContainer for \"a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b\" returns successfully" Jan 20 23:54:39.396708 kubelet[3444]: I0120 23:54:39.396610 3444 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 23:54:39.439120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1d0bcfa262b15f0ed0b58ede2c3e114e98b9a306f3f252c4761fbeb11ba8f9b-rootfs.mount: Deactivated successfully. Jan 20 23:54:39.487374 systemd[1]: Created slice kubepods-burstable-pod53d33dc6_2281_4fe2_a6f6_a040757c11cb.slice - libcontainer container kubepods-burstable-pod53d33dc6_2281_4fe2_a6f6_a040757c11cb.slice. 
Jan 20 23:54:39.515086 kubelet[3444]: I0120 23:54:39.515037 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbf9\" (UniqueName: \"kubernetes.io/projected/9d018f2e-c778-44ff-abb9-4093304c7bad-kube-api-access-fjbf9\") pod \"coredns-66bc5c9577-mrhkq\" (UID: \"9d018f2e-c778-44ff-abb9-4093304c7bad\") " pod="kube-system/coredns-66bc5c9577-mrhkq" Jan 20 23:54:39.516205 kubelet[3444]: I0120 23:54:39.515434 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qv7\" (UniqueName: \"kubernetes.io/projected/53d33dc6-2281-4fe2-a6f6-a040757c11cb-kube-api-access-h8qv7\") pod \"coredns-66bc5c9577-qf5dr\" (UID: \"53d33dc6-2281-4fe2-a6f6-a040757c11cb\") " pod="kube-system/coredns-66bc5c9577-qf5dr" Jan 20 23:54:39.518443 systemd[1]: Created slice kubepods-burstable-pod9d018f2e_c778_44ff_abb9_4093304c7bad.slice - libcontainer container kubepods-burstable-pod9d018f2e_c778_44ff_abb9_4093304c7bad.slice. Jan 20 23:54:39.522651 kubelet[3444]: I0120 23:54:39.518528 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d018f2e-c778-44ff-abb9-4093304c7bad-config-volume\") pod \"coredns-66bc5c9577-mrhkq\" (UID: \"9d018f2e-c778-44ff-abb9-4093304c7bad\") " pod="kube-system/coredns-66bc5c9577-mrhkq" Jan 20 23:54:39.522910 kubelet[3444]: I0120 23:54:39.522874 3444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d33dc6-2281-4fe2-a6f6-a040757c11cb-config-volume\") pod \"coredns-66bc5c9577-qf5dr\" (UID: \"53d33dc6-2281-4fe2-a6f6-a040757c11cb\") " pod="kube-system/coredns-66bc5c9577-qf5dr" Jan 20 23:54:39.706949 containerd[1967]: time="2026-01-20T23:54:39.706886143Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 23:54:39.722207 containerd[1967]: time="2026-01-20T23:54:39.722156755Z" level=info msg="Container 636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:39.733689 containerd[1967]: time="2026-01-20T23:54:39.733613719Z" level=info msg="CreateContainer within sandbox \"4e04bc86b95842f29d744ef1e3ff1761e86b9eb1a389a75edaa8eccbc5235fe3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d\"" Jan 20 23:54:39.735819 containerd[1967]: time="2026-01-20T23:54:39.734594143Z" level=info msg="StartContainer for \"636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d\"" Jan 20 23:54:39.736892 containerd[1967]: time="2026-01-20T23:54:39.736828495Z" level=info msg="connecting to shim 636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d" address="unix:///run/containerd/s/b8243ddbfd35274407d0d0d886e09578cb1891497fceb69568fb2d455dc66407" protocol=ttrpc version=3 Jan 20 23:54:39.773102 systemd[1]: Started cri-containerd-636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d.scope - libcontainer container 636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d. 
Jan 20 23:54:39.811308 containerd[1967]: time="2026-01-20T23:54:39.811231459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qf5dr,Uid:53d33dc6-2281-4fe2-a6f6-a040757c11cb,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:39.839375 containerd[1967]: time="2026-01-20T23:54:39.838639928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrhkq,Uid:9d018f2e-c778-44ff-abb9-4093304c7bad,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:39.873786 containerd[1967]: time="2026-01-20T23:54:39.872296832Z" level=info msg="StartContainer for \"636917b313af83681471e6a12a189d7235383a9a91e1594d2844de478798980d\" returns successfully" Jan 20 23:54:39.930787 containerd[1967]: time="2026-01-20T23:54:39.930575228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qf5dr,Uid:53d33dc6-2281-4fe2-a6f6-a040757c11cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6f6df670109c3b9eeec33be3d4cc80b59e9e455dca969acba5a5b7b646a4d8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 23:54:39.931830 kubelet[3444]: E0120 23:54:39.931444 3444 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6f6df670109c3b9eeec33be3d4cc80b59e9e455dca969acba5a5b7b646a4d8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 23:54:39.931830 kubelet[3444]: E0120 23:54:39.931539 3444 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6f6df670109c3b9eeec33be3d4cc80b59e9e455dca969acba5a5b7b646a4d8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-qf5dr" Jan 20 23:54:39.931830 kubelet[3444]: E0120 23:54:39.931572 3444 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6f6df670109c3b9eeec33be3d4cc80b59e9e455dca969acba5a5b7b646a4d8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-qf5dr" Jan 20 23:54:39.933361 kubelet[3444]: E0120 23:54:39.933252 3444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qf5dr_kube-system(53d33dc6-2281-4fe2-a6f6-a040757c11cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qf5dr_kube-system(53d33dc6-2281-4fe2-a6f6-a040757c11cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a6f6df670109c3b9eeec33be3d4cc80b59e9e455dca969acba5a5b7b646a4d8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-qf5dr" podUID="53d33dc6-2281-4fe2-a6f6-a040757c11cb" Jan 20 23:54:39.942012 containerd[1967]: time="2026-01-20T23:54:39.941934248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrhkq,Uid:9d018f2e-c778-44ff-abb9-4093304c7bad,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1e71e594b4c9070f6110fe7dc948a37a5a72d0d4f8ff03e5a4f1d36725428766\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 23:54:39.942346 kubelet[3444]: E0120 23:54:39.942261 3444 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e71e594b4c9070f6110fe7dc948a37a5a72d0d4f8ff03e5a4f1d36725428766\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 23:54:39.942998 kubelet[3444]: E0120 23:54:39.942713 3444 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e71e594b4c9070f6110fe7dc948a37a5a72d0d4f8ff03e5a4f1d36725428766\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-mrhkq" Jan 20 23:54:39.942998 kubelet[3444]: E0120 23:54:39.942784 3444 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e71e594b4c9070f6110fe7dc948a37a5a72d0d4f8ff03e5a4f1d36725428766\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-mrhkq" Jan 20 23:54:39.942998 kubelet[3444]: E0120 23:54:39.942876 3444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mrhkq_kube-system(9d018f2e-c778-44ff-abb9-4093304c7bad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mrhkq_kube-system(9d018f2e-c778-44ff-abb9-4093304c7bad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e71e594b4c9070f6110fe7dc948a37a5a72d0d4f8ff03e5a4f1d36725428766\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-mrhkq" podUID="9d018f2e-c778-44ff-abb9-4093304c7bad" Jan 20 23:54:41.001213 (udev-worker)[3978]: Network interface NamePolicy= disabled on kernel command line. 
Jan 20 23:54:41.026044 systemd-networkd[1780]: flannel.1: Link UP Jan 20 23:54:41.026059 systemd-networkd[1780]: flannel.1: Gained carrier Jan 20 23:54:42.282977 systemd-networkd[1780]: flannel.1: Gained IPv6LL Jan 20 23:54:44.853552 ntpd[1931]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 20 23:54:44.853640 ntpd[1931]: Listen normally on 7 flannel.1 [fe80::ccdc:d9ff:fe2f:c747%4]:123 Jan 20 23:54:44.855028 ntpd[1931]: 20 Jan 23:54:44 ntpd[1931]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 20 23:54:44.855028 ntpd[1931]: 20 Jan 23:54:44 ntpd[1931]: Listen normally on 7 flannel.1 [fe80::ccdc:d9ff:fe2f:c747%4]:123 Jan 20 23:54:52.466962 containerd[1967]: time="2026-01-20T23:54:52.466805070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrhkq,Uid:9d018f2e-c778-44ff-abb9-4093304c7bad,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:52.505668 systemd-networkd[1780]: cni0: Link UP Jan 20 23:54:52.507999 systemd-networkd[1780]: cni0: Gained carrier Jan 20 23:54:52.519166 systemd-networkd[1780]: vethfb508f4c: Link UP Jan 20 23:54:52.524616 kernel: cni0: port 1(vethfb508f4c) entered blocking state Jan 20 23:54:52.524780 kernel: cni0: port 1(vethfb508f4c) entered disabled state Jan 20 23:54:52.526313 kernel: vethfb508f4c: entered allmulticast mode Jan 20 23:54:52.528755 kernel: vethfb508f4c: entered promiscuous mode Jan 20 23:54:52.530134 systemd-networkd[1780]: cni0: Lost carrier Jan 20 23:54:52.530953 (udev-worker)[4094]: Network interface NamePolicy= disabled on kernel command line. Jan 20 23:54:52.533008 (udev-worker)[4091]: Network interface NamePolicy= disabled on kernel command line. Jan 20 23:54:52.558743 kernel: cni0: port 1(vethfb508f4c) entered blocking state Jan 20 23:54:52.558851 kernel: cni0: port 1(vethfb508f4c) entered forwarding state Jan 20 23:54:52.561207 systemd-networkd[1780]: vethfb508f4c: Gained carrier Jan 20 23:54:52.561894 systemd-networkd[1780]: cni0: Gained carrier Jan 20 23:54:52.567017 containerd[1967]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a2950), "name":"cbr0", "type":"bridge"} Jan 20 23:54:52.567017 containerd[1967]: delegateAdd: netconf sent to delegate plugin: Jan 20 23:54:52.623925 containerd[1967]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-20T23:54:52.623852359Z" level=info msg="connecting to shim f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed" address="unix:///run/containerd/s/77dadd5922295b9bb52b0c4ab893712dfabb1d3d77e8cbc508f53846d6b0d1c5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:52.676074 systemd[1]: Started cri-containerd-f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed.scope - libcontainer container f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed. 
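For readability, this is the netconf that flannel handed to the bridge delegate in the entry above, reindented; the content is identical to the single-line JSON in the log:

    {
      "cniVersion": "0.3.1",
      "hairpinMode": true,
      "ipMasq": false,
      "ipam": {
        "ranges": [[{ "subnet": "192.168.0.0/24" }]],
        "routes": [{ "dst": "192.168.0.0/17" }],
        "type": "host-local"
      },
      "isDefaultGateway": true,
      "isGateway": true,
      "mtu": 8951,
      "name": "cbr0",
      "type": "bridge"
    }

The mtu of 8951 is consistent with flannel subtracting the 50-byte VXLAN overhead from a 9001-byte underlying interface MTU (9001 - 50 = 8951); 9001 is the usual EC2 jumbo-frame MTU and is inferred here rather than logged.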
Jan 20 23:54:52.755888 containerd[1967]: time="2026-01-20T23:54:52.755332412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrhkq,Uid:9d018f2e-c778-44ff-abb9-4093304c7bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed\"" Jan 20 23:54:52.767648 containerd[1967]: time="2026-01-20T23:54:52.767558072Z" level=info msg="CreateContainer within sandbox \"f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:54:52.792754 containerd[1967]: time="2026-01-20T23:54:52.791037188Z" level=info msg="Container 7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:52.809013 containerd[1967]: time="2026-01-20T23:54:52.808939388Z" level=info msg="CreateContainer within sandbox \"f5e215f8793f1138be206cbef371669a952fa77d0b8662aa4dff8185cd5c8aed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f\"" Jan 20 23:54:52.810155 containerd[1967]: time="2026-01-20T23:54:52.809982608Z" level=info msg="StartContainer for \"7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f\"" Jan 20 23:54:52.814008 containerd[1967]: time="2026-01-20T23:54:52.813949808Z" level=info msg="connecting to shim 7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f" address="unix:///run/containerd/s/77dadd5922295b9bb52b0c4ab893712dfabb1d3d77e8cbc508f53846d6b0d1c5" protocol=ttrpc version=3 Jan 20 23:54:52.851068 systemd[1]: Started cri-containerd-7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f.scope - libcontainer container 7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f. Jan 20 23:54:52.912621 containerd[1967]: time="2026-01-20T23:54:52.912572696Z" level=info msg="StartContainer for \"7c2befbb5f7576f82ec6f2be997b18b6f188a92ad9cdc7a2092fdfe47abcbf2f\" returns successfully" Jan 20 23:54:53.466618 containerd[1967]: time="2026-01-20T23:54:53.466560751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qf5dr,Uid:53d33dc6-2281-4fe2-a6f6-a040757c11cb,Namespace:kube-system,Attempt:0,}" Jan 20 23:54:53.499809 (udev-worker)[4103]: Network interface NamePolicy= disabled on kernel command line. 
Jan 20 23:54:53.504246 systemd-networkd[1780]: vethd7ab5b59: Link UP Jan 20 23:54:53.509504 kernel: cni0: port 2(vethd7ab5b59) entered blocking state Jan 20 23:54:53.509627 kernel: cni0: port 2(vethd7ab5b59) entered disabled state Jan 20 23:54:53.509681 kernel: vethd7ab5b59: entered allmulticast mode Jan 20 23:54:53.511016 kernel: vethd7ab5b59: entered promiscuous mode Jan 20 23:54:53.527342 kernel: cni0: port 2(vethd7ab5b59) entered blocking state Jan 20 23:54:53.527488 kernel: cni0: port 2(vethd7ab5b59) entered forwarding state Jan 20 23:54:53.527954 systemd-networkd[1780]: vethd7ab5b59: Gained carrier Jan 20 23:54:53.531746 containerd[1967]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"} Jan 20 23:54:53.531746 containerd[1967]: delegateAdd: netconf sent to delegate plugin: Jan 20 23:54:53.595597 containerd[1967]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-20T23:54:53.595503932Z" level=info msg="connecting to shim 57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9" address="unix:///run/containerd/s/611cd3a7d34ec937fa4b3c3bb81945defbd08f932ca0691022f1e3788e9dcf47" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:54:53.613832 systemd-networkd[1780]: cni0: Gained IPv6LL Jan 20 23:54:53.645420 systemd[1]: Started cri-containerd-57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9.scope - libcontainer container 57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9. 
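The bridge and veth state described by the kernel messages above can be cross-checked on the node with standard iproute2 commands (interface names taken from the log; expected output omitted):

    $ ip -d link show flannel.1      # VXLAN device created by flanneld
    $ ip -d link show cni0           # bridge created by the CNI bridge plugin
    $ ip link show master cni0       # ports enslaved to cni0: vethfb508f4c, vethd7ab5b59
    $ cat /run/flannel/subnet.env    # now present, which is why sandbox creation succeeds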
Jan 20 23:54:53.724626 containerd[1967]: time="2026-01-20T23:54:53.723899180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qf5dr,Uid:53d33dc6-2281-4fe2-a6f6-a040757c11cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9\"" Jan 20 23:54:53.737264 containerd[1967]: time="2026-01-20T23:54:53.736694229Z" level=info msg="CreateContainer within sandbox \"57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:54:53.764956 containerd[1967]: time="2026-01-20T23:54:53.763965489Z" level=info msg="Container ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:54:53.775564 kubelet[3444]: I0120 23:54:53.775466 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9cwsp" podStartSLOduration=16.049079882 podStartE2EDuration="20.775445637s" podCreationTimestamp="2026-01-20 23:54:33 +0000 UTC" firstStartedPulling="2026-01-20 23:54:34.492990169 +0000 UTC m=+5.252518839" lastFinishedPulling="2026-01-20 23:54:39.219355912 +0000 UTC m=+9.978884594" observedRunningTime="2026-01-20 23:54:40.734060552 +0000 UTC m=+11.493589258" watchObservedRunningTime="2026-01-20 23:54:53.775445637 +0000 UTC m=+24.534974307" Jan 20 23:54:53.777865 kubelet[3444]: I0120 23:54:53.777751 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mrhkq" podStartSLOduration=19.775703169 podStartE2EDuration="19.775703169s" podCreationTimestamp="2026-01-20 23:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:53.770192469 +0000 UTC m=+24.529721163" watchObservedRunningTime="2026-01-20 23:54:53.775703169 +0000 UTC m=+24.535231839" Jan 20 23:54:53.794039 containerd[1967]: time="2026-01-20T23:54:53.793700013Z" level=info msg="CreateContainer within sandbox \"57fb90ebe3ac1a2f659f1a024700c11a954e80df971c67f56905c4fdadaabee9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c\"" Jan 20 23:54:53.795654 containerd[1967]: time="2026-01-20T23:54:53.795590865Z" level=info msg="StartContainer for \"ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c\"" Jan 20 23:54:53.798991 containerd[1967]: time="2026-01-20T23:54:53.798919005Z" level=info msg="connecting to shim ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c" address="unix:///run/containerd/s/611cd3a7d34ec937fa4b3c3bb81945defbd08f932ca0691022f1e3788e9dcf47" protocol=ttrpc version=3 Jan 20 23:54:53.855196 systemd[1]: Started cri-containerd-ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c.scope - libcontainer container ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c. 
Jan 20 23:54:53.927667 containerd[1967]: time="2026-01-20T23:54:53.927560433Z" level=info msg="StartContainer for \"ae5ff8bb50754a797c1b0473280d5f4eeb024839aa5bd4df9a54f6b311ac100c\" returns successfully" Jan 20 23:54:54.570968 systemd-networkd[1780]: vethfb508f4c: Gained IPv6LL Jan 20 23:54:54.774544 kubelet[3444]: I0120 23:54:54.774453 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qf5dr" podStartSLOduration=20.77442799 podStartE2EDuration="20.77442799s" podCreationTimestamp="2026-01-20 23:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:54:54.772486558 +0000 UTC m=+25.532015288" watchObservedRunningTime="2026-01-20 23:54:54.77442799 +0000 UTC m=+25.533956684" Jan 20 23:54:54.827022 systemd-networkd[1780]: vethd7ab5b59: Gained IPv6LL Jan 20 23:54:56.853636 ntpd[1931]: Listen normally on 8 cni0 192.168.0.1:123 Jan 20 23:54:56.853761 ntpd[1931]: Listen normally on 9 cni0 [fe80::a406:a0ff:fea8:ea71%5]:123 Jan 20 23:54:56.854533 ntpd[1931]: 20 Jan 23:54:56 ntpd[1931]: Listen normally on 8 cni0 192.168.0.1:123 Jan 20 23:54:56.854533 ntpd[1931]: 20 Jan 23:54:56 ntpd[1931]: Listen normally on 9 cni0 [fe80::a406:a0ff:fea8:ea71%5]:123 Jan 20 23:54:56.854533 ntpd[1931]: 20 Jan 23:54:56 ntpd[1931]: Listen normally on 10 vethfb508f4c [fe80::ecef:3eff:fe2a:f343%6]:123 Jan 20 23:54:56.854533 ntpd[1931]: 20 Jan 23:54:56 ntpd[1931]: Listen normally on 11 vethd7ab5b59 [fe80::8cc3:85ff:fec2:4c4f%7]:123 Jan 20 23:54:56.853814 ntpd[1931]: Listen normally on 10 vethfb508f4c [fe80::ecef:3eff:fe2a:f343%6]:123 Jan 20 23:54:56.853861 ntpd[1931]: Listen normally on 11 vethd7ab5b59 [fe80::8cc3:85ff:fec2:4c4f%7]:123 Jan 20 23:55:24.192238 systemd[1]: Started sshd@5-172.31.29.43:22-68.220.241.50:53584.service - OpenSSH per-connection server daemon (68.220.241.50:53584). Jan 20 23:55:24.664888 sshd[4437]: Accepted publickey for core from 68.220.241.50 port 53584 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:24.667434 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:24.676763 systemd-logind[1937]: New session 7 of user core. Jan 20 23:55:24.684032 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 23:55:25.049378 sshd[4441]: Connection closed by 68.220.241.50 port 53584 Jan 20 23:55:25.049807 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:25.058024 systemd[1]: sshd@5-172.31.29.43:22-68.220.241.50:53584.service: Deactivated successfully. Jan 20 23:55:25.063979 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 23:55:25.067510 systemd-logind[1937]: Session 7 logged out. Waiting for processes to exit. Jan 20 23:55:25.070245 systemd-logind[1937]: Removed session 7. Jan 20 23:55:30.148013 systemd[1]: Started sshd@6-172.31.29.43:22-68.220.241.50:53600.service - OpenSSH per-connection server daemon (68.220.241.50:53600). Jan 20 23:55:30.608774 sshd[4476]: Accepted publickey for core from 68.220.241.50 port 53600 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:30.612223 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:30.621827 systemd-logind[1937]: New session 8 of user core. Jan 20 23:55:30.631056 systemd[1]: Started session-8.scope - Session 8 of User core. 
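With both veths attached to cni0, the CoreDNS pods whose startup is recorded above can also be confirmed from any machine holding cluster credentials; the k8s-app=kube-dns label is the selector the stock CoreDNS deployment uses and is assumed here rather than taken from this log:

    $ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    $ kubectl -n kube-system get deployment coredns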
Jan 20 23:55:30.971942 sshd[4480]: Connection closed by 68.220.241.50 port 53600 Jan 20 23:55:30.974102 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:30.982930 systemd[1]: sshd@6-172.31.29.43:22-68.220.241.50:53600.service: Deactivated successfully. Jan 20 23:55:30.986690 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 23:55:30.988921 systemd-logind[1937]: Session 8 logged out. Waiting for processes to exit. Jan 20 23:55:30.992220 systemd-logind[1937]: Removed session 8. Jan 20 23:55:36.068173 systemd[1]: Started sshd@7-172.31.29.43:22-68.220.241.50:50944.service - OpenSSH per-connection server daemon (68.220.241.50:50944). Jan 20 23:55:36.533923 sshd[4515]: Accepted publickey for core from 68.220.241.50 port 50944 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:36.538323 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:36.550898 systemd-logind[1937]: New session 9 of user core. Jan 20 23:55:36.557004 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 23:55:36.889118 sshd[4539]: Connection closed by 68.220.241.50 port 50944 Jan 20 23:55:36.890985 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:36.900344 systemd[1]: sshd@7-172.31.29.43:22-68.220.241.50:50944.service: Deactivated successfully. Jan 20 23:55:36.904280 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 23:55:36.906886 systemd-logind[1937]: Session 9 logged out. Waiting for processes to exit. Jan 20 23:55:36.910356 systemd-logind[1937]: Removed session 9. Jan 20 23:55:36.992037 systemd[1]: Started sshd@8-172.31.29.43:22-68.220.241.50:50954.service - OpenSSH per-connection server daemon (68.220.241.50:50954). Jan 20 23:55:37.475909 sshd[4551]: Accepted publickey for core from 68.220.241.50 port 50954 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:37.479369 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:37.489815 systemd-logind[1937]: New session 10 of user core. Jan 20 23:55:37.497041 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 23:55:37.916030 sshd[4555]: Connection closed by 68.220.241.50 port 50954 Jan 20 23:55:37.915829 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:37.924114 systemd-logind[1937]: Session 10 logged out. Waiting for processes to exit. Jan 20 23:55:37.924316 systemd[1]: sshd@8-172.31.29.43:22-68.220.241.50:50954.service: Deactivated successfully. Jan 20 23:55:37.929138 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 23:55:37.935851 systemd-logind[1937]: Removed session 10. Jan 20 23:55:38.016336 systemd[1]: Started sshd@9-172.31.29.43:22-68.220.241.50:50962.service - OpenSSH per-connection server daemon (68.220.241.50:50962). Jan 20 23:55:38.504402 sshd[4564]: Accepted publickey for core from 68.220.241.50 port 50962 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:38.507034 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:38.516110 systemd-logind[1937]: New session 11 of user core. Jan 20 23:55:38.527031 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 23:55:38.879163 sshd[4568]: Connection closed by 68.220.241.50 port 50962 Jan 20 23:55:38.878944 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:38.892213 systemd[1]: sshd@9-172.31.29.43:22-68.220.241.50:50962.service: Deactivated successfully. Jan 20 23:55:38.897064 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 23:55:38.899579 systemd-logind[1937]: Session 11 logged out. Waiting for processes to exit. Jan 20 23:55:38.902652 systemd-logind[1937]: Removed session 11. Jan 20 23:55:43.968372 systemd[1]: Started sshd@10-172.31.29.43:22-68.220.241.50:44564.service - OpenSSH per-connection server daemon (68.220.241.50:44564). Jan 20 23:55:44.428861 sshd[4601]: Accepted publickey for core from 68.220.241.50 port 44564 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:44.433118 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:44.442096 systemd-logind[1937]: New session 12 of user core. Jan 20 23:55:44.451024 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 23:55:44.787943 sshd[4605]: Connection closed by 68.220.241.50 port 44564 Jan 20 23:55:44.787805 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:44.797198 systemd[1]: sshd@10-172.31.29.43:22-68.220.241.50:44564.service: Deactivated successfully. Jan 20 23:55:44.801174 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 23:55:44.805416 systemd-logind[1937]: Session 12 logged out. Waiting for processes to exit. Jan 20 23:55:44.807368 systemd-logind[1937]: Removed session 12. Jan 20 23:55:44.889632 systemd[1]: Started sshd@11-172.31.29.43:22-68.220.241.50:44578.service - OpenSSH per-connection server daemon (68.220.241.50:44578). Jan 20 23:55:45.380068 sshd[4617]: Accepted publickey for core from 68.220.241.50 port 44578 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:45.382778 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:45.391296 systemd-logind[1937]: New session 13 of user core. Jan 20 23:55:45.405000 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 23:55:45.828777 sshd[4621]: Connection closed by 68.220.241.50 port 44578 Jan 20 23:55:45.831042 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:45.839648 systemd[1]: sshd@11-172.31.29.43:22-68.220.241.50:44578.service: Deactivated successfully. Jan 20 23:55:45.845435 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 23:55:45.847748 systemd-logind[1937]: Session 13 logged out. Waiting for processes to exit. Jan 20 23:55:45.851068 systemd-logind[1937]: Removed session 13. Jan 20 23:55:45.927906 systemd[1]: Started sshd@12-172.31.29.43:22-68.220.241.50:44592.service - OpenSSH per-connection server daemon (68.220.241.50:44592). Jan 20 23:55:46.433777 sshd[4630]: Accepted publickey for core from 68.220.241.50 port 44592 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:46.436295 sshd-session[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:46.444968 systemd-logind[1937]: New session 14 of user core. Jan 20 23:55:46.460046 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 23:55:47.510801 sshd[4640]: Connection closed by 68.220.241.50 port 44592 Jan 20 23:55:47.511966 sshd-session[4630]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:47.519649 systemd[1]: sshd@12-172.31.29.43:22-68.220.241.50:44592.service: Deactivated successfully. Jan 20 23:55:47.523516 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 23:55:47.526410 systemd-logind[1937]: Session 14 logged out. Waiting for processes to exit. Jan 20 23:55:47.531114 systemd-logind[1937]: Removed session 14. Jan 20 23:55:47.611943 systemd[1]: Started sshd@13-172.31.29.43:22-68.220.241.50:44598.service - OpenSSH per-connection server daemon (68.220.241.50:44598). Jan 20 23:55:48.105782 sshd[4669]: Accepted publickey for core from 68.220.241.50 port 44598 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:48.107953 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:48.116823 systemd-logind[1937]: New session 15 of user core. Jan 20 23:55:48.124063 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 23:55:48.721778 sshd[4673]: Connection closed by 68.220.241.50 port 44598 Jan 20 23:55:48.721576 sshd-session[4669]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:48.728505 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 23:55:48.728639 systemd-logind[1937]: Session 15 logged out. Waiting for processes to exit. Jan 20 23:55:48.731739 systemd[1]: sshd@13-172.31.29.43:22-68.220.241.50:44598.service: Deactivated successfully. Jan 20 23:55:48.738999 systemd-logind[1937]: Removed session 15. Jan 20 23:55:48.809103 systemd[1]: Started sshd@14-172.31.29.43:22-68.220.241.50:44604.service - OpenSSH per-connection server daemon (68.220.241.50:44604). Jan 20 23:55:49.276024 sshd[4685]: Accepted publickey for core from 68.220.241.50 port 44604 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:49.278918 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:49.287106 systemd-logind[1937]: New session 16 of user core. Jan 20 23:55:49.300048 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 23:55:49.640931 sshd[4689]: Connection closed by 68.220.241.50 port 44604 Jan 20 23:55:49.642018 sshd-session[4685]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:49.652879 systemd[1]: sshd@14-172.31.29.43:22-68.220.241.50:44604.service: Deactivated successfully. Jan 20 23:55:49.659865 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 23:55:49.662662 systemd-logind[1937]: Session 16 logged out. Waiting for processes to exit. Jan 20 23:55:49.666097 systemd-logind[1937]: Removed session 16. Jan 20 23:55:54.733646 systemd[1]: Started sshd@15-172.31.29.43:22-68.220.241.50:42600.service - OpenSSH per-connection server daemon (68.220.241.50:42600). Jan 20 23:55:55.203092 sshd[4725]: Accepted publickey for core from 68.220.241.50 port 42600 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:55:55.205885 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:55.215483 systemd-logind[1937]: New session 17 of user core. Jan 20 23:55:55.224996 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 23:55:55.560903 sshd[4729]: Connection closed by 68.220.241.50 port 42600 Jan 20 23:55:55.560668 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:55.569939 systemd[1]: sshd@15-172.31.29.43:22-68.220.241.50:42600.service: Deactivated successfully. Jan 20 23:55:55.570375 systemd-logind[1937]: Session 17 logged out. Waiting for processes to exit. Jan 20 23:55:55.575488 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 23:55:55.581112 systemd-logind[1937]: Removed session 17. Jan 20 23:56:00.655821 systemd[1]: Started sshd@16-172.31.29.43:22-68.220.241.50:42616.service - OpenSSH per-connection server daemon (68.220.241.50:42616). Jan 20 23:56:01.124223 sshd[4761]: Accepted publickey for core from 68.220.241.50 port 42616 ssh2: RSA SHA256:EGjPGkfI4KoHui2XqipyUEF2abYBfFMTrz/7tqw8EwU Jan 20 23:56:01.127150 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:56:01.135611 systemd-logind[1937]: New session 18 of user core. Jan 20 23:56:01.147068 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 23:56:01.486791 sshd[4765]: Connection closed by 68.220.241.50 port 42616 Jan 20 23:56:01.487980 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Jan 20 23:56:01.497921 systemd[1]: sshd@16-172.31.29.43:22-68.220.241.50:42616.service: Deactivated successfully. Jan 20 23:56:01.502945 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 23:56:01.506079 systemd-logind[1937]: Session 18 logged out. Waiting for processes to exit. Jan 20 23:56:01.509383 systemd-logind[1937]: Removed session 18. Jan 20 23:56:16.380686 systemd[1]: cri-containerd-398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656.scope: Deactivated successfully. Jan 20 23:56:16.381963 systemd[1]: cri-containerd-398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656.scope: Consumed 3.945s CPU time, 54.1M memory peak. Jan 20 23:56:16.388659 containerd[1967]: time="2026-01-20T23:56:16.388594827Z" level=info msg="received container exit event container_id:\"398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656\" id:\"398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656\" pid:3094 exit_status:1 exited_at:{seconds:1768953376 nanos:388028871}" Jan 20 23:56:16.444555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656-rootfs.mount: Deactivated successfully. 
Jan 20 23:56:17.008249 kubelet[3444]: I0120 23:56:17.008125 3444 scope.go:117] "RemoveContainer" containerID="398699eabcaaa54ba23c6aba57fefec32c39fd0b839ca1e3f0bb35284f567656" Jan 20 23:56:17.013980 containerd[1967]: time="2026-01-20T23:56:17.013922498Z" level=info msg="CreateContainer within sandbox \"b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 23:56:17.030941 containerd[1967]: time="2026-01-20T23:56:17.030390098Z" level=info msg="Container 7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:17.049646 containerd[1967]: time="2026-01-20T23:56:17.049568546Z" level=info msg="CreateContainer within sandbox \"b92b0375fdd333220f07c8775ca58764db4d2194d75aed60563846ad999f11aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a\"" Jan 20 23:56:17.051135 containerd[1967]: time="2026-01-20T23:56:17.050950826Z" level=info msg="StartContainer for \"7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a\"" Jan 20 23:56:17.053486 containerd[1967]: time="2026-01-20T23:56:17.053384150Z" level=info msg="connecting to shim 7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a" address="unix:///run/containerd/s/f3c48fe03ebbfd56a727b2203a3e516334df76a1cbb832f2cf3a8a63cb987a80" protocol=ttrpc version=3 Jan 20 23:56:17.095081 systemd[1]: Started cri-containerd-7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a.scope - libcontainer container 7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a. Jan 20 23:56:17.183034 containerd[1967]: time="2026-01-20T23:56:17.182974023Z" level=info msg="StartContainer for \"7194d08b3890d0c0448ac9145088b9dd26b067d4b68213ae73ee5ce595701c0a\" returns successfully" Jan 20 23:56:20.971688 systemd[1]: cri-containerd-0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac.scope: Deactivated successfully. Jan 20 23:56:20.972945 systemd[1]: cri-containerd-0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac.scope: Consumed 5.119s CPU time, 20.6M memory peak. Jan 20 23:56:20.978572 containerd[1967]: time="2026-01-20T23:56:20.978480418Z" level=info msg="received container exit event container_id:\"0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac\" id:\"0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac\" pid:3107 exit_status:1 exited_at:{seconds:1768953380 nanos:978036454}" Jan 20 23:56:21.021376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac-rootfs.mount: Deactivated successfully. 
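At this point the kube-controller-manager container (398699ea..., exit_status:1) has been replaced at Attempt:1 and the kube-scheduler container (0c7e106b..., exit_status:1) has just exited as well. One way to inspect such restarts directly on the node, assuming crictl is installed and pointed at containerd's CRI socket (not shown in this log):

    $ crictl ps -a --name kube-controller-manager   # lists the exited instance and its replacement
    $ crictl logs --tail=50 398699eabcaa            # last output of the exited instance
    $ crictl inspect 0c7e106b48e4                   # full state, including exit code, of the scheduler container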
Jan 20 23:56:21.751915 kubelet[3444]: E0120 23:56:21.751840 3444 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 23:56:22.033841 kubelet[3444]: I0120 23:56:22.033413 3444 scope.go:117] "RemoveContainer" containerID="0c7e106b48e497a0403d214a671910df4b13d8c988c14a769bfb8c977342daac" Jan 20 23:56:22.037076 containerd[1967]: time="2026-01-20T23:56:22.036996007Z" level=info msg="CreateContainer within sandbox \"01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 23:56:22.059395 containerd[1967]: time="2026-01-20T23:56:22.056994163Z" level=info msg="Container 390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:22.075834 containerd[1967]: time="2026-01-20T23:56:22.075780583Z" level=info msg="CreateContainer within sandbox \"01677a9f3bd408cb5faadee1f83963f6b0b94cd8a52f99e31cb2b0b46fd23987\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638\"" Jan 20 23:56:22.077020 containerd[1967]: time="2026-01-20T23:56:22.076972771Z" level=info msg="StartContainer for \"390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638\"" Jan 20 23:56:22.079167 containerd[1967]: time="2026-01-20T23:56:22.079096567Z" level=info msg="connecting to shim 390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638" address="unix:///run/containerd/s/754d361ef18c2ff5829c3bdb4341715c1e179b8c50c50f9bb3ba38c4ad2bfcdb" protocol=ttrpc version=3 Jan 20 23:56:22.116064 systemd[1]: Started cri-containerd-390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638.scope - libcontainer container 390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638. Jan 20 23:56:22.208335 containerd[1967]: time="2026-01-20T23:56:22.208269404Z" level=info msg="StartContainer for \"390e9f958c6199e63b8ca3992b555c6f38ad6b11d9a53acc37eec6e2f3b98638\" returns successfully" Jan 20 23:56:31.752201 kubelet[3444]: E0120 23:56:31.752054 3444 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-43?timeout=10s\": context deadline exceeded"
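The two lease-update failures above are the kubelet's PUT to its node Lease timing out while the control-plane containers were restarting. Hedged follow-up checks from a machine with cluster credentials, with the namespace and lease name taken from the URL in the log:

    $ kubectl -n kube-node-lease get lease ip-172-31-29-43 -o yaml   # spec.renewTime shows the last successful renewal
    $ kubectl get --raw='/readyz?verbose'                            # API server health checks
    $ kubectl get node ip-172-31-29-43                               # node status (Ready/NotReady)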