Jan 23 17:57:59.146930 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 17:57:59.146973 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026 Jan 23 17:57:59.146997 kernel: KASLR disabled due to lack of seed Jan 23 17:57:59.147013 kernel: efi: EFI v2.7 by EDK II Jan 23 17:57:59.147029 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598 Jan 23 17:57:59.147044 kernel: secureboot: Secure boot disabled Jan 23 17:57:59.147061 kernel: ACPI: Early table checksum verification disabled Jan 23 17:57:59.147076 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 17:57:59.147092 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 17:57:59.147107 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 17:57:59.147175 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 17:57:59.147201 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 23 17:57:59.147217 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 17:57:59.147234 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 17:57:59.147252 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 17:57:59.147268 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 17:57:59.147289 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 17:57:59.147305 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 17:57:59.147321 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 17:57:59.147336 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 17:57:59.147352 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 17:57:59.147368 kernel: printk: legacy bootconsole [uart0] enabled Jan 23 17:57:59.147384 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 17:57:59.147400 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 17:57:59.147416 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 23 17:57:59.147432 kernel: Zone ranges: Jan 23 17:57:59.147448 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 17:57:59.147468 kernel: DMA32 empty Jan 23 17:57:59.147484 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 17:57:59.147499 kernel: Device empty Jan 23 17:57:59.147515 kernel: Movable zone start for each node Jan 23 17:57:59.147531 kernel: Early memory node ranges Jan 23 17:57:59.147547 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 17:57:59.147563 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 17:57:59.147578 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 17:57:59.147594 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 17:57:59.147610 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 17:57:59.147626 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 17:57:59.147641 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 17:57:59.147661 kernel: node 0: 
[mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 17:57:59.147684 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 17:57:59.147701 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 17:57:59.147718 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 23 17:57:59.147734 kernel: psci: probing for conduit method from ACPI. Jan 23 17:57:59.147755 kernel: psci: PSCIv1.0 detected in firmware. Jan 23 17:57:59.147772 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 17:57:59.147788 kernel: psci: Trusted OS migration not required Jan 23 17:57:59.147805 kernel: psci: SMC Calling Convention v1.1 Jan 23 17:57:59.147821 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 17:57:59.147838 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 17:57:59.147855 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 17:57:59.147872 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 17:57:59.147889 kernel: Detected PIPT I-cache on CPU0 Jan 23 17:57:59.147906 kernel: CPU features: detected: GIC system register CPU interface Jan 23 17:57:59.147923 kernel: CPU features: detected: Spectre-v2 Jan 23 17:57:59.147943 kernel: CPU features: detected: Spectre-v3a Jan 23 17:57:59.147960 kernel: CPU features: detected: Spectre-BHB Jan 23 17:57:59.147976 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 17:57:59.147993 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 17:57:59.148009 kernel: alternatives: applying boot alternatives Jan 23 17:57:59.148028 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:57:59.148046 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 17:57:59.148063 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 17:57:59.148080 kernel: Fallback order for Node 0: 0 Jan 23 17:57:59.148097 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 23 17:57:59.148206 kernel: Policy zone: Normal Jan 23 17:57:59.148234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 17:57:59.148251 kernel: software IO TLB: area num 2. Jan 23 17:57:59.148268 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Jan 23 17:57:59.148285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 17:57:59.148301 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 17:57:59.148319 kernel: rcu: RCU event tracing is enabled. Jan 23 17:57:59.148336 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 17:57:59.148353 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 17:57:59.148370 kernel: Tracing variant of Tasks RCU enabled. Jan 23 17:57:59.148387 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 17:57:59.148404 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 17:57:59.148425 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 17:57:59.148443 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 17:57:59.148460 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 17:57:59.148476 kernel: GICv3: 96 SPIs implemented Jan 23 17:57:59.148493 kernel: GICv3: 0 Extended SPIs implemented Jan 23 17:57:59.148509 kernel: Root IRQ handler: gic_handle_irq Jan 23 17:57:59.148526 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 17:57:59.148543 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 23 17:57:59.148559 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 17:57:59.148576 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 17:57:59.148593 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 17:57:59.148610 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 23 17:57:59.148631 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 23 17:57:59.148648 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 17:57:59.148664 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 23 17:57:59.148681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 17:57:59.148698 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 17:57:59.148715 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 17:57:59.148732 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 17:57:59.148749 kernel: Console: colour dummy device 80x25 Jan 23 17:57:59.148766 kernel: printk: legacy console [tty1] enabled Jan 23 17:57:59.148784 kernel: ACPI: Core revision 20240827 Jan 23 17:57:59.148801 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 23 17:57:59.148823 kernel: pid_max: default: 32768 minimum: 301 Jan 23 17:57:59.148840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 17:57:59.149619 kernel: landlock: Up and running. Jan 23 17:57:59.149652 kernel: SELinux: Initializing. Jan 23 17:57:59.149671 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:57:59.149690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:57:59.149708 kernel: rcu: Hierarchical SRCU implementation. Jan 23 17:57:59.149726 kernel: rcu: Max phase no-delay instances is 400. Jan 23 17:57:59.149744 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 17:57:59.149772 kernel: Remapping and enabling EFI services. Jan 23 17:57:59.149790 kernel: smp: Bringing up secondary CPUs ... Jan 23 17:57:59.149808 kernel: Detected PIPT I-cache on CPU1 Jan 23 17:57:59.149825 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 17:57:59.149843 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 23 17:57:59.149860 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 17:57:59.149878 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 17:57:59.149895 kernel: SMP: Total of 2 processors activated. 
Jan 23 17:57:59.149913 kernel: CPU: All CPU(s) started at EL1 Jan 23 17:57:59.149945 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 17:57:59.149964 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 17:57:59.149985 kernel: CPU features: detected: CRC32 instructions Jan 23 17:57:59.150003 kernel: alternatives: applying system-wide alternatives Jan 23 17:57:59.150022 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved) Jan 23 17:57:59.150041 kernel: devtmpfs: initialized Jan 23 17:57:59.150059 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 17:57:59.150082 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 17:57:59.150100 kernel: 16880 pages in range for non-PLT usage Jan 23 17:57:59.150158 kernel: 508400 pages in range for PLT usage Jan 23 17:57:59.150213 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 17:57:59.150235 kernel: SMBIOS 3.0.0 present. Jan 23 17:57:59.150253 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 17:57:59.150272 kernel: DMI: Memory slots populated: 0/0 Jan 23 17:57:59.150290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 17:57:59.150308 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 17:57:59.150334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 17:57:59.150352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 17:57:59.150370 kernel: audit: initializing netlink subsys (disabled) Jan 23 17:57:59.150388 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 Jan 23 17:57:59.150406 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 17:57:59.150424 kernel: cpuidle: using governor menu Jan 23 17:57:59.150442 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 17:57:59.150461 kernel: ASID allocator initialised with 65536 entries Jan 23 17:57:59.150479 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 17:57:59.150500 kernel: Serial: AMBA PL011 UART driver Jan 23 17:57:59.150519 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 17:57:59.150536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 17:57:59.150554 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 17:57:59.150572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 17:57:59.150590 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 17:57:59.150608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 17:57:59.150626 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 17:57:59.150645 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 17:57:59.150667 kernel: ACPI: Added _OSI(Module Device) Jan 23 17:57:59.150686 kernel: ACPI: Added _OSI(Processor Device) Jan 23 17:57:59.150704 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 17:57:59.150722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 17:57:59.150740 kernel: ACPI: Interpreter enabled Jan 23 17:57:59.150758 kernel: ACPI: Using GIC for interrupt routing Jan 23 17:57:59.150776 kernel: ACPI: MCFG table detected, 1 entries Jan 23 17:57:59.150794 kernel: ACPI: CPU0 has been hot-added Jan 23 17:57:59.150812 kernel: ACPI: CPU1 has been hot-added Jan 23 17:57:59.150835 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 17:57:59.151178 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 17:57:59.155413 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 17:57:59.155607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 17:57:59.155793 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 17:57:59.155981 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 17:57:59.156006 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 17:57:59.156034 kernel: acpiphp: Slot [1] registered Jan 23 17:57:59.156053 kernel: acpiphp: Slot [2] registered Jan 23 17:57:59.156071 kernel: acpiphp: Slot [3] registered Jan 23 17:57:59.156089 kernel: acpiphp: Slot [4] registered Jan 23 17:57:59.156107 kernel: acpiphp: Slot [5] registered Jan 23 17:57:59.156179 kernel: acpiphp: Slot [6] registered Jan 23 17:57:59.156200 kernel: acpiphp: Slot [7] registered Jan 23 17:57:59.156218 kernel: acpiphp: Slot [8] registered Jan 23 17:57:59.156236 kernel: acpiphp: Slot [9] registered Jan 23 17:57:59.156254 kernel: acpiphp: Slot [10] registered Jan 23 17:57:59.156279 kernel: acpiphp: Slot [11] registered Jan 23 17:57:59.156297 kernel: acpiphp: Slot [12] registered Jan 23 17:57:59.156315 kernel: acpiphp: Slot [13] registered Jan 23 17:57:59.156334 kernel: acpiphp: Slot [14] registered Jan 23 17:57:59.156352 kernel: acpiphp: Slot [15] registered Jan 23 17:57:59.156369 kernel: acpiphp: Slot [16] registered Jan 23 17:57:59.156387 kernel: acpiphp: Slot [17] registered Jan 23 17:57:59.156405 kernel: acpiphp: Slot [18] registered Jan 23 17:57:59.156423 kernel: acpiphp: Slot [19] registered Jan 23 17:57:59.156444 kernel: acpiphp: Slot [20] registered Jan 23 17:57:59.156462 kernel: acpiphp: Slot [21] registered Jan 23 17:57:59.156480 
kernel: acpiphp: Slot [22] registered Jan 23 17:57:59.156497 kernel: acpiphp: Slot [23] registered Jan 23 17:57:59.156515 kernel: acpiphp: Slot [24] registered Jan 23 17:57:59.156533 kernel: acpiphp: Slot [25] registered Jan 23 17:57:59.156550 kernel: acpiphp: Slot [26] registered Jan 23 17:57:59.156568 kernel: acpiphp: Slot [27] registered Jan 23 17:57:59.156586 kernel: acpiphp: Slot [28] registered Jan 23 17:57:59.156603 kernel: acpiphp: Slot [29] registered Jan 23 17:57:59.156625 kernel: acpiphp: Slot [30] registered Jan 23 17:57:59.156643 kernel: acpiphp: Slot [31] registered Jan 23 17:57:59.156661 kernel: PCI host bridge to bus 0000:00 Jan 23 17:57:59.156882 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 17:57:59.157055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 17:57:59.157260 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 17:57:59.157428 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 17:57:59.157654 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 23 17:57:59.157867 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 23 17:57:59.158099 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 23 17:57:59.158386 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 23 17:57:59.158582 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 23 17:57:59.158777 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 17:57:59.159001 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 23 17:57:59.159266 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 23 17:57:59.159461 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 23 17:57:59.159653 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 23 17:57:59.159841 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 17:57:59.160020 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 17:57:59.160226 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 17:57:59.160413 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 17:57:59.160440 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:57:59.160460 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:57:59.160479 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:57:59.160498 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:57:59.160517 kernel: iommu: Default domain type: Translated Jan 23 17:57:59.160535 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:57:59.160554 kernel: efivars: Registered efivars operations Jan 23 17:57:59.160572 kernel: vgaarb: loaded Jan 23 17:57:59.160598 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:57:59.160617 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:57:59.160636 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:57:59.160655 kernel: pnp: PnP ACPI init Jan 23 17:57:59.160867 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 17:57:59.160896 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:57:59.160915 kernel: NET: Registered PF_INET protocol family Jan 23 17:57:59.160934 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 17:57:59.160964 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 17:57:59.160984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:57:59.161002 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:57:59.161021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 17:57:59.161039 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 17:57:59.161058 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:57:59.161076 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:57:59.161094 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:57:59.161163 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:57:59.161193 kernel: kvm [1]: HYP mode not available Jan 23 17:57:59.161212 kernel: Initialise system trusted keyrings Jan 23 17:57:59.161231 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 17:57:59.161249 kernel: Key type asymmetric registered Jan 23 17:57:59.161267 kernel: Asymmetric key parser 'x509' registered Jan 23 17:57:59.161285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:57:59.161304 kernel: io scheduler mq-deadline registered Jan 23 17:57:59.161322 kernel: io scheduler kyber registered Jan 23 17:57:59.161342 kernel: io scheduler bfq registered Jan 23 17:57:59.161613 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 17:57:59.161643 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:57:59.161662 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:57:59.161680 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 17:57:59.161698 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 17:57:59.161717 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:57:59.161757 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:57:59.164540 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 17:57:59.164588 kernel: printk: legacy console [ttyS0] disabled Jan 23 17:57:59.164607 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 17:57:59.164627 kernel: printk: legacy console [ttyS0] enabled Jan 23 17:57:59.164645 kernel: printk: legacy bootconsole [uart0] disabled Jan 23 17:57:59.164663 kernel: thunder_xcv, ver 1.0 Jan 23 17:57:59.164681 kernel: thunder_bgx, ver 1.0 Jan 23 17:57:59.164698 kernel: nicpf, ver 1.0 Jan 23 17:57:59.164716 kernel: nicvf, ver 1.0 Jan 23 17:57:59.164919 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:57:59.165103 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:57:58 UTC (1769191078) Jan 23 17:57:59.165160 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 17:57:59.165179 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 23 17:57:59.165198 kernel: watchdog: NMI not fully supported Jan 23 17:57:59.165215 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:57:59.165233 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:57:59.165251 kernel: Segment Routing with IPv6 Jan 23 17:57:59.165268 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 17:57:59.165286 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:57:59.165310 kernel: Key type 
dns_resolver registered Jan 23 17:57:59.165328 kernel: registered taskstats version 1 Jan 23 17:57:59.165346 kernel: Loading compiled-in X.509 certificates Jan 23 17:57:59.165364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb' Jan 23 17:57:59.165382 kernel: Demotion targets for Node 0: null Jan 23 17:57:59.165399 kernel: Key type .fscrypt registered Jan 23 17:57:59.165417 kernel: Key type fscrypt-provisioning registered Jan 23 17:57:59.165434 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:57:59.165452 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:57:59.165474 kernel: ima: No architecture policies found Jan 23 17:57:59.165492 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:57:59.165510 kernel: clk: Disabling unused clocks Jan 23 17:57:59.165528 kernel: PM: genpd: Disabling unused power domains Jan 23 17:57:59.165545 kernel: Warning: unable to open an initial console. Jan 23 17:57:59.165564 kernel: Freeing unused kernel memory: 39552K Jan 23 17:57:59.165581 kernel: Run /init as init process Jan 23 17:57:59.165599 kernel: with arguments: Jan 23 17:57:59.165617 kernel: /init Jan 23 17:57:59.165638 kernel: with environment: Jan 23 17:57:59.165656 kernel: HOME=/ Jan 23 17:57:59.165673 kernel: TERM=linux Jan 23 17:57:59.165692 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:57:59.165716 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:57:59.165736 systemd[1]: Detected virtualization amazon. Jan 23 17:57:59.165755 systemd[1]: Detected architecture arm64. Jan 23 17:57:59.165777 systemd[1]: Running in initrd. Jan 23 17:57:59.165796 systemd[1]: No hostname configured, using default hostname. Jan 23 17:57:59.165816 systemd[1]: Hostname set to . Jan 23 17:57:59.165835 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:57:59.165854 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:57:59.165873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:57:59.165892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:57:59.165913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:57:59.165937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:57:59.165957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:57:59.165978 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:57:59.166000 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 17:57:59.166020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 17:57:59.166039 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:57:59.166059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 23 17:57:59.166081 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:57:59.166101 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:57:59.166140 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:57:59.166162 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:57:59.166181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:57:59.166201 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:57:59.166220 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:57:59.166240 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:57:59.166259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:57:59.166284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:57:59.166304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:57:59.166323 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:57:59.166342 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:57:59.166362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:57:59.166381 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:57:59.166401 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:57:59.166420 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:57:59.166444 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:57:59.166464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:57:59.166483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:59.166502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:57:59.166523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:57:59.166594 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:57:59.166616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:57:59.166636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:57:59.166697 systemd-journald[259]: Collecting audit messages is disabled. Jan 23 17:57:59.166744 kernel: Bridge firewalling registered Jan 23 17:57:59.166764 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:57:59.166784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:59.166804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:59.166824 systemd-journald[259]: Journal started Jan 23 17:57:59.166860 systemd-journald[259]: Runtime Journal (/run/log/journal/ec28ba7182331ed79b2e1656be1e84c4) is 8M, max 75.3M, 67.3M free. Jan 23 17:57:59.116813 systemd-modules-load[260]: Inserted module 'overlay' Jan 23 17:57:59.182767 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:57:59.143060 systemd-modules-load[260]: Inserted module 'br_netfilter' Jan 23 17:57:59.191819 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 23 17:57:59.203474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:57:59.208769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:57:59.221436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:57:59.240079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:59.257604 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:57:59.268325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:57:59.277924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:57:59.287754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:57:59.310249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:57:59.317076 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:57:59.361244 dracut-cmdline[302]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:57:59.391980 systemd-resolved[292]: Positive Trust Anchors: Jan 23 17:57:59.399410 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:57:59.399478 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:57:59.546154 kernel: SCSI subsystem initialized Jan 23 17:57:59.554155 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:57:59.566151 kernel: iscsi: registered transport (tcp) Jan 23 17:57:59.588396 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:57:59.588480 kernel: QLogic iSCSI HBA Driver Jan 23 17:57:59.624303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:57:59.665873 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:57:59.679683 systemd-resolved[292]: Defaulting to hostname 'linux'. Jan 23 17:57:59.688170 kernel: random: crng init done Jan 23 17:57:59.682812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:57:59.690274 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:57:59.699308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:57:59.778873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 23 17:57:59.788632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 17:57:59.875197 kernel: raid6: neonx8 gen() 6553 MB/s Jan 23 17:57:59.892164 kernel: raid6: neonx4 gen() 6587 MB/s Jan 23 17:57:59.909157 kernel: raid6: neonx2 gen() 5469 MB/s Jan 23 17:57:59.926162 kernel: raid6: neonx1 gen() 3966 MB/s Jan 23 17:57:59.943166 kernel: raid6: int64x8 gen() 3689 MB/s Jan 23 17:57:59.960169 kernel: raid6: int64x4 gen() 3723 MB/s Jan 23 17:57:59.977158 kernel: raid6: int64x2 gen() 3618 MB/s Jan 23 17:57:59.995281 kernel: raid6: int64x1 gen() 2774 MB/s Jan 23 17:57:59.995329 kernel: raid6: using algorithm neonx4 gen() 6587 MB/s Jan 23 17:58:00.014177 kernel: raid6: .... xor() 4894 MB/s, rmw enabled Jan 23 17:58:00.014238 kernel: raid6: using neon recovery algorithm Jan 23 17:58:00.023053 kernel: xor: measuring software checksum speed Jan 23 17:58:00.023142 kernel: 8regs : 12979 MB/sec Jan 23 17:58:00.025576 kernel: 32regs : 11913 MB/sec Jan 23 17:58:00.025623 kernel: arm64_neon : 9216 MB/sec Jan 23 17:58:00.025648 kernel: xor: using function: 8regs (12979 MB/sec) Jan 23 17:58:00.120231 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:58:00.130189 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:58:00.140889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:58:00.205558 systemd-udevd[510]: Using default interface naming scheme 'v255'. Jan 23 17:58:00.215718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:58:00.232833 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:58:00.268012 dracut-pre-trigger[521]: rd.md=0: removing MD RAID activation Jan 23 17:58:00.319101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:58:00.331414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:58:00.459537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:58:00.474792 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:58:00.636142 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:58:00.636214 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 17:58:00.648146 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:58:00.650611 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 17:58:00.650929 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 17:58:00.653859 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 17:58:00.663429 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 17:58:00.663719 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b9:92:a1:54:2d Jan 23 17:58:00.666010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:58:00.669245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:58:00.675351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:58:00.683815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:58:00.693084 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:58:00.693162 kernel: GPT:9289727 != 33554431 Jan 23 17:58:00.693190 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 23 17:58:00.684562 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:58:00.701276 kernel: GPT:9289727 != 33554431 Jan 23 17:58:00.701680 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:58:00.701792 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:58:00.709910 (udev-worker)[570]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:58:00.745195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:58:00.759182 kernel: nvme nvme0: using unchecked data buffer Jan 23 17:58:00.905851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 17:58:00.928789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:58:00.970818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 17:58:00.977926 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:58:01.017982 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 17:58:01.023323 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 17:58:01.031865 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:58:01.034884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:58:01.038419 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:58:01.042858 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:58:01.051002 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:58:01.096181 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:58:01.096604 disk-uuid[690]: Primary Header is updated. Jan 23 17:58:01.096604 disk-uuid[690]: Secondary Entries is updated. Jan 23 17:58:01.096604 disk-uuid[690]: Secondary Header is updated. Jan 23 17:58:01.099596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:58:02.131265 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:58:02.132518 disk-uuid[698]: The operation has completed successfully. Jan 23 17:58:02.319514 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:58:02.321209 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:58:02.426193 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:58:02.469468 sh[958]: Success Jan 23 17:58:02.499531 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:58:02.499836 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:58:02.499869 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:58:02.514144 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:58:02.611049 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:58:02.624245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:58:02.634224 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 17:58:02.674154 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (981) Jan 23 17:58:02.678856 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:58:02.678912 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:58:02.829161 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:58:02.829256 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:58:02.830584 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:58:02.857577 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:58:02.865481 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:58:02.871785 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:58:02.878482 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:58:02.887315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 17:58:02.941219 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Jan 23 17:58:02.946946 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:58:02.947033 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:58:02.955945 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:58:02.956019 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:58:02.966167 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:58:02.972218 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:58:02.977668 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:58:03.109958 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:58:03.121208 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:58:03.197958 systemd-networkd[1165]: lo: Link UP Jan 23 17:58:03.198510 systemd-networkd[1165]: lo: Gained carrier Jan 23 17:58:03.203602 systemd-networkd[1165]: Enumeration completed Jan 23 17:58:03.204783 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:58:03.204790 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:58:03.214429 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:58:03.223913 systemd[1]: Reached target network.target - Network. Jan 23 17:58:03.232832 systemd-networkd[1165]: eth0: Link UP Jan 23 17:58:03.232840 systemd-networkd[1165]: eth0: Gained carrier Jan 23 17:58:03.232863 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 17:58:03.268234 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.16.186/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:58:03.519714 ignition[1067]: Ignition 2.22.0 Jan 23 17:58:03.520199 ignition[1067]: Stage: fetch-offline Jan 23 17:58:03.521140 ignition[1067]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:03.521169 ignition[1067]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:03.521615 ignition[1067]: Ignition finished successfully Jan 23 17:58:03.533640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:58:03.541406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 17:58:03.608844 ignition[1176]: Ignition 2.22.0 Jan 23 17:58:03.608878 ignition[1176]: Stage: fetch Jan 23 17:58:03.609441 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:03.609466 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:03.609610 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:03.626101 ignition[1176]: PUT result: OK Jan 23 17:58:03.632993 ignition[1176]: parsed url from cmdline: "" Jan 23 17:58:03.633023 ignition[1176]: no config URL provided Jan 23 17:58:03.633039 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:58:03.633065 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:58:03.633138 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:03.643318 ignition[1176]: PUT result: OK Jan 23 17:58:03.643534 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 17:58:03.648497 ignition[1176]: GET result: OK Jan 23 17:58:03.650529 ignition[1176]: parsing config with SHA512: 9b8ed63414a716c1386ca6c4b1f82b3fe3baedc2ed9e3dda44b856fed6ac382fd8f1470824000be232c6e8c76ff9687a84258aeacce74a6d7e08a883c1e792dd Jan 23 17:58:03.658682 unknown[1176]: fetched base config from "system" Jan 23 17:58:03.658702 unknown[1176]: fetched base config from "system" Jan 23 17:58:03.662906 ignition[1176]: fetch: fetch complete Jan 23 17:58:03.658715 unknown[1176]: fetched user config from "aws" Jan 23 17:58:03.662921 ignition[1176]: fetch: fetch passed Jan 23 17:58:03.663042 ignition[1176]: Ignition finished successfully Jan 23 17:58:03.680203 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:58:03.687806 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 17:58:03.751579 ignition[1182]: Ignition 2.22.0 Jan 23 17:58:03.752146 ignition[1182]: Stage: kargs Jan 23 17:58:03.752699 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:03.752722 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:03.752849 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:03.764045 ignition[1182]: PUT result: OK Jan 23 17:58:03.769643 ignition[1182]: kargs: kargs passed Jan 23 17:58:03.770012 ignition[1182]: Ignition finished successfully Jan 23 17:58:03.778333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:58:03.789357 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 17:58:03.857228 ignition[1188]: Ignition 2.22.0 Jan 23 17:58:03.857784 ignition[1188]: Stage: disks Jan 23 17:58:03.859583 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:03.859619 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:03.859794 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:03.871407 ignition[1188]: PUT result: OK Jan 23 17:58:03.880784 ignition[1188]: disks: disks passed Jan 23 17:58:03.884488 ignition[1188]: Ignition finished successfully Jan 23 17:58:03.891264 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:58:03.896950 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:58:03.897109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:58:03.898015 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:58:03.922247 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:58:03.926733 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:58:03.935773 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:58:04.009973 systemd-fsck[1196]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 17:58:04.017620 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:58:04.025488 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:58:04.166156 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none. Jan 23 17:58:04.168304 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:58:04.172786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:58:04.183478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:58:04.188268 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:58:04.193789 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 17:58:04.194041 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:58:04.194090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:58:04.224448 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:58:04.229880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 17:58:04.251215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1215) Jan 23 17:58:04.255724 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:58:04.255774 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:58:04.263579 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:58:04.263649 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:58:04.267236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 17:58:04.374269 systemd-networkd[1165]: eth0: Gained IPv6LL Jan 23 17:58:04.564658 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 17:58:04.589632 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Jan 23 17:58:04.607885 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 17:58:04.628764 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 17:58:04.965587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:58:04.974281 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:58:04.983600 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:58:05.009494 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:58:05.015236 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:58:05.046275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:58:05.069627 ignition[1328]: INFO : Ignition 2.22.0 Jan 23 17:58:05.069627 ignition[1328]: INFO : Stage: mount Jan 23 17:58:05.074385 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:05.074385 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:05.074385 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:05.074385 ignition[1328]: INFO : PUT result: OK Jan 23 17:58:05.091430 ignition[1328]: INFO : mount: mount passed Jan 23 17:58:05.093479 ignition[1328]: INFO : Ignition finished successfully Jan 23 17:58:05.098771 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:58:05.107291 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:58:05.171258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:58:05.223203 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1340) Jan 23 17:58:05.228501 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:58:05.228620 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:58:05.238269 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:58:05.238365 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:58:05.241653 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 17:58:05.294628 ignition[1357]: INFO : Ignition 2.22.0 Jan 23 17:58:05.294628 ignition[1357]: INFO : Stage: files Jan 23 17:58:05.299200 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:05.299200 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:05.299200 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:05.308095 ignition[1357]: INFO : PUT result: OK Jan 23 17:58:05.313582 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:58:05.316765 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:58:05.316765 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:58:05.338969 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:58:05.343263 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:58:05.343263 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:58:05.339904 unknown[1357]: wrote ssh authorized keys file for user: core Jan 23 17:58:05.352675 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:58:05.352675 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:58:05.782067 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:58:06.639870 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:58:06.645573 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:58:06.681497 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:58:06.681497 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:58:06.693166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:58:06.693166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:58:06.693166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:58:06.693166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:58:06.693166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 17:58:07.154697 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:58:07.549429 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:58:07.549429 ignition[1357]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:58:07.563614 ignition[1357]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:58:07.570027 ignition[1357]: INFO : files: files passed Jan 23 17:58:07.570027 ignition[1357]: INFO : Ignition finished successfully Jan 23 17:58:07.608416 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:58:07.613340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:58:07.621667 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:58:07.649731 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:58:07.650175 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:58:07.669201 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:58:07.673276 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:58:07.673276 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:58:07.686202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:58:07.693549 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:58:07.698265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 23 17:58:07.777026 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:58:07.777291 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:58:07.788557 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:58:07.791886 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:58:07.801258 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:58:07.802777 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:58:07.847075 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:58:07.855739 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:58:07.908521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:58:07.914300 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:58:07.917850 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:58:07.925370 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:58:07.925747 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:58:07.934286 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:58:07.937154 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:58:07.940405 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:58:07.949959 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:58:07.956007 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:58:07.961778 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:58:07.967785 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:58:07.971009 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:58:07.980011 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:58:07.983470 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:58:07.991176 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:58:07.993465 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:58:07.993720 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:58:08.000242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:58:08.007387 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:58:08.013307 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:58:08.015872 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:58:08.019341 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:58:08.019563 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:58:08.023026 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:58:08.023325 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:58:08.032758 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:58:08.032984 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 23 17:58:08.038014 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:58:08.072732 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:58:08.080723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:58:08.081190 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:58:08.090630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:58:08.091435 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:58:08.111312 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:58:08.113427 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:58:08.140664 ignition[1410]: INFO : Ignition 2.22.0 Jan 23 17:58:08.142983 ignition[1410]: INFO : Stage: umount Jan 23 17:58:08.142983 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:58:08.142983 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:58:08.142983 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:58:08.156719 ignition[1410]: INFO : PUT result: OK Jan 23 17:58:08.162440 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:58:08.165655 ignition[1410]: INFO : umount: umount passed Jan 23 17:58:08.165655 ignition[1410]: INFO : Ignition finished successfully Jan 23 17:58:08.173779 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:58:08.174032 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:58:08.178616 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:58:08.179246 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:58:08.186666 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:58:08.186850 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:58:08.193536 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:58:08.193634 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:58:08.196597 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:58:08.196690 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:58:08.204216 systemd[1]: Stopped target network.target - Network. Jan 23 17:58:08.208814 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:58:08.208918 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:58:08.212064 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:58:08.214366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:58:08.216670 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:58:08.220046 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:58:08.226872 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:58:08.229541 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:58:08.229612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:58:08.237033 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:58:08.237101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:58:08.240002 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:58:08.240096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 23 17:58:08.247785 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:58:08.247872 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:58:08.250858 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:58:08.250943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:58:08.260689 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:58:08.268003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:58:08.295524 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:58:08.295819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:58:08.331447 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:58:08.332181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:58:08.332268 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:58:08.349195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:58:08.356840 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:58:08.359590 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:58:08.367927 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:58:08.368467 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:58:08.376900 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:58:08.376975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:58:08.385108 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:58:08.390480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:58:08.390587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:58:08.395067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:58:08.395191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:58:08.413866 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:58:08.419311 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:58:08.428998 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:58:08.444068 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:58:08.454739 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:58:08.459528 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:58:08.467978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:58:08.468091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:58:08.475707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:58:08.475935 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:58:08.485248 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:58:08.485865 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:58:08.489570 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 23 17:58:08.489658 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:58:08.492747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:58:08.492847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:58:08.514952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:58:08.521667 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:58:08.521955 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:58:08.531366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:58:08.531475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:58:08.537611 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 17:58:08.537701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:58:08.547219 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:58:08.547305 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:58:08.559090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:58:08.559216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:58:08.569081 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:58:08.571774 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:58:08.588837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:58:08.589079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:58:08.592719 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:58:08.594263 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:58:08.638320 systemd[1]: Switching root. Jan 23 17:58:08.705432 systemd-journald[259]: Journal stopped Jan 23 17:58:11.434598 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). Jan 23 17:58:11.434706 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:58:11.434751 kernel: SELinux: policy capability open_perms=1 Jan 23 17:58:11.434788 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:58:11.434817 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:58:11.434852 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:58:11.434888 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:58:11.434916 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:58:11.434944 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:58:11.434973 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:58:11.435006 kernel: audit: type=1403 audit(1769191089.221:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:58:11.435044 systemd[1]: Successfully loaded SELinux policy in 132.008ms. Jan 23 17:58:11.435086 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.201ms. 
Jan 23 17:58:11.435178 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:58:11.435217 systemd[1]: Detected virtualization amazon. Jan 23 17:58:11.435250 systemd[1]: Detected architecture arm64. Jan 23 17:58:11.435278 systemd[1]: Detected first boot. Jan 23 17:58:11.437275 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:58:11.437308 zram_generator::config[1455]: No configuration found. Jan 23 17:58:11.437366 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:58:11.440517 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:58:11.440557 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 17:58:11.440599 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:58:11.440628 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:58:11.440659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:11.440690 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:58:11.440720 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:58:11.440750 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:58:11.440781 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:58:11.440809 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:58:11.440841 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:58:11.440873 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:58:11.440903 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:58:11.440931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:58:11.440983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:58:11.441018 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:58:11.441049 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:58:11.441080 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:58:11.441109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:58:11.442279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 17:58:11.442316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:58:11.442347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:58:11.442375 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:58:11.442407 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:58:11.442438 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:58:11.442466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jan 23 17:58:11.442495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:58:11.442527 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:58:11.442555 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:58:11.442585 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:58:11.442613 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:58:11.442641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:58:11.442670 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:58:11.442698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:58:11.442727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:58:11.442758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:58:11.442786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:58:11.442817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:58:11.442844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:58:11.442876 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:58:11.442907 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:58:11.442935 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:58:11.442963 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:58:11.442992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:58:11.443026 systemd[1]: Reached target machines.target - Containers. Jan 23 17:58:11.443058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:58:11.443086 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:11.445223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:58:11.445294 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:58:11.445325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:58:11.445380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:58:11.451274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:58:11.451313 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:58:11.451341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:58:11.451378 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:58:11.451410 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:58:11.451438 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:58:11.451466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:58:11.451496 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 23 17:58:11.451525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:11.451553 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:58:11.451581 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:58:11.451612 kernel: fuse: init (API version 7.41) Jan 23 17:58:11.451645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:58:11.451676 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:58:11.451704 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:58:11.451732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:58:11.451764 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 17:58:11.451792 kernel: loop: module loaded Jan 23 17:58:11.451818 systemd[1]: Stopped verity-setup.service. Jan 23 17:58:11.451845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:58:11.451875 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:58:11.451906 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:58:11.451939 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:58:11.451967 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:58:11.451995 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:58:11.452022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:58:11.452050 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:58:11.452077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:58:11.452105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:58:11.452157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:58:11.452187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:58:11.452220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:58:11.452249 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:58:11.452282 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:58:11.452313 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:58:11.452340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:58:11.452368 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:58:11.452395 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:58:11.452422 kernel: ACPI: bus type drm_connector registered Jan 23 17:58:11.452453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:58:11.452483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:58:11.452511 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:58:11.452539 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 23 17:58:11.452568 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:58:11.452596 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:58:11.452624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:11.452653 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:58:11.452683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:58:11.452715 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:58:11.452745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:58:11.452773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:58:11.452801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:58:11.452832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:58:11.452860 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:58:11.452943 systemd-journald[1538]: Collecting audit messages is disabled. Jan 23 17:58:11.452995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:58:11.453024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:58:11.453052 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:58:11.453079 systemd-journald[1538]: Journal started Jan 23 17:58:11.455199 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec28ba7182331ed79b2e1656be1e84c4) is 8M, max 75.3M, 67.3M free. Jan 23 17:58:11.460025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:58:10.639948 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:58:10.655894 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 17:58:10.656740 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:58:11.479289 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:58:11.475489 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:58:11.507034 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:58:11.523441 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:58:11.544977 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:58:11.550283 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:58:11.558374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:58:11.567482 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:58:11.606588 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec28ba7182331ed79b2e1656be1e84c4 is 46.617ms for 926 entries. Jan 23 17:58:11.606588 systemd-journald[1538]: System Journal (/var/log/journal/ec28ba7182331ed79b2e1656be1e84c4) is 8M, max 195.6M, 187.6M free. Jan 23 17:58:11.675764 systemd-journald[1538]: Received client request to flush runtime journal. 
Jan 23 17:58:11.675832 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 17:58:11.619218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:58:11.623079 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:58:11.661089 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:58:11.669778 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Jan 23 17:58:11.669804 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Jan 23 17:58:11.684139 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:58:11.691254 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:58:11.700494 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:58:11.750179 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:58:11.773169 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 17:58:11.800526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:58:11.838225 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:58:11.854450 kernel: loop2: detected capacity change from 0 to 119840 Jan 23 17:58:11.848379 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:58:11.892341 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Jan 23 17:58:11.892826 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Jan 23 17:58:11.900941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:58:11.980651 kernel: loop3: detected capacity change from 0 to 61264 Jan 23 17:58:12.024460 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 17:58:12.051178 kernel: loop5: detected capacity change from 0 to 211168 Jan 23 17:58:12.095188 kernel: loop6: detected capacity change from 0 to 119840 Jan 23 17:58:12.117243 kernel: loop7: detected capacity change from 0 to 61264 Jan 23 17:58:12.137073 (sd-merge)[1620]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 17:58:12.138322 (sd-merge)[1620]: Merged extensions into '/usr'. Jan 23 17:58:12.155004 systemd[1]: Reload requested from client PID 1570 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:58:12.155032 systemd[1]: Reloading... Jan 23 17:58:12.387172 zram_generator::config[1652]: No configuration found. Jan 23 17:58:12.934088 systemd[1]: Reloading finished in 778 ms. Jan 23 17:58:12.970418 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:58:13.278277 ldconfig[1563]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:58:13.487344 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:58:13.490929 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:58:13.508364 systemd[1]: Starting ensure-sysext.service... Jan 23 17:58:13.519452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:58:13.528741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:58:13.553310 systemd[1]: Reload requested from client PID 1699 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:58:13.553341 systemd[1]: Reloading... 
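Editor's note: the loop0 through loop7 capacity changes and the (sd-merge) lines above are systemd-sysext discovering the raw extension images, including the kubernetes.raw symlink written during the Ignition files stage, and merging them into /usr. A purely illustrative Python sketch of listing the standard discovery directories that systemd-sysext scans:

```python
from pathlib import Path

# Illustrative: systemd-sysext looks for *.raw images (or directories) in
# these locations and overlays their /usr and /opt trees onto the host.
# The kubernetes.raw symlink created by Ignition lands in /etc/extensions,
# which is what pulls in the kubernetes-v1.33.0 image seen in the merge above.
SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SYSEXT_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry}  ->  {target}")
```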
Jan 23 17:58:13.616579 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:58:13.616663 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:58:13.617274 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:58:13.617786 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 17:58:13.621703 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 17:58:13.623390 systemd-udevd[1701]: Using default interface naming scheme 'v255'. Jan 23 17:58:13.624165 systemd-tmpfiles[1700]: ACLs are not supported, ignoring. Jan 23 17:58:13.624301 systemd-tmpfiles[1700]: ACLs are not supported, ignoring. Jan 23 17:58:13.636497 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:58:13.636524 systemd-tmpfiles[1700]: Skipping /boot Jan 23 17:58:13.670337 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:58:13.670370 systemd-tmpfiles[1700]: Skipping /boot Jan 23 17:58:13.755162 zram_generator::config[1728]: No configuration found. Jan 23 17:58:14.003232 (udev-worker)[1740]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:58:14.334869 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 17:58:14.335747 systemd[1]: Reloading finished in 781 ms. Jan 23 17:58:14.357584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:58:14.381254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:58:14.416365 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:58:14.426510 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:58:14.437654 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:58:14.446093 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:58:14.456521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:58:14.465590 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:58:14.479907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:14.500746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:58:14.506654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:58:14.514340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:58:14.517924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:14.518212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:14.527008 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 23 17:58:14.535409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:14.535793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:14.535974 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:14.547187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:14.573686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:58:14.577696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:14.577941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:14.578309 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:58:14.594202 systemd[1]: Finished ensure-sysext.service. Jan 23 17:58:14.648587 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:58:14.650975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:58:14.662888 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:58:14.666797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:58:14.667250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:58:14.673383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:58:14.673791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:58:14.690316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:58:14.690475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:58:14.743252 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:58:14.754664 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:58:14.776569 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:58:14.778382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:58:14.802373 augenrules[1921]: No rules Jan 23 17:58:14.808381 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:58:14.809337 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:58:14.816155 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:58:14.825236 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:58:14.832245 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:58:14.927157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 17:58:15.128773 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:58:15.130538 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:58:15.136566 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:58:15.182215 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:58:15.273603 systemd-networkd[1840]: lo: Link UP Jan 23 17:58:15.274051 systemd-networkd[1840]: lo: Gained carrier Jan 23 17:58:15.277305 systemd-networkd[1840]: Enumeration completed Jan 23 17:58:15.277625 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:58:15.280146 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:58:15.281097 systemd-networkd[1840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:58:15.284192 systemd-networkd[1840]: eth0: Link UP Jan 23 17:58:15.284624 systemd-networkd[1840]: eth0: Gained carrier Jan 23 17:58:15.284750 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:58:15.287436 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:58:15.290647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:58:15.318336 systemd-resolved[1841]: Positive Trust Anchors: Jan 23 17:58:15.318374 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:58:15.318438 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:58:15.321217 systemd-networkd[1840]: eth0: DHCPv4 address 172.31.16.186/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:58:15.343374 systemd-resolved[1841]: Defaulting to hostname 'linux'. Jan 23 17:58:15.349491 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:58:15.349837 systemd[1]: Reached target network.target - Network. Jan 23 17:58:15.352267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:58:15.363336 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:58:15.377814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:58:15.381517 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:58:15.384883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:58:15.388471 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
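Editor's note: the DHCPv4 lease logged above puts eth0 at 172.31.16.186/20 with gateway 172.31.16.1. A quick check of what that prefix covers, using Python's ipaddress module (illustrative only):

```python
import ipaddress

# The lease reported by systemd-networkd: 172.31.16.186/20, gateway 172.31.16.1.
iface = ipaddress.ip_interface("172.31.16.186/20")
net = iface.network

print(net)                    # 172.31.16.0/20
print(net.netmask)            # 255.255.240.0
print(net.broadcast_address)  # 172.31.31.255
print(net.num_addresses)      # 4096
print(ipaddress.ip_address("172.31.16.1") in net)  # True: the gateway is on-link
```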
Jan 23 17:58:15.392670 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:58:15.396030 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:58:15.399504 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:58:15.402986 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:58:15.403280 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:58:15.405504 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:58:15.410052 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:58:15.415818 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:58:15.422763 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:58:15.428058 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:58:15.431587 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:58:15.444380 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:58:15.447754 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:58:15.452154 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:58:15.455578 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:58:15.458528 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:58:15.461228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:58:15.461290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:58:15.463506 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:58:15.472397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:58:15.481500 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:58:15.495058 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:58:15.503730 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:58:15.514433 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:58:15.519197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:58:15.529928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:58:15.540456 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:58:15.548961 jq[1986]: false Jan 23 17:58:15.549802 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:58:15.561614 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 17:58:15.572548 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:58:15.581722 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:58:15.593616 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:58:15.602536 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
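Editor's note: prepare-helm.service, described above as "Unpack helm to /opt/bin", operates on the /opt/helm-v3.17.3-linux-arm64.tar.gz archive that Ignition downloaded earlier; the tar[2006] lines further down list its linux-arm64/LICENSE and linux-arm64/helm members. The unit's actual commands are not shown in the log, so the following Python sketch is only a guess at the intent, not the unit's contents:

```python
import tarfile
from pathlib import Path

# Hypothetical equivalent of the "Unpack helm to /opt/bin" step.
archive = Path("/opt/helm-v3.17.3-linux-arm64.tar.gz")  # written by Ignition above
dest = Path("/opt/bin")
dest.mkdir(parents=True, exist_ok=True)

with tarfile.open(archive) as tar:
    # helm release tarballs nest the binary under linux-<arch>/, as the
    # tar[2006] log lines confirm for this image.
    member = tar.getmember("linux-arm64/helm")
    member.name = "helm"          # drop the leading directory on extraction
    tar.extract(member, path=dest)
```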
Jan 23 17:58:15.624107 extend-filesystems[1987]: Found /dev/nvme0n1p6 Jan 23 17:58:15.625904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:58:15.638516 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:58:15.648662 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:58:15.662333 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:58:15.666612 extend-filesystems[1987]: Found /dev/nvme0n1p9 Jan 23 17:58:15.670433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:58:15.672245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:58:15.698435 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:58:15.701222 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:58:15.702186 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Jan 23 17:58:15.782461 coreos-metadata[1983]: Jan 23 17:58:15.782 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:58:15.787055 coreos-metadata[1983]: Jan 23 17:58:15.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 17:58:15.788093 coreos-metadata[1983]: Jan 23 17:58:15.787 INFO Fetch successful Jan 23 17:58:15.788093 coreos-metadata[1983]: Jan 23 17:58:15.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 17:58:15.789558 coreos-metadata[1983]: Jan 23 17:58:15.789 INFO Fetch successful Jan 23 17:58:15.789866 coreos-metadata[1983]: Jan 23 17:58:15.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 17:58:15.792832 coreos-metadata[1983]: Jan 23 17:58:15.791 INFO Fetch successful Jan 23 17:58:15.792832 coreos-metadata[1983]: Jan 23 17:58:15.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 17:58:15.797424 jq[2002]: true Jan 23 17:58:15.797968 coreos-metadata[1983]: Jan 23 17:58:15.797 INFO Fetch successful Jan 23 17:58:15.797968 coreos-metadata[1983]: Jan 23 17:58:15.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 17:58:15.807362 coreos-metadata[1983]: Jan 23 17:58:15.805 INFO Fetch failed with 404: resource not found Jan 23 17:58:15.807362 coreos-metadata[1983]: Jan 23 17:58:15.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 17:58:15.807362 coreos-metadata[1983]: Jan 23 17:58:15.805 INFO Fetch successful Jan 23 17:58:15.807362 coreos-metadata[1983]: Jan 23 17:58:15.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 17:58:15.810447 coreos-metadata[1983]: Jan 23 17:58:15.810 INFO Fetch successful Jan 23 17:58:15.810447 coreos-metadata[1983]: Jan 23 17:58:15.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 17:58:15.815512 coreos-metadata[1983]: Jan 23 17:58:15.814 INFO Fetch successful Jan 23 17:58:15.815512 coreos-metadata[1983]: Jan 23 17:58:15.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 17:58:15.821654 coreos-metadata[1983]: Jan 23 17:58:15.819 INFO Fetch successful Jan 23 17:58:15.821654 coreos-metadata[1983]: Jan 23 17:58:15.819 
INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 17:58:15.829166 coreos-metadata[1983]: Jan 23 17:58:15.824 INFO Fetch successful Jan 23 17:58:15.831003 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Jan 23 17:58:15.840569 extend-filesystems[2034]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:58:15.860424 dbus-daemon[1984]: [system] SELinux support is enabled Jan 23 17:58:15.861932 tar[2006]: linux-arm64/LICENSE Jan 23 17:58:15.866387 tar[2006]: linux-arm64/helm Jan 23 17:58:15.863739 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:58:15.878823 (ntainerd)[2026]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:58:15.879463 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:58:15.879930 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:58:15.889231 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:58:15.889306 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:58:15.895381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:58:15.895431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:58:15.959351 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 17:58:15.959516 jq[2033]: true Jan 23 17:58:15.913757 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:58:15.917034 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1840 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 17:58:15.929686 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 17:58:15.966500 ntpd[1989]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: ---------------------------------------------------- Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: available at https://www.nwtime.org/support Jan 23 17:58:15.971015 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: ---------------------------------------------------- Jan 23 17:58:15.966620 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:15.966641 ntpd[1989]: ---------------------------------------------------- Jan 23 17:58:15.966658 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:15.966675 ntpd[1989]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:15.966691 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 23 17:58:15.966707 ntpd[1989]: available at https://www.nwtime.org/support Jan 23 17:58:15.966723 ntpd[1989]: ---------------------------------------------------- Jan 23 17:58:15.975256 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 23 17:58:15.975735 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 23 17:58:15.976879 ntpd[1989]: basedate set to 2026-01-11 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: basedate set to 2026-01-11 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Listen normally on 3 eth0 172.31.16.186:123 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: bind(21) AF_INET6 [fe80::4b9:92ff:fea1:542d%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:58:15.979024 ntpd[1989]: 23 Jan 17:58:15 ntpd[1989]: unable to create socket on eth0 (5) for [fe80::4b9:92ff:fea1:542d%2]:123 Jan 23 17:58:15.976921 ntpd[1989]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:15.978172 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:15.978265 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:15.978599 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:15.978648 ntpd[1989]: Listen normally on 3 eth0 172.31.16.186:123 Jan 23 17:58:15.978697 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:15.978745 ntpd[1989]: bind(21) AF_INET6 [fe80::4b9:92ff:fea1:542d%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:58:15.978783 ntpd[1989]: unable to create socket on eth0 (5) for [fe80::4b9:92ff:fea1:542d%2]:123 Jan 23 17:58:15.992415 systemd-coredump[2046]: Process 1989 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 17:58:16.002860 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 17:58:16.013417 systemd[1]: Started systemd-coredump@0-2046-0.service - Process Core Dump (PID 2046/UID 0). Jan 23 17:58:16.022287 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 17:58:16.103856 update_engine[2001]: I20260123 17:58:16.097996 2001 main.cc:92] Flatcar Update Engine starting Jan 23 17:58:16.127234 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:58:16.132983 update_engine[2001]: I20260123 17:58:16.132846 2001 update_check_scheduler.cc:74] Next update check in 7m44s Jan 23 17:58:16.142084 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:58:16.181107 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:58:16.187594 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
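Editor's note: both Ignition (earlier) and coreos-metadata (above) talk to the EC2 instance metadata service the same way: a PUT to the token endpoint, then GETs carrying the returned token (IMDSv2). A minimal sketch of that exchange in Python, assuming the standard 169.254.169.254 endpoint; the metadata paths mirror the ones fetched in the log:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT /latest/api/token to obtain a session token (IMDSv2), matching
# the "PUT http://169.254.169.254/latest/api/token" entries above.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(req, timeout=5).read().decode()

# Step 2: GET metadata with the token, e.g. the paths coreos-metadata fetched.
def fetch(path: str) -> str:
    r = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(r, timeout=5).read().decode()

for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
    print(path, "=", fetch(path))
```

The 404 on the ipv6 path in the log is the expected outcome when the instance has no IPv6 address assigned.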
Jan 23 17:58:16.213756 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 17:58:16.238656 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 17:58:16.238656 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 17:58:16.238656 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 17:58:16.267354 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Jan 23 17:58:16.270938 bash[2072]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:58:16.247831 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:58:16.248852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:58:16.277502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:58:16.290626 systemd[1]: Starting sshkeys.service... Jan 23 17:58:16.419971 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:58:16.431759 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:58:16.464958 systemd-logind[1999]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:58:16.465025 systemd-logind[1999]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 17:58:16.467586 systemd-logind[1999]: New seat seat0. Jan 23 17:58:16.472284 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:58:16.722070 containerd[2026]: time="2026-01-23T17:58:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:58:16.727160 containerd[2026]: time="2026-01-23T17:58:16.724330706Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:58:16.903437 containerd[2026]: time="2026-01-23T17:58:16.902665239Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.132µs" Jan 23 17:58:16.903437 containerd[2026]: time="2026-01-23T17:58:16.902725251Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:58:16.903437 containerd[2026]: time="2026-01-23T17:58:16.902770683Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:58:16.912575 containerd[2026]: time="2026-01-23T17:58:16.911549667Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:58:16.912575 containerd[2026]: time="2026-01-23T17:58:16.911634927Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:58:16.912575 containerd[2026]: time="2026-01-23T17:58:16.911702955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:58:16.912575 containerd[2026]: time="2026-01-23T17:58:16.911880843Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:58:16.912575 containerd[2026]: time="2026-01-23T17:58:16.911912355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
17:58:16.914515 systemd-coredump[2047]: Process 1989 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1989: #0 0x0000aaaacd880b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaacd82fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaacd830240 n/a (ntpd + 0x10240) #3 0x0000aaaacd82be14 n/a (ntpd + 0xbe14) #4 0x0000aaaacd82d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaacd835a38 n/a (ntpd + 0x15a38) #6 0x0000aaaacd82738c n/a (ntpd + 0x738c) #7 0x0000ffff81a72034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff81a72118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaacd8273f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 17:58:16.927223 containerd[2026]: time="2026-01-23T17:58:16.915262419Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:58:16.927223 containerd[2026]: time="2026-01-23T17:58:16.915316803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:58:16.927223 containerd[2026]: time="2026-01-23T17:58:16.915352047Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:58:16.927223 containerd[2026]: time="2026-01-23T17:58:16.915375519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:58:16.927223 containerd[2026]: time="2026-01-23T17:58:16.915716895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:58:16.921823 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 17:58:16.935589 containerd[2026]: time="2026-01-23T17:58:16.930353031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:58:16.935589 containerd[2026]: time="2026-01-23T17:58:16.930447447Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:58:16.935589 containerd[2026]: time="2026-01-23T17:58:16.930475323Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:58:16.935589 containerd[2026]: time="2026-01-23T17:58:16.930530583Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:58:16.935589 containerd[2026]: time="2026-01-23T17:58:16.931009779Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:58:16.922985 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 17:58:16.934402 systemd[1]: systemd-coredump@0-2046-0.service: Deactivated successfully. Jan 23 17:58:16.947548 containerd[2026]: time="2026-01-23T17:58:16.945925491Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:58:16.950969 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
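A few entries back, extend-filesystems grows the root filesystem on /dev/nvme0n1p9 online to 3587067 blocks of 4 KiB. As a hedged illustration, the sketch below recomputes the size of an already-mounted filesystem in 4 KiB blocks with os.statvfs; the mount point "/" matches the log, but treat this as a generic check rather than part of the Flatcar tooling.

    import os

    # Sketch: report the size of a mounted filesystem in 4 KiB blocks, to compare
    # against the "resized filesystem to 3587067" figure logged by EXT4 above.
    def size_in_4k_blocks(mountpoint: str = "/") -> int:
        st = os.statvfs(mountpoint)
        total_bytes = st.f_blocks * st.f_frsize  # f_blocks is counted in f_frsize units
        return total_bytes // 4096

    if __name__ == "__main__":
        print(f"/ is {size_in_4k_blocks('/')} blocks of 4 KiB")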
Jan 23 17:58:16.954716 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971623239Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971723535Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971762943Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971796627Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971826567Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971853267Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971886699Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971918259Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971965335Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.971994051Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.972019479Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.972056727Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.972352635Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:58:16.978652 containerd[2026]: time="2026-01-23T17:58:16.972422211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:58:16.973809 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972457659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972490515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972518367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972544635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972572871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972598743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972627435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972654291Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.972681447Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:58:16.979475 containerd[2026]: time="2026-01-23T17:58:16.973057395Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:58:16.992216 containerd[2026]: time="2026-01-23T17:58:16.984409851Z" level=info msg="Start snapshots syncer" Jan 23 17:58:16.992216 containerd[2026]: time="2026-01-23T17:58:16.984711567Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:58:16.992216 containerd[2026]: time="2026-01-23T17:58:16.988547895Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:58:16.992538 containerd[2026]: time="2026-01-23T17:58:16.988670271Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:58:16.992538 containerd[2026]: time="2026-01-23T17:58:16.988777815Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:58:16.992538 containerd[2026]: time="2026-01-23T17:58:16.989024271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:58:16.992538 containerd[2026]: time="2026-01-23T17:58:16.989079591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:58:16.996080 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.989107599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998476911Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998519355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998549751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998578311Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998653803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998685735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998715615Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998805615Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998843583Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998867019Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998892099Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:58:16.999743 containerd[2026]: time="2026-01-23T17:58:16.998916903Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:16.998944143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004494443Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004703219Z" level=info msg="runtime interface created" Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004726691Z" level=info msg="created NRI 
interface" Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004752587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004786607Z" level=info msg="Connect containerd service" Jan 23 17:58:17.006318 containerd[2026]: time="2026-01-23T17:58:17.004848587Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:58:17.039212 coreos-metadata[2084]: Jan 23 17:58:17.029 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:58:17.039212 coreos-metadata[2084]: Jan 23 17:58:17.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 17:58:17.039212 coreos-metadata[2084]: Jan 23 17:58:17.036 INFO Fetch successful Jan 23 17:58:17.039212 coreos-metadata[2084]: Jan 23 17:58:17.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:58:17.035890 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:17.043667 containerd[2026]: time="2026-01-23T17:58:17.029609676Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:58:17.043731 coreos-metadata[2084]: Jan 23 17:58:17.041 INFO Fetch successful Jan 23 17:58:17.044510 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:58:17.066853 unknown[2084]: wrote ssh authorized keys file for user: core Jan 23 17:58:17.112367 systemd-networkd[1840]: eth0: Gained IPv6LL Jan 23 17:58:17.126493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:58:17.133729 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:58:17.184441 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 17:58:17.198418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:17.209081 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:58:17.233503 update-ssh-keys[2184]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:58:17.256375 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:58:17.278318 systemd[1]: Finished sshkeys.service. Jan 23 17:58:17.430525 ntpd[2177]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: ---------------------------------------------------- Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: corporation. 
Support and training for ntp-4 are Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: available at https://www.nwtime.org/support Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: ---------------------------------------------------- Jan 23 17:58:17.432625 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: proto: precision = 0.096 usec (-23) Jan 23 17:58:17.430672 ntpd[2177]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:17.430691 ntpd[2177]: ---------------------------------------------------- Jan 23 17:58:17.430709 ntpd[2177]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:17.430725 ntpd[2177]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:17.430740 ntpd[2177]: corporation. Support and training for ntp-4 are Jan 23 17:58:17.430757 ntpd[2177]: available at https://www.nwtime.org/support Jan 23 17:58:17.430773 ntpd[2177]: ---------------------------------------------------- Jan 23 17:58:17.431932 ntpd[2177]: proto: precision = 0.096 usec (-23) Jan 23 17:58:17.443705 ntpd[2177]: basedate set to 2026-01-11 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: basedate set to 2026-01-11 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen normally on 3 eth0 172.31.16.186:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listen normally on 5 eth0 [fe80::4b9:92ff:fea1:542d%2]:123 Jan 23 17:58:17.444917 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: Listening on routing socket on fd #22 for interface updates Jan 23 17:58:17.443755 ntpd[2177]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:17.443901 ntpd[2177]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:17.443947 ntpd[2177]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:17.444306 ntpd[2177]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:17.444359 ntpd[2177]: Listen normally on 3 eth0 172.31.16.186:123 Jan 23 17:58:17.444406 ntpd[2177]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:17.444449 ntpd[2177]: Listen normally on 5 eth0 [fe80::4b9:92ff:fea1:542d%2]:123 Jan 23 17:58:17.444496 ntpd[2177]: Listening on routing socket on fd #22 for interface updates Jan 23 17:58:17.481365 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:17.484181 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:17.484181 ntpd[2177]: 23 Jan 17:58:17 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:17.481427 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:17.488280 locksmithd[2064]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:58:17.536080 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
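The coreos-metadata-sshkeys entries a few lines above follow the IMDSv2 flow: PUT a session token to 169.254.169.254, then GET the instance's public keys with that token. The sketch below mirrors those two requests with the standard library; the paths are taken from the log, while the TTL value and the lack of error handling are illustrative assumptions.

    import urllib.request

    # Sketch of the IMDSv2 flow coreos-metadata performs above: first obtain a
    # session token, then fetch the registered SSH public keys with it.
    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 300) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def public_keys(token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/public-keys",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        print(public_keys(imds_token()))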
Jan 23 17:58:17.702609 containerd[2026]: time="2026-01-23T17:58:17.702346179Z" level=info msg="Start subscribing containerd event" Jan 23 17:58:17.702609 containerd[2026]: time="2026-01-23T17:58:17.702528183Z" level=info msg="Start recovering state" Jan 23 17:58:17.702802 containerd[2026]: time="2026-01-23T17:58:17.702738147Z" level=info msg="Start event monitor" Jan 23 17:58:17.702865 containerd[2026]: time="2026-01-23T17:58:17.702802299Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:58:17.702865 containerd[2026]: time="2026-01-23T17:58:17.702830163Z" level=info msg="Start streaming server" Jan 23 17:58:17.702959 containerd[2026]: time="2026-01-23T17:58:17.702879939Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:58:17.702959 containerd[2026]: time="2026-01-23T17:58:17.702900831Z" level=info msg="runtime interface starting up..." Jan 23 17:58:17.702959 containerd[2026]: time="2026-01-23T17:58:17.702916251Z" level=info msg="starting plugins..." Jan 23 17:58:17.703098 containerd[2026]: time="2026-01-23T17:58:17.702979623Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:58:17.705314 containerd[2026]: time="2026-01-23T17:58:17.705253839Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:58:17.717246 containerd[2026]: time="2026-01-23T17:58:17.706697151Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:58:17.717246 containerd[2026]: time="2026-01-23T17:58:17.706858911Z" level=info msg="containerd successfully booted in 0.986014s" Jan 23 17:58:17.707007 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:58:17.745844 amazon-ssm-agent[2188]: Initializing new seelog logger Jan 23 17:58:17.749292 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Jan 23 17:58:17.749734 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.749868 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.752165 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 processing appconfig overrides Jan 23 17:58:17.757157 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.757157 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.757157 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 processing appconfig overrides Jan 23 17:58:17.757157 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.757157 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.757893 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 processing appconfig overrides Jan 23 17:58:17.758209 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7564 INFO Proxy environment variables: Jan 23 17:58:17.769158 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:17.769158 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
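The long "starting cri plugin" entry earlier dumps containerd's effective CRI configuration (runc with SystemdCgroup=true, CNI under /etc/cni/net.d, and so on), and just above containerd reports it booted successfully. The log also shows containerd reading /usr/share/containerd/config.toml; the sketch below simply loads that file with the stdlib tomllib and lists whatever [plugins] tables are present, assuming nothing about which keys exist beyond the path taken from the log.

    import tomllib  # Python 3.11+

    # Sketch: inspect the on-disk containerd config behind the effective CRI
    # settings dumped in the log above. The path comes from the containerd
    # entries earlier in this log; adjust if your config lives elsewhere
    # (e.g. /etc/containerd/config.toml).
    CONFIG = "/usr/share/containerd/config.toml"

    with open(CONFIG, "rb") as f:
        cfg = tomllib.load(f)

    print("config version:", cfg.get("version"))
    for name, table in cfg.get("plugins", {}).items():
        if isinstance(table, dict):
            print(f"[plugins.{name}] keys:", sorted(table))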
Jan 23 17:58:17.769158 amazon-ssm-agent[2188]: 2026/01/23 17:58:17 processing appconfig overrides Jan 23 17:58:17.818960 polkitd[2172]: Started polkitd version 126 Jan 23 17:58:17.859588 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7564 INFO https_proxy: Jan 23 17:58:17.866506 polkitd[2172]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 17:58:17.867252 polkitd[2172]: Loading rules from directory /run/polkit-1/rules.d Jan 23 17:58:17.867355 polkitd[2172]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:58:17.868066 polkitd[2172]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 17:58:17.874262 polkitd[2172]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:58:17.874390 polkitd[2172]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 17:58:17.877850 polkitd[2172]: Finished loading, compiling and executing 2 rules Jan 23 17:58:17.878453 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 17:58:17.889206 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 17:58:17.891724 polkitd[2172]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 17:58:17.955426 systemd-hostnamed[2042]: Hostname set to (transient) Jan 23 17:58:17.955868 systemd-resolved[1841]: System hostname changed to 'ip-172-31-16-186'. Jan 23 17:58:17.959370 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7564 INFO http_proxy: Jan 23 17:58:18.059900 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7564 INFO no_proxy: Jan 23 17:58:18.159159 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7567 INFO Checking if agent identity type OnPrem can be assumed Jan 23 17:58:18.257230 amazon-ssm-agent[2188]: 2026-01-23 17:58:17.7568 INFO Checking if agent identity type EC2 can be assumed Jan 23 17:58:18.356616 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0177 INFO Agent will take identity from EC2 Jan 23 17:58:18.457144 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0218 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 17:58:18.518161 tar[2006]: linux-arm64/README.md Jan 23 17:58:18.556022 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0218 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 17:58:18.557283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:58:18.655301 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0218 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 17:58:18.755614 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0218 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 17:58:18.856153 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0218 INFO [Registrar] Starting registrar module Jan 23 17:58:18.925663 sshd_keygen[2029]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:58:18.957156 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0289 INFO [EC2Identity] Checking disk for registration info Jan 23 17:58:18.977238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:58:18.987883 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:58:18.994666 systemd[1]: Started sshd@0-172.31.16.186:22-68.220.241.50:58658.service - OpenSSH per-connection server daemon (68.220.241.50:58658). Jan 23 17:58:19.035821 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 17:58:19.036355 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:58:19.050375 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:58:19.058360 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0289 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 17:58:19.097583 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:58:19.107712 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:58:19.117950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 17:58:19.121602 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:58:19.158208 amazon-ssm-agent[2188]: 2026-01-23 17:58:18.0289 INFO [EC2Identity] Generating registration keypair Jan 23 17:58:19.234948 amazon-ssm-agent[2188]: 2026/01/23 17:58:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:19.235207 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:19.235504 amazon-ssm-agent[2188]: 2026/01/23 17:58:19 processing appconfig overrides Jan 23 17:58:19.258590 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.1822 INFO [EC2Identity] Checking write access before registering Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.1830 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2345 INFO [EC2Identity] EC2 registration was successful. Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2345 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2347 INFO [CredentialRefresher] credentialRefresher has started Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2347 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2689 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 17:58:19.269609 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2693 INFO [CredentialRefresher] Credentials ready Jan 23 17:58:19.358633 amazon-ssm-agent[2188]: 2026-01-23 17:58:19.2695 INFO [CredentialRefresher] Next credential rotation will be in 29.999991283 minutes Jan 23 17:58:19.653212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:19.661734 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:58:19.662417 systemd[1]: Startup finished in 3.751s (kernel) + 10.467s (initrd) + 10.572s (userspace) = 24.791s. Jan 23 17:58:19.673099 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:19.688907 sshd[2251]: Accepted publickey for core from 68.220.241.50 port 58658 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:19.705659 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:19.733060 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:58:19.736191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:58:19.768922 systemd-logind[1999]: New session 1 of user core. Jan 23 17:58:19.791570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 23 17:58:19.798795 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:58:19.822607 (systemd)[2273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:58:19.829242 systemd-logind[1999]: New session c1 of user core. Jan 23 17:58:20.155372 systemd[2273]: Queued start job for default target default.target. Jan 23 17:58:20.167056 systemd[2273]: Created slice app.slice - User Application Slice. Jan 23 17:58:20.167351 systemd[2273]: Reached target paths.target - Paths. Jan 23 17:58:20.167454 systemd[2273]: Reached target timers.target - Timers. Jan 23 17:58:20.172337 systemd[2273]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:58:20.203745 systemd[2273]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:58:20.203889 systemd[2273]: Reached target sockets.target - Sockets. Jan 23 17:58:20.204009 systemd[2273]: Reached target basic.target - Basic System. Jan 23 17:58:20.204108 systemd[2273]: Reached target default.target - Main User Target. Jan 23 17:58:20.204234 systemd[2273]: Startup finished in 358ms. Jan 23 17:58:20.204265 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:58:20.225454 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:58:20.308580 amazon-ssm-agent[2188]: 2026-01-23 17:58:20.3079 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 17:58:20.409700 amazon-ssm-agent[2188]: 2026-01-23 17:58:20.3162 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2289) started Jan 23 17:58:20.574831 amazon-ssm-agent[2188]: 2026-01-23 17:58:20.3163 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 17:58:20.682632 systemd[1]: Started sshd@1-172.31.16.186:22-68.220.241.50:58664.service - OpenSSH per-connection server daemon (68.220.241.50:58664). Jan 23 17:58:20.865986 kubelet[2266]: E0123 17:58:20.865822 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:20.871930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:20.872398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:20.875362 systemd[1]: kubelet.service: Consumed 1.580s CPU time, 258.6M memory peak. Jan 23 17:58:21.296866 sshd[2297]: Accepted publickey for core from 68.220.241.50 port 58664 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:21.299266 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:21.309360 systemd-logind[1999]: New session 2 of user core. Jan 23 17:58:21.318519 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:58:21.676954 sshd[2307]: Connection closed by 68.220.241.50 port 58664 Jan 23 17:58:21.675951 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:21.682818 systemd[1]: sshd@1-172.31.16.186:22-68.220.241.50:58664.service: Deactivated successfully. Jan 23 17:58:21.685828 systemd[1]: session-2.scope: Deactivated successfully. 
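The kubelet failure above (repeated on the later restart attempts in this log) has a single cause: /var/lib/kubelet/config.yaml does not exist yet, so the kubelet exits with status 1 and systemd schedules restarts. The snippet below is only a diagnostic sketch reproducing the check the kubelet is failing on; the remark about kubeadm writing that file is an assumption about a typical workflow, not something stated in the log.

    from pathlib import Path

    # Sketch: the same precondition the kubelet error above is complaining about.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        # Matches the "no such file or directory" failure in the log; on a
        # kubeadm-managed node this file appears after `kubeadm init`/`join`.
        print(f"{KUBELET_CONFIG} missing - kubelet will exit until the node is configured")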
Jan 23 17:58:21.689809 systemd-logind[1999]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:58:21.692017 systemd-logind[1999]: Removed session 2. Jan 23 17:58:21.760162 systemd[1]: Started sshd@2-172.31.16.186:22-68.220.241.50:58676.service - OpenSSH per-connection server daemon (68.220.241.50:58676). Jan 23 17:58:22.273751 sshd[2313]: Accepted publickey for core from 68.220.241.50 port 58676 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:22.277237 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:22.285783 systemd-logind[1999]: New session 3 of user core. Jan 23 17:58:22.299693 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:58:22.620520 sshd[2316]: Connection closed by 68.220.241.50 port 58676 Jan 23 17:58:22.621605 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:22.628655 systemd[1]: sshd@2-172.31.16.186:22-68.220.241.50:58676.service: Deactivated successfully. Jan 23 17:58:22.633010 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:58:22.635294 systemd-logind[1999]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:58:22.637909 systemd-logind[1999]: Removed session 3. Jan 23 17:58:22.714775 systemd[1]: Started sshd@3-172.31.16.186:22-68.220.241.50:51024.service - OpenSSH per-connection server daemon (68.220.241.50:51024). Jan 23 17:58:23.236213 sshd[2322]: Accepted publickey for core from 68.220.241.50 port 51024 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:23.238384 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:23.248215 systemd-logind[1999]: New session 4 of user core. Jan 23 17:58:23.254419 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:58:23.589214 sshd[2325]: Connection closed by 68.220.241.50 port 51024 Jan 23 17:58:23.588926 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:23.599878 systemd-logind[1999]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:58:23.600142 systemd[1]: sshd@3-172.31.16.186:22-68.220.241.50:51024.service: Deactivated successfully. Jan 23 17:58:23.606245 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:58:23.609988 systemd-logind[1999]: Removed session 4. Jan 23 17:58:23.690849 systemd[1]: Started sshd@4-172.31.16.186:22-68.220.241.50:51032.service - OpenSSH per-connection server daemon (68.220.241.50:51032). Jan 23 17:58:24.219382 sshd[2331]: Accepted publickey for core from 68.220.241.50 port 51032 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:24.221903 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:24.230070 systemd-logind[1999]: New session 5 of user core. Jan 23 17:58:24.242474 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:58:24.515508 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:58:24.516251 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:58:25.055520 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:58:25.085699 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:58:25.475185 dockerd[2352]: time="2026-01-23T17:58:25.472640469Z" level=info msg="Starting up" Jan 23 17:58:25.477168 dockerd[2352]: time="2026-01-23T17:58:25.476761108Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:58:25.499635 dockerd[2352]: time="2026-01-23T17:58:25.499578776Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:58:25.588468 systemd[1]: var-lib-docker-metacopy\x2dcheck3540484054-merged.mount: Deactivated successfully. Jan 23 17:58:25.603334 dockerd[2352]: time="2026-01-23T17:58:25.602897478Z" level=info msg="Loading containers: start." Jan 23 17:58:25.618165 kernel: Initializing XFRM netlink socket Jan 23 17:58:25.954091 (udev-worker)[2374]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:58:26.035659 systemd-networkd[1840]: docker0: Link UP Jan 23 17:58:26.042238 dockerd[2352]: time="2026-01-23T17:58:26.042189157Z" level=info msg="Loading containers: done." Jan 23 17:58:26.068524 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck466832122-merged.mount: Deactivated successfully. Jan 23 17:58:26.073175 dockerd[2352]: time="2026-01-23T17:58:26.072968447Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:58:26.073175 dockerd[2352]: time="2026-01-23T17:58:26.073081183Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:58:26.073450 dockerd[2352]: time="2026-01-23T17:58:26.073272727Z" level=info msg="Initializing buildkit" Jan 23 17:58:26.120980 dockerd[2352]: time="2026-01-23T17:58:26.120902474Z" level=info msg="Completed buildkit initialization" Jan 23 17:58:26.138587 dockerd[2352]: time="2026-01-23T17:58:26.138496115Z" level=info msg="Daemon has completed initialization" Jan 23 17:58:26.138956 dockerd[2352]: time="2026-01-23T17:58:26.138769372Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:58:26.140554 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:58:27.375148 containerd[2026]: time="2026-01-23T17:58:27.375031522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 17:58:27.933049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620687936.mount: Deactivated successfully. 
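Once dockerd reports "API listen on /run/docker.sock" above, the daemon can be queried over that UNIX socket with plain HTTP. The sketch below speaks minimal HTTP/1.1 over the socket using only the standard library and asks for /version; the socket path is taken from the log, while the request framing (Connection: close, no API version prefix) is an illustrative choice, not Flatcar tooling.

    import json
    import socket

    # Sketch: query the Docker Engine API over the UNIX socket announced in the
    # log above ("API listen on /run/docker.sock").
    SOCKET_PATH = "/run/docker.sock"

    def docker_version() -> dict:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCKET_PATH)
            s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        headers, _, body = raw.partition(b"\r\n\r\n")
        # Assumes a non-chunked response body; good enough for a quick illustration.
        return json.loads(body) if b"chunked" not in headers.lower() else {}

    if __name__ == "__main__":
        info = docker_version()
        print(info.get("Version"), info.get("ApiVersion"))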
Jan 23 17:58:29.318526 containerd[2026]: time="2026-01-23T17:58:29.318469709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:29.320734 containerd[2026]: time="2026-01-23T17:58:29.320669221Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 17:58:29.323871 containerd[2026]: time="2026-01-23T17:58:29.323099055Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:29.339629 containerd[2026]: time="2026-01-23T17:58:29.339575801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:29.343599 containerd[2026]: time="2026-01-23T17:58:29.343547878Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.96845964s" Jan 23 17:58:29.343776 containerd[2026]: time="2026-01-23T17:58:29.343748643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 17:58:29.346894 containerd[2026]: time="2026-01-23T17:58:29.346838651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 17:58:30.862152 containerd[2026]: time="2026-01-23T17:58:30.861307197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:30.863430 containerd[2026]: time="2026-01-23T17:58:30.863356670Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 17:58:30.864970 containerd[2026]: time="2026-01-23T17:58:30.864926886Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:30.869518 containerd[2026]: time="2026-01-23T17:58:30.869445921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:30.872727 containerd[2026]: time="2026-01-23T17:58:30.871986198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.524718369s" Jan 23 17:58:30.872727 containerd[2026]: time="2026-01-23T17:58:30.872059135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 17:58:30.873380 
containerd[2026]: time="2026-01-23T17:58:30.873328877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 17:58:31.124175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:31.128070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:31.542495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:31.565982 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:31.680897 kubelet[2633]: E0123 17:58:31.680811 2633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:31.692200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:31.692878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:31.694179 systemd[1]: kubelet.service: Consumed 355ms CPU time, 105.4M memory peak. Jan 23 17:58:32.190159 containerd[2026]: time="2026-01-23T17:58:32.190066574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:32.193355 containerd[2026]: time="2026-01-23T17:58:32.193287592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 17:58:32.194980 containerd[2026]: time="2026-01-23T17:58:32.194895243Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:32.199690 containerd[2026]: time="2026-01-23T17:58:32.199633003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:32.206283 containerd[2026]: time="2026-01-23T17:58:32.206199037Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.332660403s" Jan 23 17:58:32.206436 containerd[2026]: time="2026-01-23T17:58:32.206287749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 17:58:32.209252 containerd[2026]: time="2026-01-23T17:58:32.208776821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 17:58:33.475472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725564675.mount: Deactivated successfully. 
Jan 23 17:58:34.091871 containerd[2026]: time="2026-01-23T17:58:34.091359967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:34.092793 containerd[2026]: time="2026-01-23T17:58:34.092729251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 17:58:34.094560 containerd[2026]: time="2026-01-23T17:58:34.094483304Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:34.099038 containerd[2026]: time="2026-01-23T17:58:34.098959345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:34.100563 containerd[2026]: time="2026-01-23T17:58:34.100505177Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.891661639s" Jan 23 17:58:34.100764 containerd[2026]: time="2026-01-23T17:58:34.100729401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 17:58:34.101765 containerd[2026]: time="2026-01-23T17:58:34.101703652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 17:58:34.627813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457322800.mount: Deactivated successfully. 
Jan 23 17:58:35.815071 containerd[2026]: time="2026-01-23T17:58:35.815001477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:35.816882 containerd[2026]: time="2026-01-23T17:58:35.816828538Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 23 17:58:35.817910 containerd[2026]: time="2026-01-23T17:58:35.817854775Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:35.823855 containerd[2026]: time="2026-01-23T17:58:35.823757813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:35.827261 containerd[2026]: time="2026-01-23T17:58:35.826106955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.724334833s" Jan 23 17:58:35.827261 containerd[2026]: time="2026-01-23T17:58:35.826199930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 17:58:35.827938 containerd[2026]: time="2026-01-23T17:58:35.827869868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:58:36.293923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413920959.mount: Deactivated successfully. 
Jan 23 17:58:36.303220 containerd[2026]: time="2026-01-23T17:58:36.302254605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:36.304920 containerd[2026]: time="2026-01-23T17:58:36.304856209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 17:58:36.306659 containerd[2026]: time="2026-01-23T17:58:36.306563919Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:36.315153 containerd[2026]: time="2026-01-23T17:58:36.314179469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:36.315568 containerd[2026]: time="2026-01-23T17:58:36.315527851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 486.023293ms" Jan 23 17:58:36.315696 containerd[2026]: time="2026-01-23T17:58:36.315670578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:58:36.316489 containerd[2026]: time="2026-01-23T17:58:36.316427388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 17:58:36.847078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126275835.mount: Deactivated successfully. 
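The containerd entries above record each image pull with its digest, size, and wall-clock duration ("in 1.96845964s", "in 486.023293ms", and so on). As a small, purely illustrative helper, the sketch below extracts those durations from saved journal text with a regex so pull times can be compared side by side; the regex and the idea of feeding it a saved log file are assumptions for the example, not part of the system being booted.

    import re
    import sys

    # Sketch: summarize containerd image-pull durations from saved journal output
    # like the lines above, e.g. `journalctl -u containerd > containerd.log`.
    PULL = re.compile(r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*? in (?P<dur>[\d.]+)(?P<unit>ms|s)')

    def durations(text: str) -> dict[str, float]:
        out = {}
        for m in PULL.finditer(text):
            seconds = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
            out[m["image"]] = seconds
        return out

    if __name__ == "__main__":
        log_text = open(sys.argv[1]).read() if len(sys.argv) > 1 else sys.stdin.read()
        for image, secs in sorted(durations(log_text).items(), key=lambda kv: kv[1]):
            print(f"{secs:8.3f}s  {image}")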
Jan 23 17:58:39.155176 containerd[2026]: time="2026-01-23T17:58:39.154555571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:39.157521 containerd[2026]: time="2026-01-23T17:58:39.157463592Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 23 17:58:39.158470 containerd[2026]: time="2026-01-23T17:58:39.158417301Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:39.165197 containerd[2026]: time="2026-01-23T17:58:39.164291224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:58:39.167424 containerd[2026]: time="2026-01-23T17:58:39.167094589Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.850607375s" Jan 23 17:58:39.167424 containerd[2026]: time="2026-01-23T17:58:39.167212476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 17:58:41.942836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:58:41.949485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:42.283363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:42.297907 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:42.370094 kubelet[2792]: E0123 17:58:42.370002 2792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:42.376506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:42.376967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:42.378018 systemd[1]: kubelet.service: Consumed 289ms CPU time, 104.7M memory peak. Jan 23 17:58:47.991681 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 17:58:49.695365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:49.696338 systemd[1]: kubelet.service: Consumed 289ms CPU time, 104.7M memory peak. Jan 23 17:58:49.703292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:49.765719 systemd[1]: Reload requested from client PID 2809 ('systemctl') (unit session-5.scope)... Jan 23 17:58:49.765750 systemd[1]: Reloading... Jan 23 17:58:49.982173 zram_generator::config[2853]: No configuration found. Jan 23 17:58:50.472666 systemd[1]: Reloading finished in 706 ms. 
Jan 23 17:58:50.565798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:58:50.566205 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:58:50.566906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:50.567146 systemd[1]: kubelet.service: Consumed 224ms CPU time, 95M memory peak. Jan 23 17:58:50.570725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:51.210020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:51.225666 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:58:51.293128 kubelet[2917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:58:51.293593 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:58:51.293679 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:58:51.293899 kubelet[2917]: I0123 17:58:51.293848 2917 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:58:53.041146 kubelet[2917]: I0123 17:58:53.040352 2917 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:58:53.041146 kubelet[2917]: I0123 17:58:53.040402 2917 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:58:53.041146 kubelet[2917]: I0123 17:58:53.040782 2917 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:58:53.099521 kubelet[2917]: I0123 17:58:53.099471 2917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:58:53.099788 kubelet[2917]: E0123 17:58:53.099721 2917 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:58:53.116275 kubelet[2917]: I0123 17:58:53.116228 2917 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:58:53.122784 kubelet[2917]: I0123 17:58:53.122704 2917 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:58:53.123677 kubelet[2917]: I0123 17:58:53.123562 2917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:58:53.123961 kubelet[2917]: I0123 17:58:53.123653 2917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-186","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:58:53.124233 kubelet[2917]: I0123 17:58:53.124157 2917 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:58:53.124233 kubelet[2917]: I0123 17:58:53.124191 2917 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:58:53.126069 kubelet[2917]: I0123 17:58:53.125969 2917 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:53.133141 kubelet[2917]: I0123 17:58:53.132867 2917 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:58:53.133284 kubelet[2917]: I0123 17:58:53.133151 2917 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:58:53.133284 kubelet[2917]: I0123 17:58:53.133199 2917 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:58:53.140146 kubelet[2917]: I0123 17:58:53.140012 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:58:53.142442 kubelet[2917]: E0123 17:58:53.142187 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-186&limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:58:53.142602 kubelet[2917]: E0123 17:58:53.142553 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 23 17:58:53.144787 kubelet[2917]: I0123 17:58:53.144716 2917 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:58:53.146203 kubelet[2917]: I0123 17:58:53.146099 2917 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:58:53.146441 kubelet[2917]: W0123 17:58:53.146409 2917 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:58:53.153171 kubelet[2917]: I0123 17:58:53.152797 2917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:58:53.153171 kubelet[2917]: I0123 17:58:53.152895 2917 server.go:1289] "Started kubelet" Jan 23 17:58:53.158183 kubelet[2917]: I0123 17:58:53.158098 2917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:58:53.161644 kubelet[2917]: I0123 17:58:53.161595 2917 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:58:53.165095 kubelet[2917]: I0123 17:58:53.163427 2917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:58:53.165095 kubelet[2917]: I0123 17:58:53.164355 2917 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:58:53.174279 kubelet[2917]: E0123 17:58:53.171871 2917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.186:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.186:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-186.188d6df6475890c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-186,UID:ip-172-31-16-186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-186,},FirstTimestamp:2026-01-23 17:58:53.152841922 +0000 UTC m=+1.919950514,LastTimestamp:2026-01-23 17:58:53.152841922 +0000 UTC m=+1.919950514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-186,}" Jan 23 17:58:53.174553 kubelet[2917]: I0123 17:58:53.174530 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:58:53.176969 kubelet[2917]: E0123 17:58:53.176905 2917 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:58:53.177267 kubelet[2917]: I0123 17:58:53.177223 2917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:58:53.181357 kubelet[2917]: E0123 17:58:53.181287 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-186\" not found" Jan 23 17:58:53.181357 kubelet[2917]: I0123 17:58:53.181376 2917 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:58:53.182037 kubelet[2917]: I0123 17:58:53.181980 2917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:58:53.182255 kubelet[2917]: I0123 17:58:53.182141 2917 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:58:53.182983 kubelet[2917]: E0123 17:58:53.182903 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:58:53.184801 kubelet[2917]: E0123 17:58:53.184687 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-186?timeout=10s\": dial tcp 172.31.16.186:6443: connect: connection refused" interval="200ms" Jan 23 17:58:53.186560 kubelet[2917]: I0123 17:58:53.186473 2917 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:58:53.186864 kubelet[2917]: I0123 17:58:53.186690 2917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:58:53.189412 kubelet[2917]: I0123 17:58:53.189244 2917 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:58:53.222199 kubelet[2917]: I0123 17:58:53.222103 2917 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:58:53.222361 kubelet[2917]: I0123 17:58:53.222229 2917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:58:53.222361 kubelet[2917]: I0123 17:58:53.222267 2917 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:53.229063 kubelet[2917]: I0123 17:58:53.228980 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:58:53.232269 kubelet[2917]: I0123 17:58:53.231275 2917 policy_none.go:49] "None policy: Start" Jan 23 17:58:53.232269 kubelet[2917]: I0123 17:58:53.231323 2917 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:58:53.232269 kubelet[2917]: I0123 17:58:53.231347 2917 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:58:53.234274 kubelet[2917]: I0123 17:58:53.234198 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:58:53.234274 kubelet[2917]: I0123 17:58:53.234262 2917 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:58:53.234472 kubelet[2917]: I0123 17:58:53.234303 2917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:58:53.234472 kubelet[2917]: I0123 17:58:53.234327 2917 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:58:53.234472 kubelet[2917]: E0123 17:58:53.234403 2917 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:58:53.237586 kubelet[2917]: E0123 17:58:53.237515 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:58:53.250931 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:58:53.274855 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:58:53.282768 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:58:53.284724 kubelet[2917]: E0123 17:58:53.284275 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-186\" not found" Jan 23 17:58:53.299009 kubelet[2917]: E0123 17:58:53.298828 2917 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:58:53.300048 kubelet[2917]: I0123 17:58:53.299880 2917 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:58:53.302459 kubelet[2917]: I0123 17:58:53.300303 2917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:58:53.303218 kubelet[2917]: I0123 17:58:53.303162 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:58:53.305943 kubelet[2917]: E0123 17:58:53.305902 2917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:58:53.306230 kubelet[2917]: E0123 17:58:53.306202 2917 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-186\" not found" Jan 23 17:58:53.358078 systemd[1]: Created slice kubepods-burstable-poda3eb0277bd56ce0a130e88da83e13bad.slice - libcontainer container kubepods-burstable-poda3eb0277bd56ce0a130e88da83e13bad.slice. Jan 23 17:58:53.373303 kubelet[2917]: E0123 17:58:53.373260 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:53.382180 systemd[1]: Created slice kubepods-burstable-pod83599c17b72526028036287b78a4052a.slice - libcontainer container kubepods-burstable-pod83599c17b72526028036287b78a4052a.slice. 
Jan 23 17:58:53.385897 kubelet[2917]: I0123 17:58:53.385849 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab9261b9f28f3bcaf69b59e7054759cb-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-186\" (UID: \"ab9261b9f28f3bcaf69b59e7054759cb\") " pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:58:53.387329 kubelet[2917]: I0123 17:58:53.387266 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-ca-certs\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:58:53.387572 kubelet[2917]: I0123 17:58:53.387538 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:58:53.387937 kubelet[2917]: I0123 17:58:53.387895 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:53.388243 kubelet[2917]: I0123 17:58:53.388199 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:53.388779 kubelet[2917]: I0123 17:58:53.388476 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:58:53.388779 kubelet[2917]: I0123 17:58:53.388525 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:53.388779 kubelet[2917]: I0123 17:58:53.388562 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:53.388779 kubelet[2917]: I0123 17:58:53.388604 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:53.388779 kubelet[2917]: E0123 17:58:53.388218 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-186?timeout=10s\": dial tcp 172.31.16.186:6443: connect: connection refused" interval="400ms" Jan 23 17:58:53.389999 kubelet[2917]: E0123 17:58:53.389924 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:53.403570 systemd[1]: Created slice kubepods-burstable-podab9261b9f28f3bcaf69b59e7054759cb.slice - libcontainer container kubepods-burstable-podab9261b9f28f3bcaf69b59e7054759cb.slice. Jan 23 17:58:53.408380 kubelet[2917]: I0123 17:58:53.408331 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-186" Jan 23 17:58:53.409734 kubelet[2917]: E0123 17:58:53.409684 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:53.410613 kubelet[2917]: E0123 17:58:53.410452 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.186:6443/api/v1/nodes\": dial tcp 172.31.16.186:6443: connect: connection refused" node="ip-172-31-16-186" Jan 23 17:58:53.615027 kubelet[2917]: I0123 17:58:53.613545 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-186" Jan 23 17:58:53.615027 kubelet[2917]: E0123 17:58:53.613978 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.186:6443/api/v1/nodes\": dial tcp 172.31.16.186:6443: connect: connection refused" node="ip-172-31-16-186" Jan 23 17:58:53.675914 containerd[2026]: time="2026-01-23T17:58:53.675780976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-186,Uid:a3eb0277bd56ce0a130e88da83e13bad,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:53.692182 containerd[2026]: time="2026-01-23T17:58:53.691879462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-186,Uid:83599c17b72526028036287b78a4052a,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:53.713809 containerd[2026]: time="2026-01-23T17:58:53.713710368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-186,Uid:ab9261b9f28f3bcaf69b59e7054759cb,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:53.729732 containerd[2026]: time="2026-01-23T17:58:53.729678672Z" level=info msg="connecting to shim 5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d" address="unix:///run/containerd/s/bd4a92f82502d5bfde7fec3d77c760b9209c0803ad240904db00447f15f0fb2e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:53.774723 containerd[2026]: time="2026-01-23T17:58:53.773599898Z" level=info msg="connecting to shim 02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c" address="unix:///run/containerd/s/3eed601f7aa901e3c8757d3ee26d38bf4ce733cc921fd17236055694e0a58f78" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:53.789167 containerd[2026]: time="2026-01-23T17:58:53.789083004Z" level=info msg="connecting 
to shim 25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5" address="unix:///run/containerd/s/5dfe1d4a419568627485f1466c651c8183f66ad26440861291d4deebaa68e9ff" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:53.790497 kubelet[2917]: E0123 17:58:53.790439 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-186?timeout=10s\": dial tcp 172.31.16.186:6443: connect: connection refused" interval="800ms" Jan 23 17:58:53.846566 systemd[1]: Started cri-containerd-5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d.scope - libcontainer container 5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d. Jan 23 17:58:53.886429 systemd[1]: Started cri-containerd-02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c.scope - libcontainer container 02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c. Jan 23 17:58:53.889956 systemd[1]: Started cri-containerd-25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5.scope - libcontainer container 25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5. Jan 23 17:58:54.016516 containerd[2026]: time="2026-01-23T17:58:54.016308741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-186,Uid:83599c17b72526028036287b78a4052a,Namespace:kube-system,Attempt:0,} returns sandbox id \"02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c\"" Jan 23 17:58:54.017202 kubelet[2917]: I0123 17:58:54.017167 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-186" Jan 23 17:58:54.018065 kubelet[2917]: E0123 17:58:54.018013 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.186:6443/api/v1/nodes\": dial tcp 172.31.16.186:6443: connect: connection refused" node="ip-172-31-16-186" Jan 23 17:58:54.030885 containerd[2026]: time="2026-01-23T17:58:54.030799467Z" level=info msg="CreateContainer within sandbox \"02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:58:54.034167 containerd[2026]: time="2026-01-23T17:58:54.034004025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-186,Uid:a3eb0277bd56ce0a130e88da83e13bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d\"" Jan 23 17:58:54.047070 containerd[2026]: time="2026-01-23T17:58:54.047012418Z" level=info msg="CreateContainer within sandbox \"5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:58:54.053430 containerd[2026]: time="2026-01-23T17:58:54.053330647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-186,Uid:ab9261b9f28f3bcaf69b59e7054759cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5\"" Jan 23 17:58:54.063241 containerd[2026]: time="2026-01-23T17:58:54.063101610Z" level=info msg="Container 399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:54.072096 containerd[2026]: time="2026-01-23T17:58:54.071786787Z" level=info msg="CreateContainer within sandbox 
\"25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:58:54.082678 containerd[2026]: time="2026-01-23T17:58:54.082624255Z" level=info msg="CreateContainer within sandbox \"02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c\"" Jan 23 17:58:54.085022 containerd[2026]: time="2026-01-23T17:58:54.084974874Z" level=info msg="StartContainer for \"399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c\"" Jan 23 17:58:54.086795 containerd[2026]: time="2026-01-23T17:58:54.085767582Z" level=info msg="Container 27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:54.087980 containerd[2026]: time="2026-01-23T17:58:54.087932444Z" level=info msg="connecting to shim 399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c" address="unix:///run/containerd/s/3eed601f7aa901e3c8757d3ee26d38bf4ce733cc921fd17236055694e0a58f78" protocol=ttrpc version=3 Jan 23 17:58:54.106139 containerd[2026]: time="2026-01-23T17:58:54.105945310Z" level=info msg="Container ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:54.111204 containerd[2026]: time="2026-01-23T17:58:54.111106497Z" level=info msg="CreateContainer within sandbox \"5c05cc12ee761d24c54f6fb0e10ddd4034c2a7082c12469a13fce82c490c747d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201\"" Jan 23 17:58:54.113056 containerd[2026]: time="2026-01-23T17:58:54.112937208Z" level=info msg="StartContainer for \"27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201\"" Jan 23 17:58:54.117088 containerd[2026]: time="2026-01-23T17:58:54.116996605Z" level=info msg="connecting to shim 27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201" address="unix:///run/containerd/s/bd4a92f82502d5bfde7fec3d77c760b9209c0803ad240904db00447f15f0fb2e" protocol=ttrpc version=3 Jan 23 17:58:54.125428 containerd[2026]: time="2026-01-23T17:58:54.125321205Z" level=info msg="CreateContainer within sandbox \"25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d\"" Jan 23 17:58:54.126774 containerd[2026]: time="2026-01-23T17:58:54.126720901Z" level=info msg="StartContainer for \"ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d\"" Jan 23 17:58:54.128635 systemd[1]: Started cri-containerd-399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c.scope - libcontainer container 399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c. Jan 23 17:58:54.131526 containerd[2026]: time="2026-01-23T17:58:54.130748722Z" level=info msg="connecting to shim ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d" address="unix:///run/containerd/s/5dfe1d4a419568627485f1466c651c8183f66ad26440861291d4deebaa68e9ff" protocol=ttrpc version=3 Jan 23 17:58:54.181422 systemd[1]: Started cri-containerd-27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201.scope - libcontainer container 27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201. 
Jan 23 17:58:54.198665 systemd[1]: Started cri-containerd-ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d.scope - libcontainer container ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d. Jan 23 17:58:54.334905 containerd[2026]: time="2026-01-23T17:58:54.334698823Z" level=info msg="StartContainer for \"399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c\" returns successfully" Jan 23 17:58:54.375546 containerd[2026]: time="2026-01-23T17:58:54.375487023Z" level=info msg="StartContainer for \"ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d\" returns successfully" Jan 23 17:58:54.375957 containerd[2026]: time="2026-01-23T17:58:54.375876810Z" level=info msg="StartContainer for \"27ccbecfea31212b6753d4de1d60532b22b6077aeaf85bbd66bbedb5e287d201\" returns successfully" Jan 23 17:58:54.381076 kubelet[2917]: E0123 17:58:54.380999 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-186&limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:58:54.386347 kubelet[2917]: E0123 17:58:54.386263 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.186:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:58:54.591811 kubelet[2917]: E0123 17:58:54.591745 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-186?timeout=10s\": dial tcp 172.31.16.186:6443: connect: connection refused" interval="1.6s" Jan 23 17:58:54.823334 kubelet[2917]: I0123 17:58:54.823267 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-186" Jan 23 17:58:55.304143 kubelet[2917]: E0123 17:58:55.303782 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:55.312230 kubelet[2917]: E0123 17:58:55.311370 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:55.318258 kubelet[2917]: E0123 17:58:55.318177 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:56.322136 kubelet[2917]: E0123 17:58:56.322066 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:56.322633 kubelet[2917]: E0123 17:58:56.322296 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:56.322977 kubelet[2917]: E0123 17:58:56.322930 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:57.321462 kubelet[2917]: E0123 
17:58:57.321390 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:57.321987 kubelet[2917]: E0123 17:58:57.321962 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-186\" not found" node="ip-172-31-16-186" Jan 23 17:58:59.122741 kubelet[2917]: I0123 17:58:59.122680 2917 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-186" Jan 23 17:58:59.145138 kubelet[2917]: I0123 17:58:59.144899 2917 apiserver.go:52] "Watching apiserver" Jan 23 17:58:59.170145 kubelet[2917]: E0123 17:58:59.169633 2917 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-186.188d6df6475890c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-186,UID:ip-172-31-16-186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-186,},FirstTimestamp:2026-01-23 17:58:53.152841922 +0000 UTC m=+1.919950514,LastTimestamp:2026-01-23 17:58:53.152841922 +0000 UTC m=+1.919950514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-186,}" Jan 23 17:58:59.182178 kubelet[2917]: I0123 17:58:59.182097 2917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:58:59.184254 kubelet[2917]: I0123 17:58:59.184187 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:58:59.225066 kubelet[2917]: E0123 17:58:59.224992 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 23 17:58:59.234746 kubelet[2917]: E0123 17:58:59.234435 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-186\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:58:59.234746 kubelet[2917]: I0123 17:58:59.234476 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:58:59.245151 kubelet[2917]: E0123 17:58:59.242881 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-186\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:58:59.245423 kubelet[2917]: I0123 17:58:59.245389 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:58:59.261849 kubelet[2917]: E0123 17:58:59.261804 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-186\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:00.370048 kubelet[2917]: I0123 17:59:00.369897 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:01.723962 systemd[1]: Reload requested from client PID 3194 ('systemctl') (unit session-5.scope)... Jan 23 17:59:01.723986 systemd[1]: Reloading... 
Jan 23 17:59:01.878920 update_engine[2001]: I20260123 17:59:01.878180 2001 update_attempter.cc:509] Updating boot flags... Jan 23 17:59:01.944164 zram_generator::config[3246]: No configuration found. Jan 23 17:59:02.023675 kubelet[2917]: I0123 17:59:02.018812 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:02.810456 systemd[1]: Reloading finished in 1085 ms. Jan 23 17:59:03.109528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:59:03.152939 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:59:03.156361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:59:03.156462 systemd[1]: kubelet.service: Consumed 2.739s CPU time, 128.1M memory peak. Jan 23 17:59:03.162759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:59:03.807569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:59:03.827556 (kubelet)[3568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:59:03.940154 kubelet[3568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:59:03.940154 kubelet[3568]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:59:03.940154 kubelet[3568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:59:03.940154 kubelet[3568]: I0123 17:59:03.939374 3568 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:59:03.959730 kubelet[3568]: I0123 17:59:03.958365 3568 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:59:03.959730 kubelet[3568]: I0123 17:59:03.958425 3568 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:59:03.959730 kubelet[3568]: I0123 17:59:03.958880 3568 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:59:03.967019 kubelet[3568]: I0123 17:59:03.966860 3568 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:59:03.972826 kubelet[3568]: I0123 17:59:03.972743 3568 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:59:03.989329 kubelet[3568]: I0123 17:59:03.989276 3568 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:59:04.003702 kubelet[3568]: I0123 17:59:04.003293 3568 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:59:04.006570 kubelet[3568]: I0123 17:59:04.006389 3568 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:59:04.008726 kubelet[3568]: I0123 17:59:04.006467 3568 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-186","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:59:04.008726 kubelet[3568]: I0123 17:59:04.008381 3568 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:59:04.008726 kubelet[3568]: I0123 17:59:04.008408 3568 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:59:04.008726 kubelet[3568]: I0123 17:59:04.008501 3568 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:59:04.010858 kubelet[3568]: I0123 17:59:04.008792 3568 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:59:04.010858 kubelet[3568]: I0123 17:59:04.008833 3568 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:59:04.010858 kubelet[3568]: I0123 17:59:04.008882 3568 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:59:04.010858 kubelet[3568]: I0123 17:59:04.008911 3568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:59:04.025959 kubelet[3568]: I0123 17:59:04.025917 3568 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:59:04.027175 kubelet[3568]: I0123 17:59:04.027098 3568 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:59:04.034154 kubelet[3568]: I0123 17:59:04.033491 3568 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:59:04.034451 kubelet[3568]: I0123 17:59:04.034418 3568 server.go:1289] "Started kubelet" Jan 23 17:59:04.043155 kubelet[3568]: I0123 17:59:04.039414 3568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:59:04.057208 kubelet[3568]: I0123 
17:59:04.056478 3568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:59:04.071360 kubelet[3568]: I0123 17:59:04.039957 3568 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:59:04.077870 kubelet[3568]: I0123 17:59:04.077803 3568 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:59:04.083352 kubelet[3568]: I0123 17:59:04.081634 3568 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:59:04.083352 kubelet[3568]: E0123 17:59:04.082065 3568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-186\" not found" Jan 23 17:59:04.085313 kubelet[3568]: I0123 17:59:04.040041 3568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:59:04.087320 kubelet[3568]: I0123 17:59:04.087279 3568 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:59:04.089366 kubelet[3568]: I0123 17:59:04.088571 3568 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:59:04.089869 kubelet[3568]: I0123 17:59:04.089834 3568 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:59:04.134342 kubelet[3568]: I0123 17:59:04.130539 3568 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:59:04.139788 kubelet[3568]: I0123 17:59:04.139738 3568 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:59:04.184825 kubelet[3568]: E0123 17:59:04.183830 3568 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:59:04.185724 kubelet[3568]: I0123 17:59:04.185431 3568 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:59:04.217689 kubelet[3568]: I0123 17:59:04.217554 3568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:59:04.237647 kubelet[3568]: I0123 17:59:04.237379 3568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:59:04.239398 kubelet[3568]: I0123 17:59:04.239362 3568 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:59:04.239398 kubelet[3568]: I0123 17:59:04.239457 3568 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:59:04.239398 kubelet[3568]: I0123 17:59:04.239474 3568 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:59:04.239942 kubelet[3568]: E0123 17:59:04.239895 3568 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:59:04.341395 kubelet[3568]: E0123 17:59:04.341199 3568 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 17:59:04.379924 kubelet[3568]: I0123 17:59:04.379878 3568 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:59:04.380242 kubelet[3568]: I0123 17:59:04.380031 3568 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:59:04.380509 kubelet[3568]: I0123 17:59:04.380340 3568 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:59:04.381056 kubelet[3568]: I0123 17:59:04.381002 3568 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:59:04.381506 kubelet[3568]: I0123 17:59:04.381222 3568 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:59:04.381506 kubelet[3568]: I0123 17:59:04.381264 3568 policy_none.go:49] "None policy: Start" Jan 23 17:59:04.381506 kubelet[3568]: I0123 17:59:04.381306 3568 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:59:04.381506 kubelet[3568]: I0123 17:59:04.381335 3568 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:59:04.382616 kubelet[3568]: I0123 17:59:04.382548 3568 state_mem.go:75] "Updated machine memory state" Jan 23 17:59:04.398677 kubelet[3568]: E0123 17:59:04.398590 3568 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:59:04.399952 kubelet[3568]: I0123 17:59:04.399915 3568 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:59:04.400311 kubelet[3568]: I0123 17:59:04.400226 3568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:59:04.402202 kubelet[3568]: I0123 17:59:04.402025 3568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:59:04.414737 kubelet[3568]: E0123 17:59:04.414101 3568 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:59:04.531064 kubelet[3568]: I0123 17:59:04.530973 3568 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-186" Jan 23 17:59:04.544699 kubelet[3568]: I0123 17:59:04.544539 3568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.547668 kubelet[3568]: I0123 17:59:04.545263 3568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:04.547668 kubelet[3568]: I0123 17:59:04.547569 3568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:59:04.556327 kubelet[3568]: I0123 17:59:04.556260 3568 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-186" Jan 23 17:59:04.556488 kubelet[3568]: I0123 17:59:04.556387 3568 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-186" Jan 23 17:59:04.565819 kubelet[3568]: E0123 17:59:04.564507 3568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-186\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.573612 kubelet[3568]: E0123 17:59:04.573441 3568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-186\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:04.610172 kubelet[3568]: I0123 17:59:04.609338 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.610627 kubelet[3568]: I0123 17:59:04.610550 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.610916 kubelet[3568]: I0123 17:59:04.610635 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.610916 kubelet[3568]: I0123 17:59:04.610677 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.610916 kubelet[3568]: I0123 17:59:04.610743 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83599c17b72526028036287b78a4052a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-186\" (UID: \"83599c17b72526028036287b78a4052a\") " 
pod="kube-system/kube-controller-manager-ip-172-31-16-186" Jan 23 17:59:04.610916 kubelet[3568]: I0123 17:59:04.610784 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:04.610916 kubelet[3568]: I0123 17:59:04.610843 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab9261b9f28f3bcaf69b59e7054759cb-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-186\" (UID: \"ab9261b9f28f3bcaf69b59e7054759cb\") " pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:59:04.612271 kubelet[3568]: I0123 17:59:04.610900 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-ca-certs\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:04.612271 kubelet[3568]: I0123 17:59:04.610964 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3eb0277bd56ce0a130e88da83e13bad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-186\" (UID: \"a3eb0277bd56ce0a130e88da83e13bad\") " pod="kube-system/kube-apiserver-ip-172-31-16-186" Jan 23 17:59:05.016972 kubelet[3568]: I0123 17:59:05.016796 3568 apiserver.go:52] "Watching apiserver" Jan 23 17:59:05.090023 kubelet[3568]: I0123 17:59:05.089943 3568 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:59:05.325491 kubelet[3568]: I0123 17:59:05.325045 3568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:59:05.343208 kubelet[3568]: E0123 17:59:05.342999 3568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-186\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-186" Jan 23 17:59:05.344616 kubelet[3568]: I0123 17:59:05.344527 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-186" podStartSLOduration=3.344503391 podStartE2EDuration="3.344503391s" podCreationTimestamp="2026-01-23 17:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:05.310397946 +0000 UTC m=+1.465906919" watchObservedRunningTime="2026-01-23 17:59:05.344503391 +0000 UTC m=+1.500012363" Jan 23 17:59:05.345108 kubelet[3568]: I0123 17:59:05.345034 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-186" podStartSLOduration=5.345010632 podStartE2EDuration="5.345010632s" podCreationTimestamp="2026-01-23 17:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:05.342746973 +0000 UTC m=+1.498255945" watchObservedRunningTime="2026-01-23 17:59:05.345010632 +0000 UTC m=+1.500519592" Jan 23 17:59:05.395145 kubelet[3568]: I0123 17:59:05.394316 3568 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-186" podStartSLOduration=1.394292525 podStartE2EDuration="1.394292525s" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:05.369800177 +0000 UTC m=+1.525309161" watchObservedRunningTime="2026-01-23 17:59:05.394292525 +0000 UTC m=+1.549801497" Jan 23 17:59:06.002456 sudo[2335]: pam_unix(sudo:session): session closed for user root Jan 23 17:59:06.082936 sshd[2334]: Connection closed by 68.220.241.50 port 51032 Jan 23 17:59:06.082520 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:06.094741 systemd-logind[1999]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:59:06.096451 systemd[1]: sshd@4-172.31.16.186:22-68.220.241.50:51032.service: Deactivated successfully. Jan 23 17:59:06.104033 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:59:06.107351 systemd[1]: session-5.scope: Consumed 12.701s CPU time, 231.2M memory peak. Jan 23 17:59:06.113291 systemd-logind[1999]: Removed session 5. Jan 23 17:59:07.136386 kubelet[3568]: I0123 17:59:07.136253 3568 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:59:07.137391 containerd[2026]: time="2026-01-23T17:59:07.137188466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:59:07.138548 kubelet[3568]: I0123 17:59:07.137714 3568 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:59:08.069819 systemd[1]: Created slice kubepods-besteffort-pod01504d03_647f_45a7_a62a_0c3ee1f51d71.slice - libcontainer container kubepods-besteffort-pod01504d03_647f_45a7_a62a_0c3ee1f51d71.slice. Jan 23 17:59:08.102975 kubelet[3568]: E0123 17:59:08.102895 3568 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ip-172-31-16-186\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ip-172-31-16-186' and this object" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Jan 23 17:59:08.103352 kubelet[3568]: E0123 17:59:08.103280 3568 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-16-186\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ip-172-31-16-186' and this object" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 23 17:59:08.110979 systemd[1]: Created slice kubepods-burstable-pod143650d6_ee96_4158_aaf1_6a6b3efb1091.slice - libcontainer container kubepods-burstable-pod143650d6_ee96_4158_aaf1_6a6b3efb1091.slice. 
Jan 23 17:59:08.135262 kubelet[3568]: I0123 17:59:08.135188 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01504d03-647f-45a7-a62a-0c3ee1f51d71-kube-proxy\") pod \"kube-proxy-8f9xf\" (UID: \"01504d03-647f-45a7-a62a-0c3ee1f51d71\") " pod="kube-system/kube-proxy-8f9xf" Jan 23 17:59:08.135516 kubelet[3568]: I0123 17:59:08.135290 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/143650d6-ee96-4158-aaf1-6a6b3efb1091-flannel-cfg\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135516 kubelet[3568]: I0123 17:59:08.135343 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/143650d6-ee96-4158-aaf1-6a6b3efb1091-xtables-lock\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135516 kubelet[3568]: I0123 17:59:08.135402 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01504d03-647f-45a7-a62a-0c3ee1f51d71-lib-modules\") pod \"kube-proxy-8f9xf\" (UID: \"01504d03-647f-45a7-a62a-0c3ee1f51d71\") " pod="kube-system/kube-proxy-8f9xf" Jan 23 17:59:08.135516 kubelet[3568]: I0123 17:59:08.135455 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01504d03-647f-45a7-a62a-0c3ee1f51d71-xtables-lock\") pod \"kube-proxy-8f9xf\" (UID: \"01504d03-647f-45a7-a62a-0c3ee1f51d71\") " pod="kube-system/kube-proxy-8f9xf" Jan 23 17:59:08.135516 kubelet[3568]: I0123 17:59:08.135497 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/143650d6-ee96-4158-aaf1-6a6b3efb1091-run\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135787 kubelet[3568]: I0123 17:59:08.135532 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/143650d6-ee96-4158-aaf1-6a6b3efb1091-cni-plugin\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135787 kubelet[3568]: I0123 17:59:08.135567 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/143650d6-ee96-4158-aaf1-6a6b3efb1091-cni\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135787 kubelet[3568]: I0123 17:59:08.135619 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kcv\" (UniqueName: \"kubernetes.io/projected/143650d6-ee96-4158-aaf1-6a6b3efb1091-kube-api-access-25kcv\") pod \"kube-flannel-ds-pbn28\" (UID: \"143650d6-ee96-4158-aaf1-6a6b3efb1091\") " pod="kube-flannel/kube-flannel-ds-pbn28" Jan 23 17:59:08.135787 kubelet[3568]: I0123 17:59:08.135657 3568 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmhj\" (UniqueName: \"kubernetes.io/projected/01504d03-647f-45a7-a62a-0c3ee1f51d71-kube-api-access-bhmhj\") pod \"kube-proxy-8f9xf\" (UID: \"01504d03-647f-45a7-a62a-0c3ee1f51d71\") " pod="kube-system/kube-proxy-8f9xf" Jan 23 17:59:08.385989 containerd[2026]: time="2026-01-23T17:59:08.385786578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8f9xf,Uid:01504d03-647f-45a7-a62a-0c3ee1f51d71,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:08.426581 containerd[2026]: time="2026-01-23T17:59:08.426504195Z" level=info msg="connecting to shim 85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28" address="unix:///run/containerd/s/173cfbb6d72fc6405c68aacfc9c904186fa6fffec39e4375ac252e45c5e77f2b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:08.486465 systemd[1]: Started cri-containerd-85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28.scope - libcontainer container 85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28. Jan 23 17:59:08.545457 containerd[2026]: time="2026-01-23T17:59:08.545315211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8f9xf,Uid:01504d03-647f-45a7-a62a-0c3ee1f51d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28\"" Jan 23 17:59:08.556311 containerd[2026]: time="2026-01-23T17:59:08.555562849Z" level=info msg="CreateContainer within sandbox \"85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:59:08.579994 containerd[2026]: time="2026-01-23T17:59:08.577504258Z" level=info msg="Container 634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:08.581988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235587473.mount: Deactivated successfully. Jan 23 17:59:08.598853 containerd[2026]: time="2026-01-23T17:59:08.598778950Z" level=info msg="CreateContainer within sandbox \"85d8da33e168ee7eb198c008e40118c4ca55ab296e31d3deda8cc588c8a7ff28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d\"" Jan 23 17:59:08.601705 containerd[2026]: time="2026-01-23T17:59:08.601576432Z" level=info msg="StartContainer for \"634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d\"" Jan 23 17:59:08.606764 containerd[2026]: time="2026-01-23T17:59:08.606681323Z" level=info msg="connecting to shim 634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d" address="unix:///run/containerd/s/173cfbb6d72fc6405c68aacfc9c904186fa6fffec39e4375ac252e45c5e77f2b" protocol=ttrpc version=3 Jan 23 17:59:08.641825 systemd[1]: Started cri-containerd-634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d.scope - libcontainer container 634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d. 
Jan 23 17:59:08.770546 containerd[2026]: time="2026-01-23T17:59:08.770432239Z" level=info msg="StartContainer for \"634e675884c2f7dc92e01a5616364d18fb048e6f038fcc3aa2376c6508ff013d\" returns successfully" Jan 23 17:59:09.321571 containerd[2026]: time="2026-01-23T17:59:09.320953434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pbn28,Uid:143650d6-ee96-4158-aaf1-6a6b3efb1091,Namespace:kube-flannel,Attempt:0,}" Jan 23 17:59:09.382608 containerd[2026]: time="2026-01-23T17:59:09.382550734Z" level=info msg="connecting to shim c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46" address="unix:///run/containerd/s/549b1ae5d63721acaab7fff36c9d2389780b8d38c72850ca6b8e2f170153d111" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:09.387955 kubelet[3568]: I0123 17:59:09.387862 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8f9xf" podStartSLOduration=1.3878409010000001 podStartE2EDuration="1.387840901s" podCreationTimestamp="2026-01-23 17:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:09.386991561 +0000 UTC m=+5.542500521" watchObservedRunningTime="2026-01-23 17:59:09.387840901 +0000 UTC m=+5.543349861" Jan 23 17:59:09.457632 systemd[1]: Started cri-containerd-c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46.scope - libcontainer container c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46. Jan 23 17:59:09.558535 containerd[2026]: time="2026-01-23T17:59:09.558378017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pbn28,Uid:143650d6-ee96-4158-aaf1-6a6b3efb1091,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\"" Jan 23 17:59:09.565242 containerd[2026]: time="2026-01-23T17:59:09.564673099Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 17:59:10.945870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288038938.mount: Deactivated successfully. 
Jan 23 17:59:11.009098 containerd[2026]: time="2026-01-23T17:59:11.008998967Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:11.010869 containerd[2026]: time="2026-01-23T17:59:11.010785243Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564" Jan 23 17:59:11.013031 containerd[2026]: time="2026-01-23T17:59:11.012951054Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:11.018766 containerd[2026]: time="2026-01-23T17:59:11.018704857Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:11.021370 containerd[2026]: time="2026-01-23T17:59:11.021230307Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.456477993s" Jan 23 17:59:11.021370 containerd[2026]: time="2026-01-23T17:59:11.021301635Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jan 23 17:59:11.028529 containerd[2026]: time="2026-01-23T17:59:11.028455554Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 17:59:11.044032 containerd[2026]: time="2026-01-23T17:59:11.042369308Z" level=info msg="Container af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:11.056987 containerd[2026]: time="2026-01-23T17:59:11.056917194Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838\"" Jan 23 17:59:11.058455 containerd[2026]: time="2026-01-23T17:59:11.058375023Z" level=info msg="StartContainer for \"af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838\"" Jan 23 17:59:11.060277 containerd[2026]: time="2026-01-23T17:59:11.060198230Z" level=info msg="connecting to shim af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838" address="unix:///run/containerd/s/549b1ae5d63721acaab7fff36c9d2389780b8d38c72850ca6b8e2f170153d111" protocol=ttrpc version=3 Jan 23 17:59:11.109547 systemd[1]: Started cri-containerd-af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838.scope - libcontainer container af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838. 
Jan 23 17:59:11.177774 containerd[2026]: time="2026-01-23T17:59:11.177673063Z" level=info msg="StartContainer for \"af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838\" returns successfully" Jan 23 17:59:11.178346 systemd[1]: cri-containerd-af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838.scope: Deactivated successfully. Jan 23 17:59:11.189153 containerd[2026]: time="2026-01-23T17:59:11.189001852Z" level=info msg="received container exit event container_id:\"af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838\" id:\"af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838\" pid:3912 exited_at:{seconds:1769191151 nanos:187417132}" Jan 23 17:59:11.359509 containerd[2026]: time="2026-01-23T17:59:11.359395268Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 17:59:11.716775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af738065a167525663e03c40ae0e781aac7c3169ac49571a61f55d9f7e334838-rootfs.mount: Deactivated successfully. Jan 23 17:59:13.872306 containerd[2026]: time="2026-01-23T17:59:13.872223447Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:13.877065 containerd[2026]: time="2026-01-23T17:59:13.876977667Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854" Jan 23 17:59:13.880270 containerd[2026]: time="2026-01-23T17:59:13.880174841Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:13.893098 containerd[2026]: time="2026-01-23T17:59:13.892995989Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:13.896994 containerd[2026]: time="2026-01-23T17:59:13.896793126Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 2.53733119s" Jan 23 17:59:13.896994 containerd[2026]: time="2026-01-23T17:59:13.896857743Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jan 23 17:59:13.906513 containerd[2026]: time="2026-01-23T17:59:13.905666533Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:59:13.935105 containerd[2026]: time="2026-01-23T17:59:13.933990704Z" level=info msg="Container ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:13.935710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708842769.mount: Deactivated successfully. 
Jan 23 17:59:13.951051 containerd[2026]: time="2026-01-23T17:59:13.950900051Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c\"" Jan 23 17:59:13.953553 containerd[2026]: time="2026-01-23T17:59:13.953446427Z" level=info msg="StartContainer for \"ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c\"" Jan 23 17:59:13.956530 containerd[2026]: time="2026-01-23T17:59:13.956378112Z" level=info msg="connecting to shim ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c" address="unix:///run/containerd/s/549b1ae5d63721acaab7fff36c9d2389780b8d38c72850ca6b8e2f170153d111" protocol=ttrpc version=3 Jan 23 17:59:13.999470 systemd[1]: Started cri-containerd-ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c.scope - libcontainer container ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c. Jan 23 17:59:14.059712 systemd[1]: cri-containerd-ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c.scope: Deactivated successfully. Jan 23 17:59:14.063368 containerd[2026]: time="2026-01-23T17:59:14.062501860Z" level=info msg="received container exit event container_id:\"ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c\" id:\"ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c\" pid:3987 exited_at:{seconds:1769191154 nanos:61610571}" Jan 23 17:59:14.067032 containerd[2026]: time="2026-01-23T17:59:14.066945713Z" level=info msg="StartContainer for \"ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c\" returns successfully" Jan 23 17:59:14.108662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad6caee82dcb956677970581921e5b25586238ea8de0c675476845ca1796e24c-rootfs.mount: Deactivated successfully. Jan 23 17:59:14.147918 kubelet[3568]: I0123 17:59:14.147672 3568 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:59:14.249027 systemd[1]: Created slice kubepods-burstable-pod757f7502_e4a9_4e9c_b9a3_fb712a81926c.slice - libcontainer container kubepods-burstable-pod757f7502_e4a9_4e9c_b9a3_fb712a81926c.slice. 
Jan 23 17:59:14.280833 kubelet[3568]: I0123 17:59:14.280767 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq7vk\" (UniqueName: \"kubernetes.io/projected/757f7502-e4a9-4e9c-b9a3-fb712a81926c-kube-api-access-zq7vk\") pod \"coredns-674b8bbfcf-ft9mm\" (UID: \"757f7502-e4a9-4e9c-b9a3-fb712a81926c\") " pod="kube-system/coredns-674b8bbfcf-ft9mm" Jan 23 17:59:14.281292 kubelet[3568]: I0123 17:59:14.281230 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757f7502-e4a9-4e9c-b9a3-fb712a81926c-config-volume\") pod \"coredns-674b8bbfcf-ft9mm\" (UID: \"757f7502-e4a9-4e9c-b9a3-fb712a81926c\") " pod="kube-system/coredns-674b8bbfcf-ft9mm" Jan 23 17:59:14.281642 kubelet[3568]: I0123 17:59:14.281608 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a50ecf-d05e-41aa-a401-aa92de907fd9-config-volume\") pod \"coredns-674b8bbfcf-d5hwj\" (UID: \"b7a50ecf-d05e-41aa-a401-aa92de907fd9\") " pod="kube-system/coredns-674b8bbfcf-d5hwj" Jan 23 17:59:14.281872 kubelet[3568]: I0123 17:59:14.281824 3568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmx58\" (UniqueName: \"kubernetes.io/projected/b7a50ecf-d05e-41aa-a401-aa92de907fd9-kube-api-access-tmx58\") pod \"coredns-674b8bbfcf-d5hwj\" (UID: \"b7a50ecf-d05e-41aa-a401-aa92de907fd9\") " pod="kube-system/coredns-674b8bbfcf-d5hwj" Jan 23 17:59:14.291180 systemd[1]: Created slice kubepods-burstable-podb7a50ecf_d05e_41aa_a401_aa92de907fd9.slice - libcontainer container kubepods-burstable-podb7a50ecf_d05e_41aa_a401_aa92de907fd9.slice. Jan 23 17:59:14.381544 containerd[2026]: time="2026-01-23T17:59:14.381013551Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 17:59:14.409734 containerd[2026]: time="2026-01-23T17:59:14.409586714Z" level=info msg="Container db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:14.435165 containerd[2026]: time="2026-01-23T17:59:14.433697608Z" level=info msg="CreateContainer within sandbox \"c7b4bed746c069c758165c8d0f5f8e3051b886c0b9a8f9858f6662141995ca46\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926\"" Jan 23 17:59:14.440609 containerd[2026]: time="2026-01-23T17:59:14.440080706Z" level=info msg="StartContainer for \"db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926\"" Jan 23 17:59:14.445172 containerd[2026]: time="2026-01-23T17:59:14.443057834Z" level=info msg="connecting to shim db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926" address="unix:///run/containerd/s/549b1ae5d63721acaab7fff36c9d2389780b8d38c72850ca6b8e2f170153d111" protocol=ttrpc version=3 Jan 23 17:59:14.538930 systemd[1]: Started cri-containerd-db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926.scope - libcontainer container db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926. 
Jan 23 17:59:14.582020 containerd[2026]: time="2026-01-23T17:59:14.581930588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft9mm,Uid:757f7502-e4a9-4e9c-b9a3-fb712a81926c,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:14.602804 containerd[2026]: time="2026-01-23T17:59:14.602283484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5hwj,Uid:b7a50ecf-d05e-41aa-a401-aa92de907fd9,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:14.688545 containerd[2026]: time="2026-01-23T17:59:14.688374044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft9mm,Uid:757f7502-e4a9-4e9c-b9a3-fb712a81926c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae50fee6a9801b221a6ffda23298b041953fcfb7fc555307975d9072bf1480a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:59:14.693026 kubelet[3568]: E0123 17:59:14.692688 3568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae50fee6a9801b221a6ffda23298b041953fcfb7fc555307975d9072bf1480a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:59:14.693026 kubelet[3568]: E0123 17:59:14.692802 3568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae50fee6a9801b221a6ffda23298b041953fcfb7fc555307975d9072bf1480a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-ft9mm" Jan 23 17:59:14.693026 kubelet[3568]: E0123 17:59:14.692840 3568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae50fee6a9801b221a6ffda23298b041953fcfb7fc555307975d9072bf1480a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-ft9mm" Jan 23 17:59:14.694858 kubelet[3568]: E0123 17:59:14.694749 3568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft9mm_kube-system(757f7502-e4a9-4e9c-b9a3-fb712a81926c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft9mm_kube-system(757f7502-e4a9-4e9c-b9a3-fb712a81926c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ae50fee6a9801b221a6ffda23298b041953fcfb7fc555307975d9072bf1480a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-ft9mm" podUID="757f7502-e4a9-4e9c-b9a3-fb712a81926c" Jan 23 17:59:14.696976 containerd[2026]: time="2026-01-23T17:59:14.695699769Z" level=info msg="StartContainer for \"db3ede2a9f13185f15f52ec55758add648498afa6b70afe8347118883c43e926\" returns successfully" Jan 23 17:59:14.703249 containerd[2026]: time="2026-01-23T17:59:14.702779047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5hwj,Uid:b7a50ecf-d05e-41aa-a401-aa92de907fd9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"317e5d8e3aa5a4d70f83236cc5ec9f3319384f840233afd4d2d33778a085db4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:59:14.704814 kubelet[3568]: E0123 17:59:14.704641 3568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317e5d8e3aa5a4d70f83236cc5ec9f3319384f840233afd4d2d33778a085db4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 17:59:14.705508 kubelet[3568]: E0123 17:59:14.705417 3568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317e5d8e3aa5a4d70f83236cc5ec9f3319384f840233afd4d2d33778a085db4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-d5hwj" Jan 23 17:59:14.705891 kubelet[3568]: E0123 17:59:14.705697 3568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317e5d8e3aa5a4d70f83236cc5ec9f3319384f840233afd4d2d33778a085db4c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-d5hwj" Jan 23 17:59:14.706886 kubelet[3568]: E0123 17:59:14.706422 3568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d5hwj_kube-system(b7a50ecf-d05e-41aa-a401-aa92de907fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d5hwj_kube-system(b7a50ecf-d05e-41aa-a401-aa92de907fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"317e5d8e3aa5a4d70f83236cc5ec9f3319384f840233afd4d2d33778a085db4c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-d5hwj" podUID="b7a50ecf-d05e-41aa-a401-aa92de907fd9" Jan 23 17:59:15.814603 (udev-worker)[4107]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:59:15.841653 systemd-networkd[1840]: flannel.1: Link UP Jan 23 17:59:15.841680 systemd-networkd[1840]: flannel.1: Gained carrier Jan 23 17:59:16.887175 systemd-networkd[1840]: flannel.1: Gained IPv6LL Jan 23 17:59:19.431323 ntpd[2177]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 23 17:59:19.431426 ntpd[2177]: Listen normally on 7 flannel.1 [fe80::6064:1aff:feb7:133e%4]:123 Jan 23 17:59:19.433016 ntpd[2177]: 23 Jan 17:59:19 ntpd[2177]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 23 17:59:19.433016 ntpd[2177]: 23 Jan 17:59:19 ntpd[2177]: Listen normally on 7 flannel.1 [fe80::6064:1aff:feb7:133e%4]:123 Jan 23 17:59:25.241079 containerd[2026]: time="2026-01-23T17:59:25.241009495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5hwj,Uid:b7a50ecf-d05e-41aa-a401-aa92de907fd9,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:25.276281 systemd-networkd[1840]: cni0: Link UP Jan 23 17:59:25.276300 systemd-networkd[1840]: cni0: Gained carrier Jan 23 17:59:25.287000 (udev-worker)[4200]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:59:25.288259 systemd-networkd[1840]: cni0: Lost carrier Jan 23 17:59:25.292276 systemd-networkd[1840]: vethd0dfe26e: Link UP Jan 23 17:59:25.293251 (udev-worker)[4201]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:59:25.299597 kernel: cni0: port 1(vethd0dfe26e) entered blocking state Jan 23 17:59:25.299761 kernel: cni0: port 1(vethd0dfe26e) entered disabled state Jan 23 17:59:25.303376 kernel: vethd0dfe26e: entered allmulticast mode Jan 23 17:59:25.307321 kernel: vethd0dfe26e: entered promiscuous mode Jan 23 17:59:25.329700 kernel: cni0: port 1(vethd0dfe26e) entered blocking state Jan 23 17:59:25.329860 kernel: cni0: port 1(vethd0dfe26e) entered forwarding state Jan 23 17:59:25.329847 systemd-networkd[1840]: vethd0dfe26e: Gained carrier Jan 23 17:59:25.331060 systemd-networkd[1840]: cni0: Gained carrier Jan 23 17:59:25.336873 containerd[2026]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"} Jan 23 17:59:25.336873 containerd[2026]: delegateAdd: netconf sent to delegate plugin: Jan 23 17:59:25.383669 containerd[2026]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T17:59:25.383585872Z" level=info msg="connecting to shim 10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01" address="unix:///run/containerd/s/a1ad5f5569e0d1e03cf5ca343ceaa2a0b98dcfd92932399068f3a7af2689c1c6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:25.436491 systemd[1]: Started cri-containerd-10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01.scope - libcontainer container 10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01. 
Jan 23 17:59:25.516426 containerd[2026]: time="2026-01-23T17:59:25.515688167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5hwj,Uid:b7a50ecf-d05e-41aa-a401-aa92de907fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01\"" Jan 23 17:59:25.525616 containerd[2026]: time="2026-01-23T17:59:25.525514658Z" level=info msg="CreateContainer within sandbox \"10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:59:25.544234 containerd[2026]: time="2026-01-23T17:59:25.543479356Z" level=info msg="Container 83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:25.554329 containerd[2026]: time="2026-01-23T17:59:25.554248066Z" level=info msg="CreateContainer within sandbox \"10d33eed6c0297b7ee8197d4b6f79b0c854745f4542ddfe4abc9464cbafe7c01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4\"" Jan 23 17:59:25.555845 containerd[2026]: time="2026-01-23T17:59:25.555257122Z" level=info msg="StartContainer for \"83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4\"" Jan 23 17:59:25.558512 containerd[2026]: time="2026-01-23T17:59:25.558439276Z" level=info msg="connecting to shim 83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4" address="unix:///run/containerd/s/a1ad5f5569e0d1e03cf5ca343ceaa2a0b98dcfd92932399068f3a7af2689c1c6" protocol=ttrpc version=3 Jan 23 17:59:25.594423 systemd[1]: Started cri-containerd-83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4.scope - libcontainer container 83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4. 
Jan 23 17:59:25.661522 containerd[2026]: time="2026-01-23T17:59:25.661364554Z" level=info msg="StartContainer for \"83886e5216f6ba6fe016690aac35befc7c18cc1d635db24b10b0a6f767dd0fd4\" returns successfully" Jan 23 17:59:26.437558 kubelet[3568]: I0123 17:59:26.437426 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pbn28" podStartSLOduration=14.102121273 podStartE2EDuration="18.43737638s" podCreationTimestamp="2026-01-23 17:59:08 +0000 UTC" firstStartedPulling="2026-01-23 17:59:09.563500112 +0000 UTC m=+5.719009060" lastFinishedPulling="2026-01-23 17:59:13.898755207 +0000 UTC m=+10.054264167" observedRunningTime="2026-01-23 17:59:15.40145079 +0000 UTC m=+11.556959786" watchObservedRunningTime="2026-01-23 17:59:26.43737638 +0000 UTC m=+22.592885328" Jan 23 17:59:26.469494 kubelet[3568]: I0123 17:59:26.468083 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d5hwj" podStartSLOduration=18.468061002 podStartE2EDuration="18.468061002s" podCreationTimestamp="2026-01-23 17:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:26.439369916 +0000 UTC m=+22.594878888" watchObservedRunningTime="2026-01-23 17:59:26.468061002 +0000 UTC m=+22.623569974" Jan 23 17:59:26.487358 systemd-networkd[1840]: vethd0dfe26e: Gained IPv6LL Jan 23 17:59:27.063293 systemd-networkd[1840]: cni0: Gained IPv6LL Jan 23 17:59:29.431420 ntpd[2177]: Listen normally on 8 cni0 192.168.0.1:123 Jan 23 17:59:29.431592 ntpd[2177]: Listen normally on 9 cni0 [fe80::c32:68ff:fe61:783e%5]:123 Jan 23 17:59:29.432328 ntpd[2177]: 23 Jan 17:59:29 ntpd[2177]: Listen normally on 8 cni0 192.168.0.1:123 Jan 23 17:59:29.432328 ntpd[2177]: 23 Jan 17:59:29 ntpd[2177]: Listen normally on 9 cni0 [fe80::c32:68ff:fe61:783e%5]:123 Jan 23 17:59:29.432328 ntpd[2177]: 23 Jan 17:59:29 ntpd[2177]: Listen normally on 10 vethd0dfe26e [fe80::a024:aff:fe87:4a0e%6]:123 Jan 23 17:59:29.431649 ntpd[2177]: Listen normally on 10 vethd0dfe26e [fe80::a024:aff:fe87:4a0e%6]:123 Jan 23 17:59:30.241347 containerd[2026]: time="2026-01-23T17:59:30.240877686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft9mm,Uid:757f7502-e4a9-4e9c-b9a3-fb712a81926c,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:30.277570 systemd-networkd[1840]: vethf7605745: Link UP Jan 23 17:59:30.283750 kernel: cni0: port 2(vethf7605745) entered blocking state Jan 23 17:59:30.283853 kernel: cni0: port 2(vethf7605745) entered disabled state Jan 23 17:59:30.283895 kernel: vethf7605745: entered allmulticast mode Jan 23 17:59:30.286231 kernel: vethf7605745: entered promiscuous mode Jan 23 17:59:30.286678 (udev-worker)[4343]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:59:30.296813 kernel: cni0: port 2(vethf7605745) entered blocking state Jan 23 17:59:30.296914 kernel: cni0: port 2(vethf7605745) entered forwarding state Jan 23 17:59:30.297165 systemd-networkd[1840]: vethf7605745: Gained carrier Jan 23 17:59:30.309396 containerd[2026]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"} Jan 23 17:59:30.309396 containerd[2026]: delegateAdd: netconf sent to delegate plugin: Jan 23 17:59:30.354934 containerd[2026]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T17:59:30.354788526Z" level=info msg="connecting to shim 87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6" address="unix:///run/containerd/s/b1e29c7594fb3729c070501944ccd8f61013faba3954ffc8171939dac28a3830" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:30.410424 systemd[1]: Started cri-containerd-87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6.scope - libcontainer container 87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6. Jan 23 17:59:30.488154 containerd[2026]: time="2026-01-23T17:59:30.488045270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft9mm,Uid:757f7502-e4a9-4e9c-b9a3-fb712a81926c,Namespace:kube-system,Attempt:0,} returns sandbox id \"87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6\"" Jan 23 17:59:30.502971 containerd[2026]: time="2026-01-23T17:59:30.502794329Z" level=info msg="CreateContainer within sandbox \"87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:59:30.517147 containerd[2026]: time="2026-01-23T17:59:30.516800961Z" level=info msg="Container e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:30.521926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365432913.mount: Deactivated successfully. 
Jan 23 17:59:30.537078 containerd[2026]: time="2026-01-23T17:59:30.537008104Z" level=info msg="CreateContainer within sandbox \"87af042bc407ae1153ccb9acf13438f56f8329498d437dcdef85f38ae6d849c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636\"" Jan 23 17:59:30.538101 containerd[2026]: time="2026-01-23T17:59:30.538043849Z" level=info msg="StartContainer for \"e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636\"" Jan 23 17:59:30.540683 containerd[2026]: time="2026-01-23T17:59:30.540574642Z" level=info msg="connecting to shim e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636" address="unix:///run/containerd/s/b1e29c7594fb3729c070501944ccd8f61013faba3954ffc8171939dac28a3830" protocol=ttrpc version=3 Jan 23 17:59:30.579441 systemd[1]: Started cri-containerd-e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636.scope - libcontainer container e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636. Jan 23 17:59:30.641197 containerd[2026]: time="2026-01-23T17:59:30.641094433Z" level=info msg="StartContainer for \"e9977c208c929b6354705683c2282b59a8ea828321a10c23a894739585ba3636\" returns successfully" Jan 23 17:59:31.484976 kubelet[3568]: I0123 17:59:31.484832 3568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ft9mm" podStartSLOduration=23.484806088 podStartE2EDuration="23.484806088s" podCreationTimestamp="2026-01-23 17:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:31.455936917 +0000 UTC m=+27.611445889" watchObservedRunningTime="2026-01-23 17:59:31.484806088 +0000 UTC m=+27.640315060" Jan 23 17:59:32.118362 systemd-networkd[1840]: vethf7605745: Gained IPv6LL Jan 23 17:59:34.431746 ntpd[2177]: Listen normally on 11 vethf7605745 [fe80::948a:b4ff:fead:9919%7]:123 Jan 23 17:59:34.432547 ntpd[2177]: 23 Jan 17:59:34 ntpd[2177]: Listen normally on 11 vethf7605745 [fe80::948a:b4ff:fead:9919%7]:123 Jan 23 17:59:46.355664 systemd[1]: Started sshd@5-172.31.16.186:22-68.220.241.50:49382.service - OpenSSH per-connection server daemon (68.220.241.50:49382). Jan 23 17:59:46.891855 sshd[4534]: Accepted publickey for core from 68.220.241.50 port 49382 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:46.894618 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:46.903175 systemd-logind[1999]: New session 6 of user core. Jan 23 17:59:46.910433 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:59:47.406866 sshd[4537]: Connection closed by 68.220.241.50 port 49382 Jan 23 17:59:47.405809 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:47.414779 systemd[1]: sshd@5-172.31.16.186:22-68.220.241.50:49382.service: Deactivated successfully. Jan 23 17:59:47.418712 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:59:47.422295 systemd-logind[1999]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:59:47.426510 systemd-logind[1999]: Removed session 6. Jan 23 17:59:52.513501 systemd[1]: Started sshd@6-172.31.16.186:22-68.220.241.50:51686.service - OpenSSH per-connection server daemon (68.220.241.50:51686). 
Jan 23 17:59:53.077468 sshd[4574]: Accepted publickey for core from 68.220.241.50 port 51686 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:53.080040 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:53.090652 systemd-logind[1999]: New session 7 of user core. Jan 23 17:59:53.098499 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:59:53.578354 sshd[4577]: Connection closed by 68.220.241.50 port 51686 Jan 23 17:59:53.578851 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:53.588635 systemd[1]: sshd@6-172.31.16.186:22-68.220.241.50:51686.service: Deactivated successfully. Jan 23 17:59:53.593980 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:59:53.597643 systemd-logind[1999]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:59:53.601977 systemd-logind[1999]: Removed session 7. Jan 23 17:59:58.673264 systemd[1]: Started sshd@7-172.31.16.186:22-68.220.241.50:51694.service - OpenSSH per-connection server daemon (68.220.241.50:51694). Jan 23 17:59:59.208153 sshd[4613]: Accepted publickey for core from 68.220.241.50 port 51694 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:59.210841 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:59.222702 systemd-logind[1999]: New session 8 of user core. Jan 23 17:59:59.231515 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:59:59.705818 sshd[4616]: Connection closed by 68.220.241.50 port 51694 Jan 23 17:59:59.706932 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:59.715526 systemd[1]: sshd@7-172.31.16.186:22-68.220.241.50:51694.service: Deactivated successfully. Jan 23 17:59:59.720288 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:59:59.723878 systemd-logind[1999]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:59:59.728959 systemd-logind[1999]: Removed session 8. Jan 23 17:59:59.799633 systemd[1]: Started sshd@8-172.31.16.186:22-68.220.241.50:51704.service - OpenSSH per-connection server daemon (68.220.241.50:51704). Jan 23 18:00:00.321738 sshd[4629]: Accepted publickey for core from 68.220.241.50 port 51704 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:00.324322 sshd-session[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:00.333640 systemd-logind[1999]: New session 9 of user core. Jan 23 18:00:00.346487 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:00:00.931163 sshd[4632]: Connection closed by 68.220.241.50 port 51704 Jan 23 18:00:00.929804 sshd-session[4629]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:00.937523 systemd[1]: sshd@8-172.31.16.186:22-68.220.241.50:51704.service: Deactivated successfully. Jan 23 18:00:00.942887 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:00:00.948039 systemd-logind[1999]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:00:00.951320 systemd-logind[1999]: Removed session 9. Jan 23 18:00:01.034616 systemd[1]: Started sshd@9-172.31.16.186:22-68.220.241.50:51710.service - OpenSSH per-connection server daemon (68.220.241.50:51710). 
Jan 23 18:00:01.597380 sshd[4641]: Accepted publickey for core from 68.220.241.50 port 51710 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:01.600893 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:01.611264 systemd-logind[1999]: New session 10 of user core. Jan 23 18:00:01.622495 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:00:02.122681 sshd[4664]: Connection closed by 68.220.241.50 port 51710 Jan 23 18:00:02.123279 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:02.132264 systemd[1]: sshd@9-172.31.16.186:22-68.220.241.50:51710.service: Deactivated successfully. Jan 23 18:00:02.137232 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:00:02.140230 systemd-logind[1999]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:00:02.144068 systemd-logind[1999]: Removed session 10. Jan 23 18:00:07.212096 systemd[1]: Started sshd@10-172.31.16.186:22-68.220.241.50:53794.service - OpenSSH per-connection server daemon (68.220.241.50:53794). Jan 23 18:00:07.753435 sshd[4698]: Accepted publickey for core from 68.220.241.50 port 53794 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:07.755916 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:07.764186 systemd-logind[1999]: New session 11 of user core. Jan 23 18:00:07.779402 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:00:08.229693 sshd[4701]: Connection closed by 68.220.241.50 port 53794 Jan 23 18:00:08.229486 sshd-session[4698]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:08.238297 systemd[1]: sshd@10-172.31.16.186:22-68.220.241.50:53794.service: Deactivated successfully. Jan 23 18:00:08.247763 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:00:08.252993 systemd-logind[1999]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:00:08.256567 systemd-logind[1999]: Removed session 11. Jan 23 18:00:13.324399 systemd[1]: Started sshd@11-172.31.16.186:22-68.220.241.50:59064.service - OpenSSH per-connection server daemon (68.220.241.50:59064). Jan 23 18:00:13.844748 sshd[4735]: Accepted publickey for core from 68.220.241.50 port 59064 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:13.848155 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:13.858056 systemd-logind[1999]: New session 12 of user core. Jan 23 18:00:13.866475 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:00:14.327936 sshd[4738]: Connection closed by 68.220.241.50 port 59064 Jan 23 18:00:14.328830 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:14.336378 systemd[1]: sshd@11-172.31.16.186:22-68.220.241.50:59064.service: Deactivated successfully. Jan 23 18:00:14.342016 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:00:14.344162 systemd-logind[1999]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:00:14.347023 systemd-logind[1999]: Removed session 12. Jan 23 18:00:19.422602 systemd[1]: Started sshd@12-172.31.16.186:22-68.220.241.50:59074.service - OpenSSH per-connection server daemon (68.220.241.50:59074). 
Jan 23 18:00:19.942041 sshd[4770]: Accepted publickey for core from 68.220.241.50 port 59074 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:19.944433 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:19.953732 systemd-logind[1999]: New session 13 of user core. Jan 23 18:00:19.967477 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:00:20.412670 sshd[4773]: Connection closed by 68.220.241.50 port 59074 Jan 23 18:00:20.413687 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:20.421328 systemd-logind[1999]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:00:20.421470 systemd[1]: sshd@12-172.31.16.186:22-68.220.241.50:59074.service: Deactivated successfully. Jan 23 18:00:20.426493 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:00:20.432108 systemd-logind[1999]: Removed session 13. Jan 23 18:00:20.505401 systemd[1]: Started sshd@13-172.31.16.186:22-68.220.241.50:59080.service - OpenSSH per-connection server daemon (68.220.241.50:59080). Jan 23 18:00:21.037690 sshd[4785]: Accepted publickey for core from 68.220.241.50 port 59080 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:21.039881 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:21.049198 systemd-logind[1999]: New session 14 of user core. Jan 23 18:00:21.056424 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:00:21.599572 sshd[4788]: Connection closed by 68.220.241.50 port 59080 Jan 23 18:00:21.598716 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:21.605558 systemd-logind[1999]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:00:21.607148 systemd[1]: sshd@13-172.31.16.186:22-68.220.241.50:59080.service: Deactivated successfully. Jan 23 18:00:21.613030 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:00:21.617075 systemd-logind[1999]: Removed session 14. Jan 23 18:00:21.703519 systemd[1]: Started sshd@14-172.31.16.186:22-68.220.241.50:59088.service - OpenSSH per-connection server daemon (68.220.241.50:59088). Jan 23 18:00:22.270184 sshd[4818]: Accepted publickey for core from 68.220.241.50 port 59088 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:22.273678 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:22.284317 systemd-logind[1999]: New session 15 of user core. Jan 23 18:00:22.293426 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:00:23.667153 sshd[4821]: Connection closed by 68.220.241.50 port 59088 Jan 23 18:00:23.668309 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:23.677426 systemd[1]: sshd@14-172.31.16.186:22-68.220.241.50:59088.service: Deactivated successfully. Jan 23 18:00:23.680890 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:00:23.683310 systemd-logind[1999]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:00:23.687436 systemd-logind[1999]: Removed session 15. Jan 23 18:00:23.757233 systemd[1]: Started sshd@15-172.31.16.186:22-68.220.241.50:60262.service - OpenSSH per-connection server daemon (68.220.241.50:60262). 
Jan 23 18:00:24.287203 sshd[4840]: Accepted publickey for core from 68.220.241.50 port 60262 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:24.289818 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:24.298856 systemd-logind[1999]: New session 16 of user core. Jan 23 18:00:24.305408 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:00:24.996308 sshd[4843]: Connection closed by 68.220.241.50 port 60262 Jan 23 18:00:24.997397 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:25.004142 systemd[1]: sshd@15-172.31.16.186:22-68.220.241.50:60262.service: Deactivated successfully. Jan 23 18:00:25.007533 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:00:25.011245 systemd-logind[1999]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:00:25.014599 systemd-logind[1999]: Removed session 16. Jan 23 18:00:25.091480 systemd[1]: Started sshd@16-172.31.16.186:22-68.220.241.50:60278.service - OpenSSH per-connection server daemon (68.220.241.50:60278). Jan 23 18:00:25.609472 sshd[4853]: Accepted publickey for core from 68.220.241.50 port 60278 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:25.611911 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:25.620433 systemd-logind[1999]: New session 17 of user core. Jan 23 18:00:25.632426 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:00:26.102667 sshd[4856]: Connection closed by 68.220.241.50 port 60278 Jan 23 18:00:26.103678 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:26.113795 systemd[1]: sshd@16-172.31.16.186:22-68.220.241.50:60278.service: Deactivated successfully. Jan 23 18:00:26.119105 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:00:26.124281 systemd-logind[1999]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:00:26.127894 systemd-logind[1999]: Removed session 17. Jan 23 18:00:31.198523 systemd[1]: Started sshd@17-172.31.16.186:22-68.220.241.50:60292.service - OpenSSH per-connection server daemon (68.220.241.50:60292). Jan 23 18:00:31.714498 sshd[4891]: Accepted publickey for core from 68.220.241.50 port 60292 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:31.717284 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:31.728267 systemd-logind[1999]: New session 18 of user core. Jan 23 18:00:31.733479 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:00:32.181154 sshd[4913]: Connection closed by 68.220.241.50 port 60292 Jan 23 18:00:32.181299 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:32.189907 systemd-logind[1999]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:00:32.190404 systemd[1]: sshd@17-172.31.16.186:22-68.220.241.50:60292.service: Deactivated successfully. Jan 23 18:00:32.195903 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:00:32.199960 systemd-logind[1999]: Removed session 18. Jan 23 18:00:37.283866 systemd[1]: Started sshd@18-172.31.16.186:22-68.220.241.50:58388.service - OpenSSH per-connection server daemon (68.220.241.50:58388). 
Jan 23 18:00:37.802199 sshd[4945]: Accepted publickey for core from 68.220.241.50 port 58388 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:37.803965 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:37.812527 systemd-logind[1999]: New session 19 of user core. Jan 23 18:00:37.819431 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:00:38.278800 sshd[4948]: Connection closed by 68.220.241.50 port 58388 Jan 23 18:00:38.279496 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:38.287660 systemd-logind[1999]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:00:38.288200 systemd[1]: sshd@18-172.31.16.186:22-68.220.241.50:58388.service: Deactivated successfully. Jan 23 18:00:38.292653 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:00:38.299762 systemd-logind[1999]: Removed session 19. Jan 23 18:00:52.637655 systemd[1]: cri-containerd-399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c.scope: Deactivated successfully. Jan 23 18:00:52.639151 systemd[1]: cri-containerd-399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c.scope: Consumed 4.602s CPU time, 53.4M memory peak. Jan 23 18:00:52.645149 containerd[2026]: time="2026-01-23T18:00:52.645066792Z" level=info msg="received container exit event container_id:\"399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c\" id:\"399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c\" pid:3109 exit_status:1 exited_at:{seconds:1769191252 nanos:644508104}" Jan 23 18:00:52.693976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c-rootfs.mount: Deactivated successfully. Jan 23 18:00:53.667980 kubelet[3568]: I0123 18:00:53.667808 3568 scope.go:117] "RemoveContainer" containerID="399035c3626c751d1babd893d6f5f432d28464bb7674718fbbef1801064e3f1c" Jan 23 18:00:53.673167 containerd[2026]: time="2026-01-23T18:00:53.672513746Z" level=info msg="CreateContainer within sandbox \"02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 18:00:53.692152 containerd[2026]: time="2026-01-23T18:00:53.691711004Z" level=info msg="Container 71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:53.713372 containerd[2026]: time="2026-01-23T18:00:53.713282916Z" level=info msg="CreateContainer within sandbox \"02231008d986ca54421cd22347fea1b332fee17ac5d2ccbacafcc9235910116c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6\"" Jan 23 18:00:53.714022 containerd[2026]: time="2026-01-23T18:00:53.713953427Z" level=info msg="StartContainer for \"71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6\"" Jan 23 18:00:53.716832 containerd[2026]: time="2026-01-23T18:00:53.716734521Z" level=info msg="connecting to shim 71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6" address="unix:///run/containerd/s/3eed601f7aa901e3c8757d3ee26d38bf4ce733cc921fd17236055694e0a58f78" protocol=ttrpc version=3 Jan 23 18:00:53.762436 systemd[1]: Started cri-containerd-71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6.scope - libcontainer container 71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6. 
Jan 23 18:00:53.864843 containerd[2026]: time="2026-01-23T18:00:53.864708784Z" level=info msg="StartContainer for \"71a87c9f28cb3eb04dd61ca08abdf8ca6c0cfdbbc8b4c8b19b5cdd54888e22b6\" returns successfully" Jan 23 18:00:56.470681 kubelet[3568]: E0123 18:00:56.470595 3568 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-186)" Jan 23 18:00:58.974517 systemd[1]: cri-containerd-ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d.scope: Deactivated successfully. Jan 23 18:00:58.975633 systemd[1]: cri-containerd-ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d.scope: Consumed 3.401s CPU time, 22.2M memory peak. Jan 23 18:00:58.981357 containerd[2026]: time="2026-01-23T18:00:58.981277686Z" level=info msg="received container exit event container_id:\"ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d\" id:\"ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d\" pid:3145 exit_status:1 exited_at:{seconds:1769191258 nanos:980791851}" Jan 23 18:00:59.024133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d-rootfs.mount: Deactivated successfully. Jan 23 18:00:59.698796 kubelet[3568]: I0123 18:00:59.698477 3568 scope.go:117] "RemoveContainer" containerID="ee7c6832d907b69b3e673ef5e688375bc372993f2795f4aecd1875b091408c3d" Jan 23 18:00:59.702541 containerd[2026]: time="2026-01-23T18:00:59.702469299Z" level=info msg="CreateContainer within sandbox \"25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 18:00:59.722226 containerd[2026]: time="2026-01-23T18:00:59.721674817Z" level=info msg="Container ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:59.742319 containerd[2026]: time="2026-01-23T18:00:59.742259524Z" level=info msg="CreateContainer within sandbox \"25ec5ce9a230d3464e46da7abbc496095ccce54c7b4f2773911fedcfc58cd7c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d\"" Jan 23 18:00:59.743386 containerd[2026]: time="2026-01-23T18:00:59.743343210Z" level=info msg="StartContainer for \"ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d\"" Jan 23 18:00:59.745804 containerd[2026]: time="2026-01-23T18:00:59.745755983Z" level=info msg="connecting to shim ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d" address="unix:///run/containerd/s/5dfe1d4a419568627485f1466c651c8183f66ad26440861291d4deebaa68e9ff" protocol=ttrpc version=3 Jan 23 18:00:59.791676 systemd[1]: Started cri-containerd-ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d.scope - libcontainer container ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d. 
Jan 23 18:00:59.878970 containerd[2026]: time="2026-01-23T18:00:59.878914350Z" level=info msg="StartContainer for \"ed242d5a206b83cbea23d1abe56651eccaed28e7dcf5ab7a630a6294fedd0e9d\" returns successfully" Jan 23 18:01:06.471149 kubelet[3568]: E0123 18:01:06.470913 3568 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-186?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
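The lease-update errors above are the kubelet's node heartbeat failing against the API server at 172.31.16.186:6443, around the same time the kube-controller-manager and kube-scheduler containers were restarted (Attempt:1). Assuming administrative kubectl access from another machine (not shown in this log), the heartbeat object itself could be inspected directly, for example:

    kubectl -n kube-node-lease get lease ip-172-31-16-186 -o yaml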