Jan 16 23:58:57.246368 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 16 23:58:57.246414 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:58:57.246438 kernel: KASLR disabled due to lack of seed
Jan 16 23:58:57.246455 kernel: efi: EFI v2.7 by EDK II
Jan 16 23:58:57.246472 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 16 23:58:57.246487 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:58:57.246505 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 16 23:58:57.246521 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 16 23:58:57.246537 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 16 23:58:57.246553 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 16 23:58:57.246574 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 16 23:58:57.246589 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 16 23:58:57.246606 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 16 23:58:57.246622 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 16 23:58:57.246640 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 16 23:58:57.246662 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 16 23:58:57.246680 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 16 23:58:57.246697 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 16 23:58:57.246713 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 16 23:58:57.246730 kernel: printk: bootconsole [uart0] enabled
Jan 16 23:58:57.246746 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:58:57.246763 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:58:57.246780 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 16 23:58:57.246796 kernel: Zone ranges:
Jan 16 23:58:57.246812 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:58:57.246829 kernel:   DMA32    empty
Jan 16 23:58:57.246850 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 16 23:58:57.246866 kernel: Movable zone start for each node
Jan 16 23:58:57.246882 kernel: Early memory node ranges
Jan 16 23:58:57.246899 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 16 23:58:57.246915 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 16 23:58:57.246932 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 16 23:58:57.246948 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 16 23:58:57.246965 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 16 23:58:57.246981 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 16 23:58:57.246998 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 16 23:58:57.247014 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 16 23:58:57.247030 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:58:57.247051 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 16 23:58:57.247068 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:58:57.247092 kernel: psci: PSCIv1.0 detected in firmware.
Jan 16 23:58:57.247109 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:58:57.247127 kernel: psci: Trusted OS migration not required
Jan 16 23:58:57.247149 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:58:57.247167 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 16 23:58:57.247185 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:58:57.248321 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:58:57.248352 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:58:57.248370 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:58:57.248389 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:58:57.248407 kernel: CPU features: detected: Spectre-v2
Jan 16 23:58:57.248424 kernel: CPU features: detected: Spectre-v3a
Jan 16 23:58:57.248442 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:58:57.248459 kernel: CPU features: detected: ARM erratum 1742098
Jan 16 23:58:57.248484 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 16 23:58:57.248502 kernel: alternatives: applying boot alternatives
Jan 16 23:58:57.248522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:58:57.248540 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:58:57.248558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:58:57.248575 kernel: Fallback order for Node 0: 0
Jan 16 23:58:57.248593 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 16 23:58:57.248610 kernel: Policy zone: Normal
Jan 16 23:58:57.248628 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:58:57.248645 kernel: software IO TLB: area num 2.
Jan 16 23:58:57.248662 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 16 23:58:57.248685 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 16 23:58:57.248703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:58:57.248721 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:58:57.248739 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:58:57.248757 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:58:57.248775 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:58:57.248792 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:58:57.248810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 23:58:57.248827 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:58:57.248845 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:58:57.248862 kernel: GICv3: 96 SPIs implemented
Jan 16 23:58:57.248883 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:58:57.248901 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:58:57.248917 kernel: GICv3: GICv3 features: 16 PPIs
Jan 16 23:58:57.248935 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 16 23:58:57.248952 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 16 23:58:57.248969 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:58:57.248987 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:58:57.249005 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 16 23:58:57.249022 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 16 23:58:57.249039 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 16 23:58:57.249057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:58:57.249074 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 16 23:58:57.249096 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 16 23:58:57.249114 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 16 23:58:57.249132 kernel: Console: colour dummy device 80x25
Jan 16 23:58:57.249150 kernel: printk: console [tty1] enabled
Jan 16 23:58:57.249168 kernel: ACPI: Core revision 20230628
Jan 16 23:58:57.249186 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 16 23:58:57.250289 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:58:57.250316 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:58:57.250348 kernel: landlock: Up and running.
Jan 16 23:58:57.250390 kernel: SELinux:  Initializing.
Jan 16 23:58:57.250411 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:58:57.250429 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:58:57.250448 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:58:57.250467 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:58:57.250485 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:58:57.250505 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 16 23:58:57.250523 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 16 23:58:57.250541 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 16 23:58:57.250566 kernel: Remapping and enabling EFI services.
Jan 16 23:58:57.250584 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:58:57.250602 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:58:57.250619 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 16 23:58:57.250637 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 16 23:58:57.250655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 16 23:58:57.250672 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:58:57.250690 kernel: SMP: Total of 2 processors activated.
Jan 16 23:58:57.250708 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:58:57.250730 kernel: CPU features: detected: 32-bit EL1 Support
Jan 16 23:58:57.250748 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:58:57.250766 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:58:57.250797 kernel: alternatives: applying system-wide alternatives
Jan 16 23:58:57.250820 kernel: devtmpfs: initialized
Jan 16 23:58:57.250839 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:58:57.250857 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:58:57.250876 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:58:57.250895 kernel: SMBIOS 3.0.0 present.
Jan 16 23:58:57.250917 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 16 23:58:57.250936 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:58:57.250955 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:58:57.250973 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:58:57.250992 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:58:57.251011 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:58:57.251029 kernel: audit: type=2000 audit(0.284:1): state=initialized audit_enabled=0 res=1
Jan 16 23:58:57.251047 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:58:57.251070 kernel: cpuidle: using governor menu
Jan 16 23:58:57.251089 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:58:57.251108 kernel: ASID allocator initialised with 65536 entries
Jan 16 23:58:57.251126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:58:57.251144 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:58:57.251163 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 16 23:58:57.251181 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:58:57.251239 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:58:57.251264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:58:57.251289 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:58:57.251308 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:58:57.251327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:58:57.251345 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:58:57.251364 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:58:57.251383 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:58:57.251401 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:58:57.251419 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:58:57.251438 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:58:57.251461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:58:57.251480 kernel: ACPI: Interpreter enabled
Jan 16 23:58:57.251498 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:58:57.251516 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:58:57.251535 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 16 23:58:57.251857 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:58:57.252072 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:58:57.253409 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:58:57.253665 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 16 23:58:57.253910 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 16 23:58:57.253938 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Jan 16 23:58:57.253958 kernel: acpiphp: Slot [1] registered
Jan 16 23:58:57.253980 kernel: acpiphp: Slot [2] registered
Jan 16 23:58:57.253999 kernel: acpiphp: Slot [3] registered
Jan 16 23:58:57.254018 kernel: acpiphp: Slot [4] registered
Jan 16 23:58:57.254037 kernel: acpiphp: Slot [5] registered
Jan 16 23:58:57.254066 kernel: acpiphp: Slot [6] registered
Jan 16 23:58:57.254085 kernel: acpiphp: Slot [7] registered
Jan 16 23:58:57.254104 kernel: acpiphp: Slot [8] registered
Jan 16 23:58:57.254123 kernel: acpiphp: Slot [9] registered
Jan 16 23:58:57.254143 kernel: acpiphp: Slot [10] registered
Jan 16 23:58:57.254162 kernel: acpiphp: Slot [11] registered
Jan 16 23:58:57.254223 kernel: acpiphp: Slot [12] registered
Jan 16 23:58:57.254250 kernel: acpiphp: Slot [13] registered
Jan 16 23:58:57.254270 kernel: acpiphp: Slot [14] registered
Jan 16 23:58:57.254288 kernel: acpiphp: Slot [15] registered
Jan 16 23:58:57.254314 kernel: acpiphp: Slot [16] registered
Jan 16 23:58:57.254332 kernel: acpiphp: Slot [17] registered
Jan 16 23:58:57.254351 kernel: acpiphp: Slot [18] registered
Jan 16 23:58:57.254370 kernel: acpiphp: Slot [19] registered
Jan 16 23:58:57.254388 kernel: acpiphp: Slot [20] registered
Jan 16 23:58:57.254407 kernel: acpiphp: Slot [21] registered
Jan 16 23:58:57.254425 kernel: acpiphp: Slot [22] registered
Jan 16 23:58:57.254443 kernel: acpiphp: Slot [23] registered
Jan 16 23:58:57.254462 kernel: acpiphp: Slot [24] registered
Jan 16 23:58:57.254485 kernel: acpiphp: Slot [25] registered
Jan 16 23:58:57.254503 kernel: acpiphp: Slot [26] registered
Jan 16 23:58:57.254521 kernel: acpiphp: Slot [27] registered
Jan 16 23:58:57.254540 kernel: acpiphp: Slot [28] registered
Jan 16 23:58:57.254558 kernel: acpiphp: Slot [29] registered
Jan 16 23:58:57.254576 kernel: acpiphp: Slot [30] registered
Jan 16 23:58:57.254595 kernel: acpiphp: Slot [31] registered
Jan 16 23:58:57.254613 kernel: PCI host bridge to bus 0000:00
Jan 16 23:58:57.254889 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 16 23:58:57.255095 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 16 23:58:57.256530 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:58:57.256817 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 16 23:58:57.257098 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 16 23:58:57.257413 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 16 23:58:57.257666 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 16 23:58:57.257938 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 16 23:58:57.258152 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 16 23:58:57.259508 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:58:57.259768 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 16 23:58:57.259981 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 16 23:58:57.260192 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 16 23:58:57.263423 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 16 23:58:57.263701 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:58:57.263895 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 16 23:58:57.264078 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 16 23:58:57.266366 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:58:57.266411 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:58:57.266431 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:58:57.266460 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:58:57.266487 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:58:57.266517 kernel: iommu: Default domain type: Translated
Jan 16 23:58:57.266536 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:58:57.266555 kernel: efivars: Registered efivars operations
Jan 16 23:58:57.266573 kernel: vgaarb: loaded
Jan 16 23:58:57.266592 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:58:57.266611 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:58:57.266630 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:58:57.266649 kernel: pnp: PnP ACPI init
Jan 16 23:58:57.266897 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 16 23:58:57.266931 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:58:57.266950 kernel: NET: Registered PF_INET protocol family
Jan 16 23:58:57.266969 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:58:57.266988 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:58:57.267007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:58:57.267026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:58:57.267044 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:58:57.267063 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:58:57.267086 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:58:57.267105 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:58:57.267124 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:58:57.267142 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:58:57.267160 kernel: kvm [1]: HYP mode not available
Jan 16 23:58:57.267179 kernel: Initialise system trusted keyrings
Jan 16 23:58:57.267213 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:58:57.267239 kernel: Key type asymmetric registered
Jan 16 23:58:57.267259 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:58:57.267284 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:58:57.267304 kernel: io scheduler mq-deadline registered
Jan 16 23:58:57.267325 kernel: io scheduler kyber registered
Jan 16 23:58:57.267344 kernel: io scheduler bfq registered
Jan 16 23:58:57.267572 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 16 23:58:57.267602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:58:57.267622 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:58:57.267641 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 16 23:58:57.267660 kernel: ACPI: button: Sleep Button [SLPB]
Jan 16 23:58:57.267686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:58:57.267706 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:58:57.268041 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 16 23:58:57.268073 kernel: printk: console [ttyS0] disabled
Jan 16 23:58:57.268093 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 16 23:58:57.268112 kernel: printk: console [ttyS0] enabled
Jan 16 23:58:57.268131 kernel: printk: bootconsole [uart0] disabled
Jan 16 23:58:57.268150 kernel: thunder_xcv, ver 1.0
Jan 16 23:58:57.268168 kernel: thunder_bgx, ver 1.0
Jan 16 23:58:57.268194 kernel: nicpf, ver 1.0
Jan 16 23:58:57.268256 kernel: nicvf, ver 1.0
Jan 16 23:58:57.268488 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:58:57.268697 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:58:56 UTC (1768607936)
Jan 16 23:58:57.268725 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:58:57.268745 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 16 23:58:57.268764 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:58:57.268783 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:58:57.268810 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:58:57.268828 kernel: Segment Routing with IPv6
Jan 16 23:58:57.268847 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:58:57.268865 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:58:57.268884 kernel: Key type dns_resolver registered
Jan 16 23:58:57.268903 kernel: registered taskstats version 1
Jan 16 23:58:57.268921 kernel: Loading compiled-in X.509 certificates
Jan 16 23:58:57.268940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:58:57.268959 kernel: Key type .fscrypt registered
Jan 16 23:58:57.268984 kernel: Key type fscrypt-provisioning registered
Jan 16 23:58:57.269003 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:58:57.269022 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:58:57.269041 kernel: ima: No architecture policies found
Jan 16 23:58:57.269061 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:58:57.269080 kernel: clk: Disabling unused clocks
Jan 16 23:58:57.269100 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:58:57.269119 kernel: Run /init as init process
Jan 16 23:58:57.269138 kernel:   with arguments:
Jan 16 23:58:57.269161 kernel:     /init
Jan 16 23:58:57.269180 kernel:   with environment:
Jan 16 23:58:57.269236 kernel:     HOME=/
Jan 16 23:58:57.269262 kernel:     TERM=linux
Jan 16 23:58:57.269286 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:58:57.269309 systemd[1]: Detected virtualization amazon.
Jan 16 23:58:57.269330 systemd[1]: Detected architecture arm64.
Jan 16 23:58:57.269351 systemd[1]: Running in initrd.
Jan 16 23:58:57.269379 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:58:57.269399 systemd[1]: Hostname set to .
Jan 16 23:58:57.269422 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:58:57.269443 systemd[1]: Queued start job for default target initrd.target.
Jan 16 23:58:57.269464 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:58:57.269486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:58:57.269508 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:58:57.269530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:58:57.269557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:58:57.269579 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:58:57.269604 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:58:57.269625 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:58:57.269646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:58:57.269666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:58:57.269692 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:58:57.269713 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:58:57.269733 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:58:57.269754 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:58:57.269774 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:58:57.269795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:58:57.269816 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:58:57.269860 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:58:57.269882 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:58:57.269909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:58:57.269931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:58:57.269951 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:58:57.269972 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:58:57.269992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:58:57.270013 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:58:57.270033 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:58:57.270053 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:58:57.270074 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:58:57.270100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:58:57.270120 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:58:57.270141 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:58:57.270161 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:58:57.270495 systemd-journald[251]: Collecting audit messages is disabled.
Jan 16 23:58:57.270555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:58:57.270577 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:58:57.270598 systemd-journald[251]: Journal started
Jan 16 23:58:57.270640 systemd-journald[251]: Runtime Journal (/run/log/journal/ec258c455257bf177689c7fdfe7c2c50) is 8.0M, max 75.3M, 67.3M free.
Jan 16 23:58:57.230530 systemd-modules-load[252]: Inserted module 'overlay'
Jan 16 23:58:57.281262 kernel: Bridge firewalling registered
Jan 16 23:58:57.281329 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:58:57.280662 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 16 23:58:57.286280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:58:57.302627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:58:57.308626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:58:57.312929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:58:57.328697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:58:57.340444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:58:57.354481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:58:57.361918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:58:57.385744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:58:57.403067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:58:57.410339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:58:57.418548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:58:57.431516 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:58:57.473170 dracut-cmdline[292]: dracut-dracut-053
Jan 16 23:58:57.480271 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:58:57.513518 systemd-resolved[283]: Positive Trust Anchors:
Jan 16 23:58:57.513546 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:58:57.513608 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:58:57.676293 kernel: SCSI subsystem initialized
Jan 16 23:58:57.684323 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:58:57.697332 kernel: iscsi: registered transport (tcp)
Jan 16 23:58:57.719965 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:58:57.720038 kernel: QLogic iSCSI HBA Driver
Jan 16 23:58:57.780275 kernel: random: crng init done
Jan 16 23:58:57.780655 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jan 16 23:58:57.786838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:58:57.799235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:58:57.810301 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:58:57.819444 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:58:57.857093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:58:57.857170 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:58:57.859244 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:58:57.925247 kernel: raid6: neonx8 gen() 6695 MB/s
Jan 16 23:58:57.942236 kernel: raid6: neonx4 gen() 6568 MB/s
Jan 16 23:58:57.959250 kernel: raid6: neonx2 gen() 5456 MB/s
Jan 16 23:58:57.976239 kernel: raid6: neonx1 gen() 3965 MB/s
Jan 16 23:58:57.993236 kernel: raid6: int64x8 gen() 3722 MB/s
Jan 16 23:58:58.010236 kernel: raid6: int64x4 gen() 3692 MB/s
Jan 16 23:58:58.027236 kernel: raid6: int64x2 gen() 3599 MB/s
Jan 16 23:58:58.045288 kernel: raid6: int64x1 gen() 2752 MB/s
Jan 16 23:58:58.045328 kernel: raid6: using algorithm neonx8 gen() 6695 MB/s
Jan 16 23:58:58.064304 kernel: raid6: .... xor() 4778 MB/s, rmw enabled
Jan 16 23:58:58.064379 kernel: raid6: using neon recovery algorithm
Jan 16 23:58:58.072239 kernel: xor: measuring software checksum speed
Jan 16 23:58:58.074649 kernel: 8regs : 10253 MB/sec
Jan 16 23:58:58.074682 kernel: 32regs : 11915 MB/sec
Jan 16 23:58:58.075944 kernel: arm64_neon : 9550 MB/sec
Jan 16 23:58:58.075986 kernel: xor: using function: 32regs (11915 MB/sec)
Jan 16 23:58:58.162256 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:58:58.181452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:58:58.195545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:58:58.246057 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 16 23:58:58.254063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:58:58.271511 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 23:58:58.308662 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jan 16 23:58:58.366101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:58:58.377533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:58:58.501322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:58:58.519518 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:58:58.577791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:58:58.586187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:58:58.594048 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:58:58.596997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:58:58.609550 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:58:58.662927 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:58:58.709376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:58:58.709515 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:58:58.725002 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:58:58.725054 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 16 23:58:58.717058 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:58:58.721987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:58:58.724987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:58:58.746589 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 16 23:58:58.746900 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 16 23:58:58.727756 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:58:58.750899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:58:58.764243 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:40:57:b9:05:c9
Jan 16 23:58:58.766677 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:58:58.791270 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:58:58.793238 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 16 23:58:58.799321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:58:58.810249 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 16 23:58:58.813506 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:58:58.829865 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:58:58.829947 kernel: GPT:9289727 != 33554431
Jan 16 23:58:58.829973 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:58:58.831185 kernel: GPT:9289727 != 33554431
Jan 16 23:58:58.831904 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:58:58.834275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:58:58.850900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:58:58.938308 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (539)
Jan 16 23:58:58.968901 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Jan 16 23:58:59.032954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 16 23:58:59.054491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 16 23:58:59.095084 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 16 23:58:59.111601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 16 23:58:59.114398 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 16 23:58:59.134572 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 23:58:59.148668 disk-uuid[668]: Primary Header is updated.
Jan 16 23:58:59.148668 disk-uuid[668]: Secondary Entries is updated.
Jan 16 23:58:59.148668 disk-uuid[668]: Secondary Header is updated.
Jan 16 23:58:59.160278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:58:59.168241 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:58:59.176242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:00.180230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:00.181617 disk-uuid[669]: The operation has completed successfully.
Jan 16 23:59:00.360423 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 23:59:00.360670 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 23:59:00.422515 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 23:59:00.445239 sh[1013]: Success
Jan 16 23:59:00.470263 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 16 23:59:00.583061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 23:59:00.599408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 23:59:00.612747 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 23:59:00.642919 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 16 23:59:00.642984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:00.643010 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 23:59:00.644482 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 23:59:00.645783 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 23:59:00.725224 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 16 23:59:00.748097 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 23:59:00.755923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 23:59:00.765559 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 23:59:00.774085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 23:59:00.806506 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:00.806587 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:00.806614 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:00.822239 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:00.842972 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 23:59:00.846285 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:00.858263 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 23:59:00.872547 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 23:59:00.971868 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:59:00.985521 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:59:01.044801 systemd-networkd[1217]: lo: Link UP
Jan 16 23:59:01.044821 systemd-networkd[1217]: lo: Gained carrier
Jan 16 23:59:01.050572 systemd-networkd[1217]: Enumeration completed
Jan 16 23:59:01.050746 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:59:01.053970 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:01.053978 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:59:01.056018 systemd[1]: Reached target network.target - Network.
Jan 16 23:59:01.067439 systemd-networkd[1217]: eth0: Link UP
Jan 16 23:59:01.067446 systemd-networkd[1217]: eth0: Gained carrier
Jan 16 23:59:01.067463 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:01.095304 systemd-networkd[1217]: eth0: DHCPv4 address 172.31.29.179/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 16 23:59:01.343486 ignition[1141]: Ignition 2.19.0
Jan 16 23:59:01.343514 ignition[1141]: Stage: fetch-offline
Jan 16 23:59:01.347789 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:01.347829 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:01.350332 ignition[1141]: Ignition finished successfully
Jan 16 23:59:01.352339 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:59:01.376846 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 23:59:01.402521 ignition[1228]: Ignition 2.19.0
Jan 16 23:59:01.402541 ignition[1228]: Stage: fetch
Jan 16 23:59:01.403143 ignition[1228]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:01.403168 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:01.403859 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:01.432742 ignition[1228]: PUT result: OK
Jan 16 23:59:01.436401 ignition[1228]: parsed url from cmdline: ""
Jan 16 23:59:01.436423 ignition[1228]: no config URL provided
Jan 16 23:59:01.436439 ignition[1228]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:59:01.436466 ignition[1228]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:59:01.436510 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:01.438731 ignition[1228]: PUT result: OK
Jan 16 23:59:01.438807 ignition[1228]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 16 23:59:01.446725 ignition[1228]: GET result: OK
Jan 16 23:59:01.449281 ignition[1228]: parsing config with SHA512: 04776f4de5f9ccc95aa4fc3d5d26495f2b64bc66a12d586a2270e817fd4ad1d22d461361ef03d6304a6d2e0fea73cf77261621e39bf59cc7cdeb3c84106ed9cb
Jan 16 23:59:01.458074 unknown[1228]: fetched base config from "system"
Jan 16 23:59:01.459031 unknown[1228]: fetched base config from "system"
Jan 16 23:59:01.459762 ignition[1228]: fetch: fetch complete
Jan 16 23:59:01.459046 unknown[1228]: fetched user config from "aws"
Jan 16 23:59:01.459773 ignition[1228]: fetch: fetch passed
Jan 16 23:59:01.459861 ignition[1228]: Ignition finished successfully
Jan 16 23:59:01.473259 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 23:59:01.481585 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 23:59:01.509089 ignition[1234]: Ignition 2.19.0
Jan 16 23:59:01.509119 ignition[1234]: Stage: kargs
Jan 16 23:59:01.509865 ignition[1234]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:01.509894 ignition[1234]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:01.510061 ignition[1234]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:01.512075 ignition[1234]: PUT result: OK
Jan 16 23:59:01.518855 ignition[1234]: kargs: kargs passed
Jan 16 23:59:01.518974 ignition[1234]: Ignition finished successfully
Jan 16 23:59:01.533270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 23:59:01.551680 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 23:59:01.580496 ignition[1241]: Ignition 2.19.0
Jan 16 23:59:01.580525 ignition[1241]: Stage: disks
Jan 16 23:59:01.582486 ignition[1241]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:01.582520 ignition[1241]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:01.582744 ignition[1241]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:01.589673 ignition[1241]: PUT result: OK
Jan 16 23:59:01.596414 ignition[1241]: disks: disks passed
Jan 16 23:59:01.596792 ignition[1241]: Ignition finished successfully
Jan 16 23:59:01.604860 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 23:59:01.611039 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 23:59:01.616116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:59:01.619602 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:59:01.621904 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:59:01.624270 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:59:01.640535 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 23:59:01.687424 systemd-fsck[1250]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 23:59:01.691700 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 23:59:01.704662 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 23:59:01.791248 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 16 23:59:01.793867 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 23:59:01.797627 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:59:01.811511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:01.824029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 23:59:01.833353 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 16 23:59:01.853093 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1269)
Jan 16 23:59:01.853139 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:01.853189 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:01.853247 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:01.833440 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 23:59:01.833492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:59:01.845112 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 23:59:01.868632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 23:59:01.880247 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:01.882780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:02.343428 initrd-setup-root[1293]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 23:59:02.366375 initrd-setup-root[1300]: cut: /sysroot/etc/group: No such file or directory
Jan 16 23:59:02.375476 initrd-setup-root[1307]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 23:59:02.386249 initrd-setup-root[1314]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 23:59:02.815423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 23:59:02.825418 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 23:59:02.829524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 23:59:02.858703 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 23:59:02.863132 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:02.900666 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 23:59:02.902922 systemd-networkd[1217]: eth0: Gained IPv6LL
Jan 16 23:59:02.914877 ignition[1381]: INFO : Ignition 2.19.0
Jan 16 23:59:02.917248 ignition[1381]: INFO : Stage: mount
Jan 16 23:59:02.917248 ignition[1381]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:02.917248 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:02.917248 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:02.927527 ignition[1381]: INFO : PUT result: OK
Jan 16 23:59:02.930894 ignition[1381]: INFO : mount: mount passed
Jan 16 23:59:02.930894 ignition[1381]: INFO : Ignition finished successfully
Jan 16 23:59:02.937550 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 23:59:02.949584 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 23:59:02.968239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:02.997225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1393)
Jan 16 23:59:03.001041 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:03.001079 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:03.002406 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:03.008243 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:03.011946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:03.058718 ignition[1410]: INFO : Ignition 2.19.0
Jan 16 23:59:03.058718 ignition[1410]: INFO : Stage: files
Jan 16 23:59:03.064833 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:03.064833 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:03.064833 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:03.064833 ignition[1410]: INFO : PUT result: OK
Jan 16 23:59:03.075440 ignition[1410]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 23:59:03.078669 ignition[1410]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 23:59:03.078669 ignition[1410]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 23:59:03.100053 ignition[1410]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 23:59:03.103430 ignition[1410]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 23:59:03.106875 unknown[1410]: wrote ssh authorized keys file for user: core
Jan 16 23:59:03.110252 ignition[1410]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 23:59:03.114180 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 16 23:59:03.114180 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 16 23:59:03.209603 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 23:59:03.366290 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 16 23:59:03.366290 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 23:59:03.366290 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 16 23:59:03.379052 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 16 23:59:04.019322 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 16 23:59:04.424851 ignition[1410]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:59:04.430138 ignition[1410]: INFO : files: files passed
Jan 16 23:59:04.430138 ignition[1410]: INFO : Ignition finished successfully
Jan 16 23:59:04.464160 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 23:59:04.477646 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 23:59:04.487591 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 23:59:04.490855 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 23:59:04.491048 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 23:59:04.531866 initrd-setup-root-after-ignition[1438]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:04.536340 initrd-setup-root-after-ignition[1438]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:04.539879 initrd-setup-root-after-ignition[1442]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:04.546584 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:59:04.550659 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 23:59:04.564539 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 23:59:04.631570 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 23:59:04.631754 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 23:59:04.635358 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 23:59:04.639241 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 23:59:04.641654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 23:59:04.658544 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 23:59:04.684061 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:59:04.696563 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 23:59:04.723767 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:59:04.724274 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:04.725122 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 23:59:04.725891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 23:59:04.726225 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:59:04.727513 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 23:59:04.728361 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 23:59:04.729172 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 23:59:04.730540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:59:04.730912 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 23:59:04.731305 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 23:59:04.731661 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:59:04.732050 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 23:59:04.732436 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 23:59:04.732783 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 23:59:04.733066 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 23:59:04.733376 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:59:04.734517 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:04.735452 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:04.736148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 23:59:04.766881 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:04.794529 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 23:59:04.794841 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:59:04.813989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 23:59:04.816546 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:59:04.824961 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 23:59:04.825172 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 23:59:04.859648 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 23:59:04.866793 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 23:59:04.870335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 23:59:04.870614 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:04.873603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 23:59:04.873892 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:59:04.895395 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 23:59:04.900658 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 23:59:04.917368 ignition[1462]: INFO : Ignition 2.19.0
Jan 16 23:59:04.920653 ignition[1462]: INFO : Stage: umount
Jan 16 23:59:04.920653 ignition[1462]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:04.920653 ignition[1462]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:04.920653 ignition[1462]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:04.932090 ignition[1462]: INFO : PUT result: OK
Jan 16 23:59:04.937995 ignition[1462]: INFO : umount: umount passed
Jan 16 23:59:04.937995 ignition[1462]: INFO : Ignition finished successfully
Jan 16 23:59:04.945512 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 23:59:04.946056 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 23:59:04.954240 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 23:59:04.954956 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 23:59:04.955038 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 23:59:04.957815 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 23:59:04.959589 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 23:59:04.966791 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 23:59:04.967900 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 23:59:04.971517 systemd[1]: Stopped target network.target - Network.
Jan 16 23:59:04.975944 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 23:59:04.976065 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:59:04.980908 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 23:59:04.988412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 23:59:04.992693 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:04.995514 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 23:59:04.997563 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 23:59:05.000069 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 23:59:05.000152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:59:05.003013 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 23:59:05.003084 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:59:05.031335 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 23:59:05.031444 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 23:59:05.033820 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 23:59:05.033907 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 23:59:05.036683 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 23:59:05.041441 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:05.049265 systemd-networkd[1217]: eth0: DHCPv6 lease lost
Jan 16 23:59:05.061470 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 23:59:05.063706 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:05.075364 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 23:59:05.077883 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 23:59:05.083625 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 23:59:05.083819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 23:59:05.088458 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 23:59:05.088968 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:05.092959 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 23:59:05.093061 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 23:59:05.113527 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 23:59:05.116139 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 23:59:05.116274 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:59:05.126580 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 23:59:05.126690 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:05.129228 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 23:59:05.129314 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:05.131900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 23:59:05.131997 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:05.135883 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:05.169480 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 23:59:05.170005 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:05.179731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 23:59:05.179872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:05.182787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 23:59:05.182869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:05.185420 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 23:59:05.190049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:59:05.201587 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 23:59:05.201691 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:59:05.204985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:59:05.205069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:05.225599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 23:59:05.228391 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 23:59:05.228510 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:05.239726 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 23:59:05.239836 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:59:05.244375 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 23:59:05.244474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:05.247493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:59:05.247600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:05.251410 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 23:59:05.251606 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 23:59:05.261850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 23:59:05.262093 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 23:59:05.267731 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 23:59:05.285768 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 23:59:05.325519 systemd[1]: Switching root.
Jan 16 23:59:05.371242 systemd-journald[251]: Journal stopped
Jan 16 23:59:07.820432 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 16 23:59:07.820574 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 23:59:07.820620 kernel: SELinux: policy capability open_perms=1
Jan 16 23:59:07.820651 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 23:59:07.820683 kernel: SELinux: policy capability always_check_network=0
Jan 16 23:59:07.820718 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 23:59:07.820749 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 23:59:07.820778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 23:59:07.820810 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 23:59:07.820837 kernel: audit: type=1403 audit(1768607945.889:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 23:59:07.820880 systemd[1]: Successfully loaded SELinux policy in 83.445ms.
Jan 16 23:59:07.820931 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.225ms.
Jan 16 23:59:07.820967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:59:07.821001 systemd[1]: Detected virtualization amazon.
Jan 16 23:59:07.821036 systemd[1]: Detected architecture arm64.
Jan 16 23:59:07.821068 systemd[1]: Detected first boot.
Jan 16 23:59:07.821102 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:59:07.821135 zram_generator::config[1504]: No configuration found.
Jan 16 23:59:07.821178 systemd[1]: Populated /etc with preset unit settings.
Jan 16 23:59:07.821234 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 23:59:07.821271 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 23:59:07.821306 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 23:59:07.821345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 23:59:07.821379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 23:59:07.821413 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 23:59:07.821447 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 23:59:07.821481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 23:59:07.821514 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 23:59:07.821546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 23:59:07.821578 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 23:59:07.821611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:07.821648 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:07.821682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 23:59:07.821715 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 23:59:07.821749 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 23:59:07.821834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:59:07.821874 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 23:59:07.821907 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:07.821948 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 23:59:07.821978 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 23:59:07.822014 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:59:07.822044 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 23:59:07.822077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:07.822109 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:59:07.822142 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:59:07.822179 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:59:07.822420 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 23:59:07.822461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 23:59:07.822499 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:07.822530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:07.822561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:07.822593 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 23:59:07.822623 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 23:59:07.822665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 23:59:07.822695 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 23:59:07.822726 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 23:59:07.822756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 23:59:07.822790 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 23:59:07.822823 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 23:59:07.822854 systemd[1]: Reached target machines.target - Containers.
Jan 16 23:59:07.822884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 23:59:07.822914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:07.822943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:59:07.822976 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 23:59:07.823007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:07.823044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:59:07.823078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:59:07.823111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 23:59:07.823141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:07.823171 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 23:59:07.823220 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 23:59:07.823259 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 23:59:07.823290 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 23:59:07.823326 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 23:59:07.823357 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:59:07.823387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:59:07.823419 kernel: ACPI: bus type drm_connector registered
Jan 16 23:59:07.823455 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 23:59:07.823487 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 23:59:07.823519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:59:07.823550 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 16 23:59:07.823583 systemd[1]: Stopped verity-setup.service.
Jan 16 23:59:07.823612 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 23:59:07.823647 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 23:59:07.823676 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 23:59:07.823706 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 23:59:07.823737 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 23:59:07.823769 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 23:59:07.823802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:07.823837 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 23:59:07.823866 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 23:59:07.823897 kernel: fuse: init (API version 7.39)
Jan 16 23:59:07.823928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:07.823958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:07.823987 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 23:59:07.824018 kernel: loop: module loaded
Jan 16 23:59:07.824051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 23:59:07.824083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:59:07.824118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:59:07.824147 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 23:59:07.824180 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 23:59:07.824240 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:07.824278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:07.824309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:07.824339 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 23:59:07.824371 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 23:59:07.824400 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 23:59:07.824430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 23:59:07.824461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:59:07.824538 systemd-journald[1582]: Collecting audit messages is disabled.
Jan 16 23:59:07.824594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:59:07.824626 systemd-journald[1582]: Journal started
Jan 16 23:59:07.824675 systemd-journald[1582]: Runtime Journal (/run/log/journal/ec258c455257bf177689c7fdfe7c2c50) is 8.0M, max 75.3M, 67.3M free.
Jan 16 23:59:07.125733 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 23:59:07.177536 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 16 23:59:07.178361 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 23:59:07.835025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:59:07.843614 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:59:07.850366 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 23:59:07.856999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 23:59:07.862381 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 23:59:07.901953 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 23:59:07.917836 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 23:59:07.917917 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:59:07.929187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 23:59:07.941492 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 23:59:07.950011 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 16 23:59:07.950044 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 16 23:59:07.955535 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 23:59:07.958327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:07.970054 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 23:59:07.983586 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 23:59:07.988515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 23:59:07.992019 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 23:59:08.002874 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 23:59:08.009557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:08.013933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:59:08.032142 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 23:59:08.036095 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 23:59:08.070017 systemd-journald[1582]: Time spent on flushing to /var/log/journal/ec258c455257bf177689c7fdfe7c2c50 is 86.932ms for 906 entries.
Jan 16 23:59:08.070017 systemd-journald[1582]: System Journal (/var/log/journal/ec258c455257bf177689c7fdfe7c2c50) is 8.0M, max 195.6M, 187.6M free.
Jan 16 23:59:08.181492 systemd-journald[1582]: Received client request to flush runtime journal.
Jan 16 23:59:08.181561 kernel: loop0: detected capacity change from 0 to 114328
Jan 16 23:59:08.087343 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 23:59:08.090743 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 23:59:08.098579 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 23:59:08.185371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:08.189088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 23:59:08.199562 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 23:59:08.209392 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 23:59:08.210919 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 23:59:08.222699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:59:08.252233 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 23:59:08.270062 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 23:59:08.293258 kernel: loop1: detected capacity change from 0 to 52536
Jan 16 23:59:08.316013 udevadm[1656]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 16 23:59:08.317122 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Jan 16 23:59:08.317147 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Jan 16 23:59:08.328739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:08.363749 kernel: loop2: detected capacity change from 0 to 211168
Jan 16 23:59:08.424362 kernel: loop3: detected capacity change from 0 to 114432
Jan 16 23:59:08.560409 kernel: loop4: detected capacity change from 0 to 114328
Jan 16 23:59:08.579264 kernel: loop5: detected capacity change from 0 to 52536
Jan 16 23:59:08.598240 kernel: loop6: detected capacity change from 0 to 211168
Jan 16 23:59:08.631240 kernel: loop7: detected capacity change from 0 to 114432
Jan 16 23:59:08.641362 (sd-merge)[1661]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 16 23:59:08.642421 (sd-merge)[1661]: Merged extensions into '/usr'.
Jan 16 23:59:08.650672 systemd[1]: Reloading requested from client PID 1639 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 23:59:08.650714 systemd[1]: Reloading...
Jan 16 23:59:08.828262 zram_generator::config[1684]: No configuration found.
Jan 16 23:59:09.122139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:59:09.245170 systemd[1]: Reloading finished in 593 ms.
Jan 16 23:59:09.291195 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 23:59:09.295583 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 23:59:09.309483 systemd[1]: Starting ensure-sysext.service...
Jan 16 23:59:09.319669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:59:09.327567 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:09.350822 systemd[1]: Reloading requested from client PID 1739 ('systemctl') (unit ensure-sysext.service)...
Jan 16 23:59:09.350859 systemd[1]: Reloading...
Jan 16 23:59:09.401602 systemd-tmpfiles[1740]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 23:59:09.404457 systemd-tmpfiles[1740]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 23:59:09.408416 systemd-tmpfiles[1740]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 23:59:09.409012 systemd-tmpfiles[1740]: ACLs are not supported, ignoring.
Jan 16 23:59:09.409169 systemd-tmpfiles[1740]: ACLs are not supported, ignoring.
Jan 16 23:59:09.427618 systemd-tmpfiles[1740]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:59:09.427643 systemd-tmpfiles[1740]: Skipping /boot
Jan 16 23:59:09.477719 systemd-udevd[1741]: Using default interface naming scheme 'v255'.
Jan 16 23:59:09.488030 systemd-tmpfiles[1740]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:59:09.488060 systemd-tmpfiles[1740]: Skipping /boot
Jan 16 23:59:09.584736 zram_generator::config[1774]: No configuration found.
Jan 16 23:59:09.702298 ldconfig[1635]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 23:59:09.849988 (udev-worker)[1775]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:59:09.990251 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1793)
Jan 16 23:59:09.998709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:59:10.162477 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 23:59:10.163597 systemd[1]: Reloading finished in 812 ms.
Jan 16 23:59:10.196609 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:10.200666 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 23:59:10.247750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:10.309755 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 23:59:10.352155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 23:59:10.355497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:10.361020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:10.375539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:59:10.416094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:10.419039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:10.431904 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 23:59:10.442997 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:59:10.454674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:10.470634 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 23:59:10.478275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:10.479140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:10.483255 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:10.483840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:10.524867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:59:10.534640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:10.549885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:10.558108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:59:10.566305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:10.569396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:10.570673 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 23:59:10.578588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:59:10.579402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:59:10.603434 systemd[1]: Finished ensure-sysext.service.
Jan 16 23:59:10.623530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 23:59:10.685052 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 23:59:10.689313 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 23:59:10.717280 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 23:59:10.730452 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 23:59:10.732350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 23:59:10.740643 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 23:59:10.744599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:10.744952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:10.748524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:10.748889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:10.754528 augenrules[1968]: No rules
Jan 16 23:59:10.759413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 23:59:10.789160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 16 23:59:10.815431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 23:59:10.818286 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:59:10.827746 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 23:59:10.846913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:10.850413 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 23:59:10.853337 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 23:59:10.894105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 16 23:59:10.905522 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 16 23:59:10.920883 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 23:59:10.963978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 23:59:10.976792 lvm[1989]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 23:59:11.026051 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 16 23:59:11.029894 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:11.040441 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 16 23:59:11.052553 lvm[1997]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 23:59:11.066448 systemd-resolved[1948]: Positive Trust Anchors:
Jan 16 23:59:11.066969 systemd-resolved[1948]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:59:11.067041 systemd-resolved[1948]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:59:11.074372 systemd-networkd[1947]: lo: Link UP
Jan 16 23:59:11.074392 systemd-networkd[1947]: lo: Gained carrier
Jan 16 23:59:11.077581 systemd-networkd[1947]: Enumeration completed
Jan 16 23:59:11.077814 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:59:11.084077 systemd-networkd[1947]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:11.084099 systemd-networkd[1947]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:59:11.089415 systemd-networkd[1947]: eth0: Link UP
Jan 16 23:59:11.089734 systemd-networkd[1947]: eth0: Gained carrier
Jan 16 23:59:11.089791 systemd-networkd[1947]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:11.093644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 23:59:11.103403 systemd-networkd[1947]: eth0: DHCPv4 address 172.31.29.179/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 16 23:59:11.104760 systemd-resolved[1948]: Defaulting to hostname 'linux'.
Jan 16 23:59:11.112040 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:11.114568 systemd[1]: Reached target network.target - Network. Jan 16 23:59:11.115099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:59:11.120047 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 23:59:11.142282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:59:11.145730 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:59:11.148537 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 23:59:11.151580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 23:59:11.154841 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 23:59:11.157563 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 23:59:11.160400 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 23:59:11.163540 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 23:59:11.163597 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:59:11.165651 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:59:11.168991 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 23:59:11.174770 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 23:59:11.182957 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 23:59:11.186498 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 23:59:11.189166 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:59:11.191440 systemd[1]: Reached target basic.target - Basic System. 
Jan 16 23:59:11.193683 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:59:11.193750 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:59:11.200396 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 23:59:11.208662 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 23:59:11.218596 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 23:59:11.224953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 23:59:11.232588 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 23:59:11.235299 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 23:59:11.239473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 23:59:11.257563 systemd[1]: Started ntpd.service - Network Time Service. Jan 16 23:59:11.266176 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 23:59:11.274433 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 16 23:59:11.281077 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 23:59:11.288773 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 23:59:11.311577 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 23:59:11.316542 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 23:59:11.319605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 16 23:59:11.322772 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 23:59:11.331814 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 23:59:11.397021 jq[2008]: false Jan 16 23:59:11.407647 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 23:59:11.408024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 23:59:11.422913 (ntainerd)[2027]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 23:59:11.469630 update_engine[2020]: I20260116 23:59:11.467537 2020 main.cc:92] Flatcar Update Engine starting Jan 16 23:59:11.493238 jq[2021]: true Jan 16 23:59:11.503341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 23:59:11.505305 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 23:59:11.549385 tar[2039]: linux-arm64/LICENSE Jan 16 23:59:11.549385 tar[2039]: linux-arm64/helm Jan 16 23:59:11.541539 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 23:59:11.550285 extend-filesystems[2009]: Found loop4 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found loop5 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found loop6 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found loop7 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found nvme0n1 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found nvme0n1p1 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found nvme0n1p2 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found nvme0n1p3 Jan 16 23:59:11.550285 extend-filesystems[2009]: Found usr Jan 16 23:59:11.550285 extend-filesystems[2009]: Found nvme0n1p4 Jan 16 23:59:11.542902 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 16 23:59:11.574862 ntpd[2011]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting Jan 16 23:59:11.626084 extend-filesystems[2009]: Found nvme0n1p6 Jan 16 23:59:11.626084 extend-filesystems[2009]: Found nvme0n1p7 Jan 16 23:59:11.626084 extend-filesystems[2009]: Found nvme0n1p9 Jan 16 23:59:11.626084 extend-filesystems[2009]: Checking size of /dev/nvme0n1p9 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: ---------------------------------------------------- Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: ntp-4 is maintained by Network Time Foundation, Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: corporation. 
Support and training for ntp-4 are Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: available at https://www.nwtime.org/support Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: ---------------------------------------------------- Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: proto: precision = 0.096 usec (-23) Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: basedate set to 2026-01-04 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: gps base set to 2026-01-04 (week 2400) Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listen and drop on 0 v6wildcard [::]:123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listen normally on 2 lo 127.0.0.1:123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listen normally on 3 eth0 172.31.29.179:123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listen normally on 4 lo [::1]:123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: bind(21) AF_INET6 fe80::440:57ff:feb9:5c9%2#123 flags 0x11 failed: Cannot assign requested address Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: unable to create socket on eth0 (5) for fe80::440:57ff:feb9:5c9%2#123 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: failed to init interface for address fe80::440:57ff:feb9:5c9%2 Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: Listening on routing socket on fd #21 for interface updates Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 23:59:11.683183 ntpd[2011]: 16 Jan 23:59:11 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 23:59:11.710824 jq[2044]: true Jan 16 23:59:11.580022 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 16 23:59:11.725078 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 16 23:59:11.574911 ntpd[2011]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 16 23:59:11.725603 extend-filesystems[2009]: Resized partition /dev/nvme0n1p9 Jan 16 23:59:11.761682 update_engine[2020]: I20260116 23:59:11.677090 2020 update_check_scheduler.cc:74] Next update check in 11m37s Jan 16 23:59:11.592828 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 23:59:11.574932 ntpd[2011]: ---------------------------------------------------- Jan 16 23:59:11.764560 extend-filesystems[2058]: resize2fs 1.47.1 (20-May-2024) Jan 16 23:59:11.592882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 23:59:11.574952 ntpd[2011]: ntp-4 is maintained by Network Time Foundation, Jan 16 23:59:11.597624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 23:59:11.574971 ntpd[2011]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 16 23:59:11.597660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 23:59:11.574989 ntpd[2011]: corporation. Support and training for ntp-4 are Jan 16 23:59:11.646518 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 16 23:59:11.575008 ntpd[2011]: available at https://www.nwtime.org/support Jan 16 23:59:11.676739 systemd[1]: Started update-engine.service - Update Engine. 
Jan 16 23:59:11.575031 ntpd[2011]: ---------------------------------------------------- Jan 16 23:59:11.703821 systemd-logind[2017]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:59:11.579684 dbus-daemon[2007]: [system] SELinux support is enabled Jan 16 23:59:11.703857 systemd-logind[2017]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 16 23:59:11.603478 ntpd[2011]: proto: precision = 0.096 usec (-23) Jan 16 23:59:11.704719 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:59:11.609857 dbus-daemon[2007]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1947 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 16 23:59:11.734612 systemd-logind[2017]: New seat seat0. Jan 16 23:59:11.617567 ntpd[2011]: basedate set to 2026-01-04 Jan 16 23:59:11.776855 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 23:59:11.617601 ntpd[2011]: gps base set to 2026-01-04 (week 2400) Jan 16 23:59:11.817296 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 16 23:59:11.637861 dbus-daemon[2007]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 16 23:59:11.641323 ntpd[2011]: Listen and drop on 0 v6wildcard [::]:123 Jan 16 23:59:11.641402 ntpd[2011]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 16 23:59:11.641691 ntpd[2011]: Listen normally on 2 lo 127.0.0.1:123 Jan 16 23:59:11.641772 ntpd[2011]: Listen normally on 3 eth0 172.31.29.179:123 Jan 16 23:59:11.641847 ntpd[2011]: Listen normally on 4 lo [::1]:123 Jan 16 23:59:11.641924 ntpd[2011]: bind(21) AF_INET6 fe80::440:57ff:feb9:5c9%2#123 flags 0x11 failed: Cannot assign requested address Jan 16 23:59:11.642021 ntpd[2011]: unable to create socket on eth0 (5) for fe80::440:57ff:feb9:5c9%2#123 Jan 16 23:59:11.642051 ntpd[2011]: failed to init interface for address fe80::440:57ff:feb9:5c9%2 Jan 16 23:59:11.642106 ntpd[2011]: Listening on routing socket on fd #21 for interface updates Jan 16 23:59:11.670314 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 23:59:11.670368 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 16 23:59:11.848836 coreos-metadata[2006]: Jan 16 23:59:11.848 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 16 23:59:11.855291 coreos-metadata[2006]: Jan 16 23:59:11.852 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 16 23:59:11.855291 coreos-metadata[2006]: Jan 16 23:59:11.853 INFO Fetch successful Jan 16 23:59:11.855291 coreos-metadata[2006]: Jan 16 23:59:11.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 16 23:59:11.859251 coreos-metadata[2006]: Jan 16 23:59:11.856 INFO Fetch successful Jan 16 23:59:11.859251 coreos-metadata[2006]: Jan 16 23:59:11.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 16 23:59:11.860583 coreos-metadata[2006]: Jan 16 23:59:11.860 INFO Fetch successful Jan 16 23:59:11.860583 coreos-metadata[2006]: Jan 16 
23:59:11.860 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 16 23:59:11.861125 coreos-metadata[2006]: Jan 16 23:59:11.861 INFO Fetch successful Jan 16 23:59:11.861650 coreos-metadata[2006]: Jan 16 23:59:11.861 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 16 23:59:11.863370 coreos-metadata[2006]: Jan 16 23:59:11.861 INFO Fetch failed with 404: resource not found Jan 16 23:59:11.863370 coreos-metadata[2006]: Jan 16 23:59:11.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 16 23:59:11.863370 coreos-metadata[2006]: Jan 16 23:59:11.862 INFO Fetch successful Jan 16 23:59:11.863370 coreos-metadata[2006]: Jan 16 23:59:11.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.863 INFO Fetch successful Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.864 INFO Fetch successful Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.865 INFO Fetch successful Jan 16 23:59:11.866468 coreos-metadata[2006]: Jan 16 23:59:11.865 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 16 23:59:11.873023 coreos-metadata[2006]: Jan 16 23:59:11.866 INFO Fetch successful Jan 16 23:59:11.932340 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 16 23:59:11.938670 locksmithd[2059]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 23:59:11.950319 bash[2089]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:59:11.946939 systemd[1]: 
extend-filesystems.service: Deactivated successfully. Jan 16 23:59:11.950689 extend-filesystems[2058]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 16 23:59:11.950689 extend-filesystems[2058]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 16 23:59:11.950689 extend-filesystems[2058]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 16 23:59:11.947350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 23:59:11.977243 extend-filesystems[2009]: Resized filesystem in /dev/nvme0n1p9 Jan 16 23:59:11.955326 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 23:59:11.997291 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1773) Jan 16 23:59:12.007801 systemd[1]: Starting sshkeys.service... Jan 16 23:59:12.016254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:59:12.019974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 23:59:12.061119 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 23:59:12.125131 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 23:59:12.320158 containerd[2027]: time="2026-01-16T23:59:12.319935070Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 23:59:12.401022 containerd[2027]: time="2026-01-16T23:59:12.400896106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.405925319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.405999671Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.406036463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.407533199Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.407687867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.407881811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.407939207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.408720191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:12.409391 containerd[2027]: time="2026-01-16T23:59:12.408789083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 16 23:59:12.410387 containerd[2027]: time="2026-01-16T23:59:12.410308859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:12.410465 containerd[2027]: time="2026-01-16T23:59:12.410401775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.412237 containerd[2027]: time="2026-01-16T23:59:12.410712107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.412237 containerd[2027]: time="2026-01-16T23:59:12.411565763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:12.412237 containerd[2027]: time="2026-01-16T23:59:12.411944087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:12.412237 containerd[2027]: time="2026-01-16T23:59:12.412002683Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 23:59:12.418381 containerd[2027]: time="2026-01-16T23:59:12.412282415Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 23:59:12.418381 containerd[2027]: time="2026-01-16T23:59:12.413970743Z" level=info msg="metadata content store policy set" policy=shared Jan 16 23:59:12.428900 containerd[2027]: time="2026-01-16T23:59:12.428651075Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 23:59:12.428900 containerd[2027]: time="2026-01-16T23:59:12.428781647Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.429403835Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.429476687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.429511655Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.429797687Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430224755Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430444415Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430478171Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430508579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430539731Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430570427Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430600739Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430633079Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430665443Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.432487 containerd[2027]: time="2026-01-16T23:59:12.430696115Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430730903Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430758995Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430800155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430835003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430865279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430896155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430926407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430957991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.430986671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.431035787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.431067239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.431115695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.431149835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.433168 containerd[2027]: time="2026-01-16T23:59:12.431180915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439331243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439417751Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439479995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439511651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439543079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 23:59:12.440147 containerd[2027]: time="2026-01-16T23:59:12.439809923Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440687303Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440736671Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440771867Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440797367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440829815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440854295Z" level=info msg="NRI interface is disabled by configuration." Jan 16 23:59:12.441520 containerd[2027]: time="2026-01-16T23:59:12.440879879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Jan 16 23:59:12.447295 containerd[2027]: time="2026-01-16T23:59:12.444621047Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 16 23:59:12.447295 containerd[2027]: time="2026-01-16T23:59:12.444766655Z" level=info msg="Connect containerd service"
Jan 16 23:59:12.447295 containerd[2027]: time="2026-01-16T23:59:12.444834803Z" level=info msg="using legacy CRI server"
Jan 16 23:59:12.447295 containerd[2027]: time="2026-01-16T23:59:12.444854039Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 16 23:59:12.447295 containerd[2027]: time="2026-01-16T23:59:12.445054295Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 16 23:59:12.449446 containerd[2027]: time="2026-01-16T23:59:12.448908395Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451500791Z" level=info msg="Start subscribing containerd event"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451616339Z" level=info msg="Start recovering state"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451748903Z" level=info msg="Start event monitor"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451773815Z" level=info msg="Start snapshots syncer"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451798415Z" level=info msg="Start cni network conf syncer for default"
Jan 16 23:59:12.452322 containerd[2027]: time="2026-01-16T23:59:12.451817099Z" level=info msg="Start streaming server"
Jan 16 23:59:12.464105 containerd[2027]: time="2026-01-16T23:59:12.455741915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 16 23:59:12.464105 containerd[2027]: time="2026-01-16T23:59:12.455867291Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 16 23:59:12.456088 systemd[1]: Started containerd.service - containerd container runtime.
Jan 16 23:59:12.466924 containerd[2027]: time="2026-01-16T23:59:12.466759379Z" level=info msg="containerd successfully booted in 0.151298s"
Jan 16 23:59:12.534028 dbus-daemon[2007]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 16 23:59:12.539314 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 16 23:59:12.543907 coreos-metadata[2128]: Jan 16 23:59:12.543 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 16 23:59:12.543907 coreos-metadata[2128]: Jan 16 23:59:12.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 16 23:59:12.543907 coreos-metadata[2128]: Jan 16 23:59:12.543 INFO Fetch successful
Jan 16 23:59:12.543907 coreos-metadata[2128]: Jan 16 23:59:12.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 16 23:59:12.545306 dbus-daemon[2007]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2052 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 16 23:59:12.550648 coreos-metadata[2128]: Jan 16 23:59:12.550 INFO Fetch successful
Jan 16 23:59:12.558649 unknown[2128]: wrote ssh authorized keys file for user: core
Jan 16 23:59:12.560004 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 16 23:59:12.579823 ntpd[2011]: bind(24) AF_INET6 fe80::440:57ff:feb9:5c9%2#123 flags 0x11 failed: Cannot assign requested address
Jan 16 23:59:12.606513 ntpd[2011]: 16 Jan 23:59:12 ntpd[2011]: bind(24) AF_INET6 fe80::440:57ff:feb9:5c9%2#123 flags 0x11 failed: Cannot assign requested address
Jan 16 23:59:12.606513 ntpd[2011]: 16 Jan 23:59:12 ntpd[2011]: unable to create socket on eth0 (6) for fe80::440:57ff:feb9:5c9%2#123
Jan 16 23:59:12.606513 ntpd[2011]: 16 Jan 23:59:12 ntpd[2011]: failed to init interface for address fe80::440:57ff:feb9:5c9%2
Jan 16 23:59:12.579895 ntpd[2011]: unable to create socket on eth0 (6) for fe80::440:57ff:feb9:5c9%2#123
Jan 16 23:59:12.579928 ntpd[2011]: failed to init interface for address fe80::440:57ff:feb9:5c9%2
Jan 16 23:59:12.626962 polkitd[2188]: Started polkitd version 121
Jan 16 23:59:12.643029 update-ssh-keys[2194]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 23:59:12.636768 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 16 23:59:12.651368 systemd[1]: Finished sshkeys.service.
Jan 16 23:59:12.663855 polkitd[2188]: Loading rules from directory /etc/polkit-1/rules.d
Jan 16 23:59:12.663984 polkitd[2188]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 16 23:59:12.667863 polkitd[2188]: Finished loading, compiling and executing 2 rules
Jan 16 23:59:12.669433 dbus-daemon[2007]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 16 23:59:12.671327 systemd[1]: Started polkit.service - Authorization Manager.
Jan 16 23:59:12.675953 polkitd[2188]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 16 23:59:12.722155 systemd-hostnamed[2052]: Hostname set to (transient)
Jan 16 23:59:12.723288 systemd-resolved[1948]: System hostname changed to 'ip-172-31-29-179'.
Jan 16 23:59:12.776694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 16 23:59:12.907868 sshd_keygen[2032]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 23:59:12.957489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 23:59:12.971661 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 23:59:12.987701 systemd[1]: Started sshd@0-172.31.29.179:22-68.220.241.50:40778.service - OpenSSH per-connection server daemon (68.220.241.50:40778).
Jan 16 23:59:13.007162 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 23:59:13.007750 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 23:59:13.020975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 23:59:13.073917 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 23:59:13.084829 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 23:59:13.095941 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 16 23:59:13.101067 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 23:59:13.140403 systemd-networkd[1947]: eth0: Gained IPv6LL
Jan 16 23:59:13.146503 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 23:59:13.150829 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 23:59:13.162705 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 16 23:59:13.176242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:13.184708 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 23:59:13.281033 amazon-ssm-agent[2231]: Initializing new seelog logger
Jan 16 23:59:13.281835 amazon-ssm-agent[2231]: New Seelog Logger Creation Complete
Jan 16 23:59:13.282046 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.282126 amazon-ssm-agent[2231]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.285383 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 processing appconfig overrides
Jan 16 23:59:13.286665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 23:59:13.290757 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.290923 amazon-ssm-agent[2231]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.291221 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 processing appconfig overrides
Jan 16 23:59:13.293094 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.293094 amazon-ssm-agent[2231]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.293094 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 processing appconfig overrides
Jan 16 23:59:13.293094 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO Proxy environment variables:
Jan 16 23:59:13.297263 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.297263 amazon-ssm-agent[2231]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 16 23:59:13.297263 amazon-ssm-agent[2231]: 2026/01/16 23:59:13 processing appconfig overrides
Jan 16 23:59:13.391311 tar[2039]: linux-arm64/README.md
Jan 16 23:59:13.394233 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO http_proxy:
Jan 16 23:59:13.414606 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 16 23:59:13.491389 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO no_proxy:
Jan 16 23:59:13.590656 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO https_proxy:
Jan 16 23:59:13.625113 sshd[2221]: Accepted publickey for core from 68.220.241.50 port 40778 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 16 23:59:13.631576 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:13.659924 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 16 23:59:13.673911 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 16 23:59:13.690247 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO Checking if agent identity type OnPrem can be assumed
Jan 16 23:59:13.692367 systemd-logind[2017]: New session 1 of user core.
Jan 16 23:59:13.714691 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 16 23:59:13.729953 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 16 23:59:13.754028 (systemd)[2255]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 16 23:59:13.786922 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO Checking if agent identity type EC2 can be assumed
Jan 16 23:59:13.886246 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO Agent will take identity from EC2
Jan 16 23:59:13.993224 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 16 23:59:14.062476 systemd[2255]: Queued start job for default target default.target.
Jan 16 23:59:14.068921 systemd[2255]: Created slice app.slice - User Application Slice.
Jan 16 23:59:14.068984 systemd[2255]: Reached target paths.target - Paths.
Jan 16 23:59:14.069018 systemd[2255]: Reached target timers.target - Timers.
Jan 16 23:59:14.073454 systemd[2255]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 16 23:59:14.090633 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 16 23:59:14.101334 systemd[2255]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 16 23:59:14.101595 systemd[2255]: Reached target sockets.target - Sockets.
Jan 16 23:59:14.101630 systemd[2255]: Reached target basic.target - Basic System.
Jan 16 23:59:14.101880 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 16 23:59:14.103739 systemd[2255]: Reached target default.target - Main User Target.
Jan 16 23:59:14.106305 systemd[2255]: Startup finished in 332ms.
Jan 16 23:59:14.115518 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 16 23:59:14.190721 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 16 23:59:14.291641 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 16 23:59:14.395400 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 16 23:59:14.495661 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] Starting Core Agent
Jan 16 23:59:14.525727 systemd[1]: Started sshd@1-172.31.29.179:22-68.220.241.50:40784.service - OpenSSH per-connection server daemon (68.220.241.50:40784).
Jan 16 23:59:14.595987 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 16 23:59:14.696749 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [Registrar] Starting registrar module
Jan 16 23:59:14.796689 amazon-ssm-agent[2231]: 2026-01-16 23:59:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 16 23:59:14.938616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:14.943106 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 16 23:59:14.949607 systemd[1]: Startup finished in 1.197s (kernel) + 9.035s (initrd) + 9.143s (userspace) = 19.376s.
Jan 16 23:59:14.954034 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:15.096236 sshd[2266]: Accepted publickey for core from 68.220.241.50 port 40784 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 16 23:59:15.101337 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:15.117661 systemd-logind[2017]: New session 2 of user core.
Jan 16 23:59:15.122501 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 16 23:59:15.485493 sshd[2266]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:15.496685 systemd[1]: sshd@1-172.31.29.179:22-68.220.241.50:40784.service: Deactivated successfully.
Jan 16 23:59:15.502328 systemd[1]: session-2.scope: Deactivated successfully.
Jan 16 23:59:15.507733 systemd-logind[2017]: Session 2 logged out. Waiting for processes to exit.
Jan 16 23:59:15.511175 systemd-logind[2017]: Removed session 2.
Jan 16 23:59:15.576993 ntpd[2011]: Listen normally on 7 eth0 [fe80::440:57ff:feb9:5c9%2]:123
Jan 16 23:59:15.579668 ntpd[2011]: 16 Jan 23:59:15 ntpd[2011]: Listen normally on 7 eth0 [fe80::440:57ff:feb9:5c9%2]:123
Jan 16 23:59:15.587940 systemd[1]: Started sshd@2-172.31.29.179:22-68.220.241.50:40798.service - OpenSSH per-connection server daemon (68.220.241.50:40798).
Jan 16 23:59:15.821753 amazon-ssm-agent[2231]: 2026-01-16 23:59:15 INFO [EC2Identity] EC2 registration was successful.
Jan 16 23:59:15.861823 amazon-ssm-agent[2231]: 2026-01-16 23:59:15 INFO [CredentialRefresher] credentialRefresher has started
Jan 16 23:59:15.861823 amazon-ssm-agent[2231]: 2026-01-16 23:59:15 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 16 23:59:15.861823 amazon-ssm-agent[2231]: 2026-01-16 23:59:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 16 23:59:15.921610 kubelet[2273]: E0116 23:59:15.921527 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:15.922371 amazon-ssm-agent[2231]: 2026-01-16 23:59:15 INFO [CredentialRefresher] Next credential rotation will be in 32.166646114866666 minutes
Jan 16 23:59:15.926282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:15.926652 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:15.928425 systemd[1]: kubelet.service: Consumed 1.392s CPU time.
Jan 16 23:59:16.148320 sshd[2287]: Accepted publickey for core from 68.220.241.50 port 40798 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 16 23:59:16.151077 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:16.160515 systemd-logind[2017]: New session 3 of user core.
Jan 16 23:59:16.170482 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 16 23:59:16.525884 sshd[2287]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:16.533577 systemd[1]: sshd@2-172.31.29.179:22-68.220.241.50:40798.service: Deactivated successfully.
Jan 16 23:59:16.536714 systemd[1]: session-3.scope: Deactivated successfully.
Jan 16 23:59:16.537893 systemd-logind[2017]: Session 3 logged out. Waiting for processes to exit.
Jan 16 23:59:16.540100 systemd-logind[2017]: Removed session 3.
Jan 16 23:59:16.631666 systemd[1]: Started sshd@3-172.31.29.179:22-68.220.241.50:40804.service - OpenSSH per-connection server daemon (68.220.241.50:40804).
Jan 16 23:59:16.888908 amazon-ssm-agent[2231]: 2026-01-16 23:59:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 16 23:59:16.988861 amazon-ssm-agent[2231]: 2026-01-16 23:59:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2299) started
Jan 16 23:59:17.089900 amazon-ssm-agent[2231]: 2026-01-16 23:59:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 16 23:59:17.170381 sshd[2296]: Accepted publickey for core from 68.220.241.50 port 40804 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 16 23:59:17.174002 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:17.189091 systemd-logind[2017]: New session 4 of user core.
Jan 16 23:59:17.203477 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 16 23:59:17.554429 sshd[2296]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:17.559492 systemd[1]: sshd@3-172.31.29.179:22-68.220.241.50:40804.service: Deactivated successfully.
Jan 16 23:59:17.563013 systemd[1]: session-4.scope: Deactivated successfully.
Jan 16 23:59:17.566728 systemd-logind[2017]: Session 4 logged out. Waiting for processes to exit.
Jan 16 23:59:17.568786 systemd-logind[2017]: Removed session 4.
Jan 16 23:59:17.664649 systemd[1]: Started sshd@4-172.31.29.179:22-68.220.241.50:40806.service - OpenSSH per-connection server daemon (68.220.241.50:40806).
Jan 16 23:59:18.199996 sshd[2313]: Accepted publickey for core from 68.220.241.50 port 40806 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 16 23:59:18.202620 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:18.210010 systemd-logind[2017]: New session 5 of user core.
Jan 16 23:59:18.218455 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 16 23:59:18.557196 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 16 23:59:18.557908 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 23:59:19.432707 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 16 23:59:19.442714 (dockerd)[2331]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 16 23:59:19.983697 dockerd[2331]: time="2026-01-16T23:59:19.983600578Z" level=info msg="Starting up"
Jan 16 23:59:20.175889 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1484662626-merged.mount: Deactivated successfully.
Jan 16 23:59:20.263771 dockerd[2331]: time="2026-01-16T23:59:20.263400053Z" level=info msg="Loading containers: start."
Jan 16 23:59:20.447282 kernel: Initializing XFRM netlink socket
Jan 16 23:59:20.515173 (udev-worker)[2357]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:59:20.627792 systemd-networkd[1947]: docker0: Link UP
Jan 16 23:59:20.650962 dockerd[2331]: time="2026-01-16T23:59:20.650912089Z" level=info msg="Loading containers: done."
Jan 16 23:59:20.675446 dockerd[2331]: time="2026-01-16T23:59:20.675379765Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 16 23:59:20.676223 dockerd[2331]: time="2026-01-16T23:59:20.675794721Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 16 23:59:20.676223 dockerd[2331]: time="2026-01-16T23:59:20.675991903Z" level=info msg="Daemon has completed initialization"
Jan 16 23:59:20.725490 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 16 23:59:20.726092 dockerd[2331]: time="2026-01-16T23:59:20.725148988Z" level=info msg="API listen on /run/docker.sock"
Jan 16 23:59:21.875261 containerd[2027]: time="2026-01-16T23:59:21.875141196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 16 23:59:22.633044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701135829.mount: Deactivated successfully.
Jan 16 23:59:24.113275 containerd[2027]: time="2026-01-16T23:59:24.112869123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:24.115236 containerd[2027]: time="2026-01-16T23:59:24.115078729Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281"
Jan 16 23:59:24.116466 containerd[2027]: time="2026-01-16T23:59:24.116397937Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:24.123165 containerd[2027]: time="2026-01-16T23:59:24.122132593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:24.124842 containerd[2027]: time="2026-01-16T23:59:24.124777594Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.24957181s"
Jan 16 23:59:24.124955 containerd[2027]: time="2026-01-16T23:59:24.124842361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Jan 16 23:59:24.128216 containerd[2027]: time="2026-01-16T23:59:24.128149911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 16 23:59:25.632556 containerd[2027]: time="2026-01-16T23:59:25.632482864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:25.634171 containerd[2027]: time="2026-01-16T23:59:25.634109239Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081"
Jan 16 23:59:25.634916 containerd[2027]: time="2026-01-16T23:59:25.634859691Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:25.642664 containerd[2027]: time="2026-01-16T23:59:25.642573129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:25.645587 containerd[2027]: time="2026-01-16T23:59:25.644872438Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.516459343s"
Jan 16 23:59:25.645587 containerd[2027]: time="2026-01-16T23:59:25.644932312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Jan 16 23:59:25.646029 containerd[2027]: time="2026-01-16T23:59:25.645990087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 16 23:59:26.176990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 16 23:59:26.185567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:26.622583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:26.629744 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:26.740784 kubelet[2546]: E0116 23:59:26.740628 2546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:26.750480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:26.750826 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:27.176918 containerd[2027]: time="2026-01-16T23:59:27.176830045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:27.179972 containerd[2027]: time="2026-01-16T23:59:27.179902860Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067"
Jan 16 23:59:27.182584 containerd[2027]: time="2026-01-16T23:59:27.182510499Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:27.188225 containerd[2027]: time="2026-01-16T23:59:27.188118054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:27.190678 containerd[2027]: time="2026-01-16T23:59:27.190618000Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.54447542s"
Jan 16 23:59:27.191026 containerd[2027]: time="2026-01-16T23:59:27.190839457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Jan 16 23:59:27.191956 containerd[2027]: time="2026-01-16T23:59:27.191541778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 16 23:59:28.543060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662588707.mount: Deactivated successfully.
Jan 16 23:59:29.131411 containerd[2027]: time="2026-01-16T23:59:29.131347475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:29.132355 containerd[2027]: time="2026-01-16T23:59:29.132310461Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Jan 16 23:59:29.133578 containerd[2027]: time="2026-01-16T23:59:29.133419583Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:29.137397 containerd[2027]: time="2026-01-16T23:59:29.137306622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:29.138730 containerd[2027]: time="2026-01-16T23:59:29.138684469Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.947093888s"
Jan 16 23:59:29.139048 containerd[2027]: time="2026-01-16T23:59:29.138865686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Jan 16 23:59:29.140248 containerd[2027]: time="2026-01-16T23:59:29.139651689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 16 23:59:29.690299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094698165.mount: Deactivated successfully.
Jan 16 23:59:30.868153 containerd[2027]: time="2026-01-16T23:59:30.868067574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:30.874069 containerd[2027]: time="2026-01-16T23:59:30.874004412Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jan 16 23:59:30.875778 containerd[2027]: time="2026-01-16T23:59:30.875585365Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:30.882240 containerd[2027]: time="2026-01-16T23:59:30.882067199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:30.884725 containerd[2027]: time="2026-01-16T23:59:30.884673687Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.744968492s"
Jan 16 23:59:30.885054 containerd[2027]: time="2026-01-16T23:59:30.884875258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jan 16 23:59:30.885791 containerd[2027]: time="2026-01-16T23:59:30.885500397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 16 23:59:31.368274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147655043.mount: Deactivated successfully.
Jan 16 23:59:31.376265 containerd[2027]: time="2026-01-16T23:59:31.375774291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:31.377553 containerd[2027]: time="2026-01-16T23:59:31.377484456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 16 23:59:31.380259 containerd[2027]: time="2026-01-16T23:59:31.378472281Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:31.383052 containerd[2027]: time="2026-01-16T23:59:31.382990685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:31.384859 containerd[2027]: time="2026-01-16T23:59:31.384797365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.243079ms"
Jan 16 23:59:31.384993 containerd[2027]: time="2026-01-16T23:59:31.384855980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 16 23:59:31.386754 containerd[2027]: time="2026-01-16T23:59:31.386568447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 16 23:59:31.910869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099360927.mount: Deactivated successfully.
Jan 16 23:59:34.451046 containerd[2027]: time="2026-01-16T23:59:34.450985214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:34.452754 containerd[2027]: time="2026-01-16T23:59:34.452657129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Jan 16 23:59:34.457343 containerd[2027]: time="2026-01-16T23:59:34.457260330Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:34.470019 containerd[2027]: time="2026-01-16T23:59:34.469010852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:34.471713 containerd[2027]: time="2026-01-16T23:59:34.471649172Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.08471943s"
Jan 16 23:59:34.471998 containerd[2027]: time="2026-01-16T23:59:34.471711997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jan 16 23:59:37.001245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 16 23:59:37.007727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:37.340658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:37.350673 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:37.423380 kubelet[2700]: E0116 23:59:37.423305 2700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:37.428427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:37.428753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:42.759130 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 16 23:59:44.073667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:44.083715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:44.144542 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-5.scope)...
Jan 16 23:59:44.144567 systemd[1]: Reloading...
Jan 16 23:59:44.407255 zram_generator::config[2761]: No configuration found.
Jan 16 23:59:44.624703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:59:44.802518 systemd[1]: Reloading finished in 656 ms.
Jan 16 23:59:44.887628 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 16 23:59:44.887828 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 16 23:59:44.889346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:44.902765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:45.228193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:45.244032 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 23:59:45.318328 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 23:59:45.321230 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 16 23:59:45.321230 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 23:59:45.321230 kubelet[2821]: I0116 23:59:45.318928 2821 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 16 23:59:46.256769 kubelet[2821]: I0116 23:59:46.256714 2821 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 16 23:59:46.259260 kubelet[2821]: I0116 23:59:46.256961 2821 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 16 23:59:46.259260 kubelet[2821]: I0116 23:59:46.257354 2821 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 16 23:59:46.306977 kubelet[2821]: E0116 23:59:46.306893 2821 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 16 23:59:46.309548 kubelet[2821]: I0116 23:59:46.309467 2821 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 16 23:59:46.322822 kubelet[2821]: E0116 23:59:46.322763 2821 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 16 23:59:46.323764 kubelet[2821]: I0116 23:59:46.323729 2821 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 16 23:59:46.329923 kubelet[2821]: I0116 23:59:46.329863 2821 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 16 23:59:46.332868 kubelet[2821]: I0116 23:59:46.332776 2821 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 16 23:59:46.333178 kubelet[2821]: I0116 23:59:46.332857 2821 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 16 23:59:46.333403 kubelet[2821]: I0116 23:59:46.333360 2821 topology_manager.go:138] "Creating topology manager with none policy"
Jan 16 23:59:46.333403 kubelet[2821]: I0116 23:59:46.333403 2821 container_manager_linux.go:303] "Creating device plugin manager"
Jan 16 23:59:46.333819 kubelet[2821]: I0116 23:59:46.333769 2821 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 23:59:46.340231 kubelet[2821]: I0116 23:59:46.340137 2821 kubelet.go:480] "Attempting to sync node with API server"
Jan 16 23:59:46.340400 kubelet[2821]: I0116 23:59:46.340251 2821 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 16 23:59:46.340400 kubelet[2821]: I0116 23:59:46.340321 2821 kubelet.go:386] "Adding apiserver pod source"
Jan 16 23:59:46.340400 kubelet[2821]: I0116 23:59:46.340354 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 16 23:59:46.347250 kubelet[2821]: E0116 23:59:46.346832 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 16 23:59:46.348172 kubelet[2821]: I0116 23:59:46.348134 2821 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 16 23:59:46.349653 kubelet[2821]: I0116 23:59:46.349603 2821 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 16 23:59:46.350109 kubelet[2821]: W0116 23:59:46.350073 2821 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 16 23:59:46.353744 kubelet[2821]: E0116 23:59:46.353662 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-179&limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 16 23:59:46.359003 kubelet[2821]: I0116 23:59:46.358634 2821 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 16 23:59:46.359003 kubelet[2821]: I0116 23:59:46.358703 2821 server.go:1289] "Started kubelet"
Jan 16 23:59:46.362501 kubelet[2821]: I0116 23:59:46.362380 2821 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 16 23:59:46.366260 kubelet[2821]: I0116 23:59:46.365937 2821 server.go:317] "Adding debug handlers to kubelet server"
Jan 16 23:59:46.370088 kubelet[2821]: I0116 23:59:46.369121 2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 16 23:59:46.370088 kubelet[2821]: I0116 23:59:46.369670 2821 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 16 23:59:46.372352 kubelet[2821]: E0116 23:59:46.369925 2821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.179:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-179.188b5b97d5d3b89f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-179,UID:ip-172-31-29-179,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-179,},FirstTimestamp:2026-01-16 23:59:46.358663327 +0000 UTC m=+1.107296555,LastTimestamp:2026-01-16 23:59:46.358663327 +0000 UTC m=+1.107296555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-179,}"
Jan 16 23:59:46.382612 kubelet[2821]: I0116 23:59:46.382549 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 16 23:59:46.383884 kubelet[2821]: I0116 23:59:46.383819 2821 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 16 23:59:46.388716 kubelet[2821]: E0116 23:59:46.388513 2821 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 16 23:59:46.389922 kubelet[2821]: E0116 23:59:46.389698 2821 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-179\" not found"
Jan 16 23:59:46.389922 kubelet[2821]: I0116 23:59:46.389861 2821 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 16 23:59:46.391450 kubelet[2821]: I0116 23:59:46.390544 2821 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 16 23:59:46.391450 kubelet[2821]: I0116 23:59:46.390647 2821 reconciler.go:26] "Reconciler: start to sync state"
Jan 16 23:59:46.392352 kubelet[2821]: E0116 23:59:46.392299 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 16 23:59:46.392858 kubelet[2821]: E0116 23:59:46.392777 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": dial tcp 172.31.29.179:6443: connect: connection refused" interval="200ms"
Jan 16 23:59:46.393679 kubelet[2821]: I0116 23:59:46.393634 2821 factory.go:223] Registration of the systemd container factory successfully
Jan 16 23:59:46.394125 kubelet[2821]: I0116 23:59:46.394085 2821 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 16 23:59:46.397820 kubelet[2821]: I0116 23:59:46.397774 2821 factory.go:223] Registration of the containerd container factory successfully
Jan 16 23:59:46.425615 kubelet[2821]: I0116 23:59:46.424815 2821 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 16 23:59:46.425615 kubelet[2821]: I0116 23:59:46.424860 2821 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 16 23:59:46.425615 kubelet[2821]: I0116 23:59:46.424900 2821 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 23:59:46.430313 kubelet[2821]: I0116 23:59:46.430259 2821 policy_none.go:49] "None policy: Start"
Jan 16 23:59:46.430313 kubelet[2821]: I0116 23:59:46.430311 2821 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 16 23:59:46.430524 kubelet[2821]: I0116 23:59:46.430342 2821 state_mem.go:35] "Initializing new in-memory state store"
Jan 16 23:59:46.450179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 16 23:59:46.455672 kubelet[2821]: I0116 23:59:46.455586 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 16 23:59:46.459420 kubelet[2821]: I0116 23:59:46.458266 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 16 23:59:46.459420 kubelet[2821]: I0116 23:59:46.458319 2821 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 16 23:59:46.459420 kubelet[2821]: I0116 23:59:46.458357 2821 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 16 23:59:46.459420 kubelet[2821]: I0116 23:59:46.458373 2821 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 16 23:59:46.459420 kubelet[2821]: E0116 23:59:46.458448 2821 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 16 23:59:46.464804 kubelet[2821]: E0116 23:59:46.464716 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 16 23:59:46.470996 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 16 23:59:46.482686 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 16 23:59:46.490632 kubelet[2821]: E0116 23:59:46.490536 2821 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-179\" not found"
Jan 16 23:59:46.496100 kubelet[2821]: E0116 23:59:46.496048 2821 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 16 23:59:46.497331 kubelet[2821]: I0116 23:59:46.497281 2821 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 23:59:46.500271 kubelet[2821]: I0116 23:59:46.497605 2821 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 23:59:46.500271 kubelet[2821]: I0116 23:59:46.500182 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 23:59:46.504141 kubelet[2821]: E0116 23:59:46.504078 2821 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 16 23:59:46.504491 kubelet[2821]: E0116 23:59:46.504153 2821 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-179\" not found"
Jan 16 23:59:46.583337 systemd[1]: Created slice kubepods-burstable-pod3529ff057c1d29d4b0e2e7109861b50c.slice - libcontainer container kubepods-burstable-pod3529ff057c1d29d4b0e2e7109861b50c.slice.
Jan 16 23:59:46.593959 kubelet[2821]: E0116 23:59:46.593889 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": dial tcp 172.31.29.179:6443: connect: connection refused" interval="400ms"
Jan 16 23:59:46.602757 kubelet[2821]: I0116 23:59:46.602572 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179"
Jan 16 23:59:46.604245 kubelet[2821]: E0116 23:59:46.603828 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.179:6443/api/v1/nodes\": dial tcp 172.31.29.179:6443: connect: connection refused" node="ip-172-31-29-179"
Jan 16 23:59:46.605347 kubelet[2821]: E0116 23:59:46.605145 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179"
Jan 16 23:59:46.613033 systemd[1]: Created slice kubepods-burstable-podc9d22feb2fffdd178b61bafa6ee0f775.slice - libcontainer container kubepods-burstable-podc9d22feb2fffdd178b61bafa6ee0f775.slice.
Jan 16 23:59:46.620008 kubelet[2821]: E0116 23:59:46.619407 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179"
Jan 16 23:59:46.625444 systemd[1]: Created slice kubepods-burstable-pod8ff2921a7c0f0b18dc6b3c7bd862e00c.slice - libcontainer container kubepods-burstable-pod8ff2921a7c0f0b18dc6b3c7bd862e00c.slice.
Jan 16 23:59:46.629132 kubelet[2821]: E0116 23:59:46.629073 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179"
Jan 16 23:59:46.692938 kubelet[2821]: I0116 23:59:46.692875 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179"
Jan 16 23:59:46.693053 kubelet[2821]: I0116 23:59:46.692942 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179"
Jan 16 23:59:46.693053 kubelet[2821]: I0116 23:59:46.692999 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179"
Jan 16 23:59:46.693053 kubelet[2821]: I0116 23:59:46.693038 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:46.693247 kubelet[2821]: I0116 23:59:46.693075 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:46.693247 kubelet[2821]: I0116 23:59:46.693110 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ff2921a7c0f0b18dc6b3c7bd862e00c-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-179\" (UID: \"8ff2921a7c0f0b18dc6b3c7bd862e00c\") " pod="kube-system/kube-scheduler-ip-172-31-29-179"
Jan 16 23:59:46.693247 kubelet[2821]: I0116 23:59:46.693144 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:46.693247 kubelet[2821]: I0116 23:59:46.693179 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:46.693457 kubelet[2821]: I0116 23:59:46.693248 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:46.806853 kubelet[2821]: I0116 23:59:46.806795 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179"
Jan 16 23:59:46.807431 kubelet[2821]: E0116 23:59:46.807354 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.179:6443/api/v1/nodes\": dial tcp 172.31.29.179:6443: connect: connection refused" node="ip-172-31-29-179"
Jan 16 23:59:46.908956 containerd[2027]: time="2026-01-16T23:59:46.907608955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-179,Uid:3529ff057c1d29d4b0e2e7109861b50c,Namespace:kube-system,Attempt:0,}"
Jan 16 23:59:46.920620 containerd[2027]: time="2026-01-16T23:59:46.920557258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-179,Uid:c9d22feb2fffdd178b61bafa6ee0f775,Namespace:kube-system,Attempt:0,}"
Jan 16 23:59:46.931052 containerd[2027]: time="2026-01-16T23:59:46.930981639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-179,Uid:8ff2921a7c0f0b18dc6b3c7bd862e00c,Namespace:kube-system,Attempt:0,}"
Jan 16 23:59:46.995621 kubelet[2821]: E0116 23:59:46.995550 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": dial tcp 172.31.29.179:6443: connect: connection refused" interval="800ms"
Jan 16 23:59:47.210665 kubelet[2821]: I0116 23:59:47.210137 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179"
Jan 16 23:59:47.210665 kubelet[2821]: E0116 23:59:47.210631 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.179:6443/api/v1/nodes\": dial tcp 172.31.29.179:6443: connect: connection refused" node="ip-172-31-29-179"
Jan 16 23:59:47.215409 kubelet[2821]: E0116 23:59:47.215347 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 16 23:59:47.455297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961273845.mount: Deactivated successfully.
Jan 16 23:59:47.468059 containerd[2027]: time="2026-01-16T23:59:47.467898352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:59:47.473157 containerd[2027]: time="2026-01-16T23:59:47.473099754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 16 23:59:47.475477 containerd[2027]: time="2026-01-16T23:59:47.475397145Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:59:47.478317 containerd[2027]: time="2026-01-16T23:59:47.478260292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:59:47.480041 containerd[2027]: time="2026-01-16T23:59:47.479921917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 16 23:59:47.482725 containerd[2027]: time="2026-01-16T23:59:47.482625928Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:59:47.484850 containerd[2027]: time="2026-01-16T23:59:47.484669406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 16 23:59:47.489239 containerd[2027]: time="2026-01-16T23:59:47.489123414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:59:47.493689 containerd[2027]: time="2026-01-16T23:59:47.493023119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.347517ms"
Jan 16 23:59:47.496785 containerd[2027]: time="2026-01-16T23:59:47.496708875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.965491ms"
Jan 16 23:59:47.516619 containerd[2027]: time="2026-01-16T23:59:47.516535065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.438008ms"
Jan 16 23:59:47.530597 kubelet[2821]: E0116 23:59:47.530420 2821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.179:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-179.188b5b97d5d3b89f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-179,UID:ip-172-31-29-179,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-179,},FirstTimestamp:2026-01-16 23:59:46.358663327 +0000 UTC m=+1.107296555,LastTimestamp:2026-01-16 23:59:46.358663327 +0000 UTC m=+1.107296555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-179,}"
Jan 16 23:59:47.702145 kubelet[2821]: E0116 23:59:47.702040 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-179&limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 16 23:59:47.719814 containerd[2027]: time="2026-01-16T23:59:47.719553210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:59:47.719814 containerd[2027]: time="2026-01-16T23:59:47.719659405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:59:47.720364 containerd[2027]: time="2026-01-16T23:59:47.720010601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.725546 containerd[2027]: time="2026-01-16T23:59:47.725430210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.730312 containerd[2027]: time="2026-01-16T23:59:47.729436374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:59:47.730312 containerd[2027]: time="2026-01-16T23:59:47.729544824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:59:47.730312 containerd[2027]: time="2026-01-16T23:59:47.729573981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.730312 containerd[2027]: time="2026-01-16T23:59:47.729752992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.739264 containerd[2027]: time="2026-01-16T23:59:47.738507736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:59:47.739264 containerd[2027]: time="2026-01-16T23:59:47.738612876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:59:47.739264 containerd[2027]: time="2026-01-16T23:59:47.738650933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.739264 containerd[2027]: time="2026-01-16T23:59:47.738819220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:47.785538 systemd[1]: Started cri-containerd-1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c.scope - libcontainer container 1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c.
Jan 16 23:59:47.799144 kubelet[2821]: E0116 23:59:47.796348 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": dial tcp 172.31.29.179:6443: connect: connection refused" interval="1.6s"
Jan 16 23:59:47.804562 systemd[1]: Started cri-containerd-a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730.scope - libcontainer container a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730.
Jan 16 23:59:47.809483 systemd[1]: Started cri-containerd-a85398d0042b9a1d5b4ddc2d65be25692808d1a30c691320af5bdad8a6742903.scope - libcontainer container a85398d0042b9a1d5b4ddc2d65be25692808d1a30c691320af5bdad8a6742903.
Jan 16 23:59:47.891330 kubelet[2821]: E0116 23:59:47.890346 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 23:59:47.911067 containerd[2027]: time="2026-01-16T23:59:47.911013313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-179,Uid:c9d22feb2fffdd178b61bafa6ee0f775,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c\"" Jan 16 23:59:47.925480 containerd[2027]: time="2026-01-16T23:59:47.925090335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-179,Uid:3529ff057c1d29d4b0e2e7109861b50c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a85398d0042b9a1d5b4ddc2d65be25692808d1a30c691320af5bdad8a6742903\"" Jan 16 23:59:47.932284 containerd[2027]: time="2026-01-16T23:59:47.931970213Z" level=info msg="CreateContainer within sandbox \"1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 23:59:47.946250 containerd[2027]: time="2026-01-16T23:59:47.946027049Z" level=info msg="CreateContainer within sandbox \"a85398d0042b9a1d5b4ddc2d65be25692808d1a30c691320af5bdad8a6742903\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 23:59:47.957367 containerd[2027]: time="2026-01-16T23:59:47.956993799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-179,Uid:8ff2921a7c0f0b18dc6b3c7bd862e00c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730\"" Jan 16 23:59:47.969018 containerd[2027]: 
time="2026-01-16T23:59:47.968786264Z" level=info msg="CreateContainer within sandbox \"a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 23:59:47.978379 containerd[2027]: time="2026-01-16T23:59:47.977481015Z" level=info msg="CreateContainer within sandbox \"1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144\"" Jan 16 23:59:47.980223 containerd[2027]: time="2026-01-16T23:59:47.980162357Z" level=info msg="StartContainer for \"3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144\"" Jan 16 23:59:47.993582 containerd[2027]: time="2026-01-16T23:59:47.993502072Z" level=info msg="CreateContainer within sandbox \"a85398d0042b9a1d5b4ddc2d65be25692808d1a30c691320af5bdad8a6742903\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"277d0ed4c123683905279899b4b7b4297a23cdfddf8d16ee1a95f3f3916c76af\"" Jan 16 23:59:47.995009 containerd[2027]: time="2026-01-16T23:59:47.994238672Z" level=info msg="StartContainer for \"277d0ed4c123683905279899b4b7b4297a23cdfddf8d16ee1a95f3f3916c76af\"" Jan 16 23:59:48.013682 kubelet[2821]: I0116 23:59:48.013617 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179" Jan 16 23:59:48.014633 kubelet[2821]: E0116 23:59:48.014577 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.179:6443/api/v1/nodes\": dial tcp 172.31.29.179:6443: connect: connection refused" node="ip-172-31-29-179" Jan 16 23:59:48.019342 containerd[2027]: time="2026-01-16T23:59:48.019264417Z" level=info msg="CreateContainer within sandbox \"a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3\"" Jan 16 23:59:48.020111 containerd[2027]: time="2026-01-16T23:59:48.019957586Z" level=info msg="StartContainer for \"3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3\"" Jan 16 23:59:48.039714 systemd[1]: Started cri-containerd-3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144.scope - libcontainer container 3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144. Jan 16 23:59:48.048983 kubelet[2821]: E0116 23:59:48.048376 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 23:59:48.088540 systemd[1]: Started cri-containerd-277d0ed4c123683905279899b4b7b4297a23cdfddf8d16ee1a95f3f3916c76af.scope - libcontainer container 277d0ed4c123683905279899b4b7b4297a23cdfddf8d16ee1a95f3f3916c76af. Jan 16 23:59:48.109541 systemd[1]: Started cri-containerd-3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3.scope - libcontainer container 3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3. 
Jan 16 23:59:48.185063 containerd[2027]: time="2026-01-16T23:59:48.184987026Z" level=info msg="StartContainer for \"3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144\" returns successfully" Jan 16 23:59:48.206737 containerd[2027]: time="2026-01-16T23:59:48.206132280Z" level=info msg="StartContainer for \"277d0ed4c123683905279899b4b7b4297a23cdfddf8d16ee1a95f3f3916c76af\" returns successfully" Jan 16 23:59:48.289055 containerd[2027]: time="2026-01-16T23:59:48.288824902Z" level=info msg="StartContainer for \"3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3\" returns successfully" Jan 16 23:59:48.395084 kubelet[2821]: E0116 23:59:48.395023 2821 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.179:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 16 23:59:48.477943 kubelet[2821]: E0116 23:59:48.477870 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:48.484257 kubelet[2821]: E0116 23:59:48.483653 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:48.487253 kubelet[2821]: E0116 23:59:48.486183 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:49.491245 kubelet[2821]: E0116 23:59:49.491174 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:49.493245 kubelet[2821]: 
E0116 23:59:49.491922 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:49.493245 kubelet[2821]: E0116 23:59:49.492735 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:49.618566 kubelet[2821]: I0116 23:59:49.618516 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179" Jan 16 23:59:50.493880 kubelet[2821]: E0116 23:59:50.493812 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:52.179513 kubelet[2821]: E0116 23:59:52.179459 2821 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-179\" not found" node="ip-172-31-29-179" Jan 16 23:59:52.293585 kubelet[2821]: I0116 23:59:52.293529 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-179" Jan 16 23:59:52.299721 kubelet[2821]: I0116 23:59:52.299666 2821 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-179" Jan 16 23:59:52.345747 kubelet[2821]: I0116 23:59:52.345379 2821 apiserver.go:52] "Watching apiserver" Jan 16 23:59:52.390792 kubelet[2821]: I0116 23:59:52.390749 2821 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:59:52.468454 kubelet[2821]: E0116 23:59:52.468294 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-179" Jan 16 23:59:52.468454 kubelet[2821]: I0116 23:59:52.468341 2821 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-179" Jan 16 23:59:52.537322 kubelet[2821]: E0116 23:59:52.536652 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-179" Jan 16 23:59:52.537322 kubelet[2821]: I0116 23:59:52.536698 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-179" Jan 16 23:59:52.549310 kubelet[2821]: E0116 23:59:52.549241 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-179\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-179" Jan 16 23:59:56.579840 update_engine[2020]: I20260116 23:59:56.579751 2020 update_attempter.cc:509] Updating boot flags... Jan 16 23:59:56.770331 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3123) Jan 16 23:59:57.118344 systemd[1]: Reloading requested from client PID 3214 ('systemctl') (unit session-5.scope)... Jan 16 23:59:57.118405 systemd[1]: Reloading... Jan 16 23:59:57.152292 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3114) Jan 16 23:59:57.478350 zram_generator::config[3335]: No configuration found. Jan 16 23:59:57.681127 kubelet[2821]: I0116 23:59:57.681076 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-179" Jan 16 23:59:57.771277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:59:57.986702 systemd[1]: Reloading finished in 867 ms. 
Jan 16 23:59:58.113636 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:58.142736 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 23:59:58.143107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:58.143174 systemd[1]: kubelet.service: Consumed 1.989s CPU time, 129.8M memory peak, 0B memory swap peak. Jan 16 23:59:58.154705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:58.494306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:58.511833 (kubelet)[3392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:59:58.619896 kubelet[3392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:59:58.624324 kubelet[3392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:59:58.626605 kubelet[3392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 23:59:58.626605 kubelet[3392]: I0116 23:59:58.624707 3392 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:59:58.643338 kubelet[3392]: I0116 23:59:58.642870 3392 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 16 23:59:58.643522 kubelet[3392]: I0116 23:59:58.643498 3392 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:59:58.644042 kubelet[3392]: I0116 23:59:58.644011 3392 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 23:59:58.647572 kubelet[3392]: I0116 23:59:58.647530 3392 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 16 23:59:58.653940 kubelet[3392]: I0116 23:59:58.653877 3392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:59:58.667119 kubelet[3392]: E0116 23:59:58.667034 3392 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:59:58.667119 kubelet[3392]: I0116 23:59:58.667094 3392 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:59:58.684247 kubelet[3392]: I0116 23:59:58.681988 3392 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:59:58.684247 kubelet[3392]: I0116 23:59:58.682469 3392 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:59:58.684247 kubelet[3392]: I0116 23:59:58.682511 3392 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 23:59:58.684247 kubelet[3392]: I0116 23:59:58.682786 3392 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 
23:59:58.684639 kubelet[3392]: I0116 23:59:58.682806 3392 container_manager_linux.go:303] "Creating device plugin manager" Jan 16 23:59:58.684639 kubelet[3392]: I0116 23:59:58.682886 3392 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:58.684639 kubelet[3392]: I0116 23:59:58.683122 3392 kubelet.go:480] "Attempting to sync node with API server" Jan 16 23:59:58.684639 kubelet[3392]: I0116 23:59:58.683144 3392 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:59:58.684639 kubelet[3392]: I0116 23:59:58.683182 3392 kubelet.go:386] "Adding apiserver pod source" Jan 16 23:59:58.684639 kubelet[3392]: I0116 23:59:58.683299 3392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:59:58.692773 kubelet[3392]: I0116 23:59:58.692724 3392 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:59:58.696344 kubelet[3392]: I0116 23:59:58.695903 3392 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 23:59:58.710393 kubelet[3392]: I0116 23:59:58.709420 3392 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:59:58.710639 kubelet[3392]: I0116 23:59:58.710614 3392 server.go:1289] "Started kubelet" Jan 16 23:59:58.720238 kubelet[3392]: I0116 23:59:58.718334 3392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:59:58.745233 kubelet[3392]: I0116 23:59:58.718478 3392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:59:58.745233 kubelet[3392]: I0116 23:59:58.742677 3392 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:59:58.746325 kubelet[3392]: E0116 23:59:58.746132 3392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-179\" not found" Jan 16 23:59:58.749756 kubelet[3392]: I0116 
23:59:58.718562 3392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:59:58.767443 kubelet[3392]: I0116 23:59:58.766747 3392 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:59:58.767443 kubelet[3392]: I0116 23:59:58.731087 3392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:59:58.786063 kubelet[3392]: I0116 23:59:58.749037 3392 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:59:58.786063 kubelet[3392]: I0116 23:59:58.766424 3392 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:59:58.786063 kubelet[3392]: I0116 23:59:58.752752 3392 server.go:317] "Adding debug handlers to kubelet server" Jan 16 23:59:58.807657 kubelet[3392]: I0116 23:59:58.807170 3392 factory.go:223] Registration of the containerd container factory successfully Jan 16 23:59:58.807657 kubelet[3392]: I0116 23:59:58.807255 3392 factory.go:223] Registration of the systemd container factory successfully Jan 16 23:59:58.807657 kubelet[3392]: I0116 23:59:58.807440 3392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:59:58.814804 kubelet[3392]: E0116 23:59:58.814672 3392 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:59:58.846041 kubelet[3392]: I0116 23:59:58.845379 3392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 16 23:59:58.859165 kubelet[3392]: I0116 23:59:58.858495 3392 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 16 23:59:58.859165 kubelet[3392]: I0116 23:59:58.858570 3392 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 16 23:59:58.859165 kubelet[3392]: I0116 23:59:58.858630 3392 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 23:59:58.859165 kubelet[3392]: I0116 23:59:58.858648 3392 kubelet.go:2436] "Starting kubelet main sync loop" Jan 16 23:59:58.859165 kubelet[3392]: E0116 23:59:58.858791 3392 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:59:58.950133 kubelet[3392]: I0116 23:59:58.950089 3392 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:59:58.950133 kubelet[3392]: I0116 23:59:58.950122 3392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:59:58.950133 kubelet[3392]: I0116 23:59:58.950159 3392 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:58.950724 kubelet[3392]: I0116 23:59:58.950565 3392 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 23:59:58.950724 kubelet[3392]: I0116 23:59:58.950617 3392 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 23:59:58.950724 kubelet[3392]: I0116 23:59:58.950654 3392 policy_none.go:49] "None policy: Start" Jan 16 23:59:58.950724 kubelet[3392]: I0116 23:59:58.950697 3392 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:59:58.950724 kubelet[3392]: I0116 23:59:58.950724 3392 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:59:58.951039 kubelet[3392]: I0116 23:59:58.950998 3392 state_mem.go:75] "Updated machine memory state" Jan 16 23:59:58.961691 kubelet[3392]: E0116 23:59:58.959962 3392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 23:59:58.967109 kubelet[3392]: E0116 23:59:58.967044 
3392 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 23:59:58.968683 kubelet[3392]: I0116 23:59:58.968607 3392 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:59:58.971823 kubelet[3392]: I0116 23:59:58.968676 3392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:59:58.974231 kubelet[3392]: I0116 23:59:58.972496 3392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:59:58.974231 kubelet[3392]: E0116 23:59:58.972615 3392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 23:59:59.095871 kubelet[3392]: I0116 23:59:59.094747 3392 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-179" Jan 16 23:59:59.117976 kubelet[3392]: I0116 23:59:59.117918 3392 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-179" Jan 16 23:59:59.118150 kubelet[3392]: I0116 23:59:59.118086 3392 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-179" Jan 16 23:59:59.161195 kubelet[3392]: I0116 23:59:59.161135 3392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-179" Jan 16 23:59:59.162721 kubelet[3392]: I0116 23:59:59.162670 3392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-179" Jan 16 23:59:59.164453 kubelet[3392]: I0116 23:59:59.164374 3392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-179" Jan 16 23:59:59.182155 kubelet[3392]: E0116 23:59:59.181999 3392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-179\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-179" Jan 16 23:59:59.186754 
kubelet[3392]: I0116 23:59:59.186694 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179" Jan 16 23:59:59.186947 kubelet[3392]: I0116 23:59:59.186813 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179" Jan 16 23:59:59.187011 kubelet[3392]: I0116 23:59:59.186990 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179" Jan 16 23:59:59.187800 kubelet[3392]: I0116 23:59:59.187181 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179" Jan 16 23:59:59.187800 kubelet[3392]: I0116 23:59:59.187294 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " 
pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:59.187800 kubelet[3392]: I0116 23:59:59.187489 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:59.187800 kubelet[3392]: I0116 23:59:59.187597 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ff2921a7c0f0b18dc6b3c7bd862e00c-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-179\" (UID: \"8ff2921a7c0f0b18dc6b3c7bd862e00c\") " pod="kube-system/kube-scheduler-ip-172-31-29-179"
Jan 16 23:59:59.187800 kubelet[3392]: I0116 23:59:59.187667 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3529ff057c1d29d4b0e2e7109861b50c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-179\" (UID: \"3529ff057c1d29d4b0e2e7109861b50c\") " pod="kube-system/kube-apiserver-ip-172-31-29-179"
Jan 16 23:59:59.188104 kubelet[3392]: I0116 23:59:59.187751 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9d22feb2fffdd178b61bafa6ee0f775-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-179\" (UID: \"c9d22feb2fffdd178b61bafa6ee0f775\") " pod="kube-system/kube-controller-manager-ip-172-31-29-179"
Jan 16 23:59:59.455576 kubelet[3392]: I0116 23:59:59.455527 3392 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 16 23:59:59.457125 containerd[2027]: time="2026-01-16T23:59:59.456749771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 16 23:59:59.458247 kubelet[3392]: I0116 23:59:59.457647 3392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 16 23:59:59.689845 kubelet[3392]: I0116 23:59:59.687540 3392 apiserver.go:52] "Watching apiserver"
Jan 16 23:59:59.785328 kubelet[3392]: I0116 23:59:59.785150 3392 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 16 23:59:59.991119 kubelet[3392]: I0116 23:59:59.991000 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-179" podStartSLOduration=2.990975594 podStartE2EDuration="2.990975594s" podCreationTimestamp="2026-01-16 23:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:59:59.9574678 +0000 UTC m=+1.432916612" watchObservedRunningTime="2026-01-16 23:59:59.990975594 +0000 UTC m=+1.466424382"
Jan 17 00:00:00.024838 kubelet[3392]: I0117 00:00:00.024747 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-179" podStartSLOduration=1.0247231 podStartE2EDuration="1.0247231s" podCreationTimestamp="2026-01-16 23:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:59:59.991467096 +0000 UTC m=+1.466915884" watchObservedRunningTime="2026-01-17 00:00:00.0247231 +0000 UTC m=+1.500171900"
Jan 17 00:00:00.045872 kubelet[3392]: I0117 00:00:00.045487 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-179" podStartSLOduration=1.045464372 podStartE2EDuration="1.045464372s" podCreationTimestamp="2026-01-16 23:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:00.029459135 +0000 UTC m=+1.504907923" watchObservedRunningTime="2026-01-17 00:00:00.045464372 +0000 UTC m=+1.520913172"
Jan 17 00:00:00.263717 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jan 17 00:00:00.283935 systemd[1]: Created slice kubepods-besteffort-podbd3c6958_3d5f_413c_9f4f_9d8210656294.slice - libcontainer container kubepods-besteffort-podbd3c6958_3d5f_413c_9f4f_9d8210656294.slice.
Jan 17 00:00:00.296675 kubelet[3392]: I0117 00:00:00.296528 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd3c6958-3d5f-413c-9f4f-9d8210656294-xtables-lock\") pod \"kube-proxy-pcznx\" (UID: \"bd3c6958-3d5f-413c-9f4f-9d8210656294\") " pod="kube-system/kube-proxy-pcznx"
Jan 17 00:00:00.296675 kubelet[3392]: I0117 00:00:00.296601 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd3c6958-3d5f-413c-9f4f-9d8210656294-lib-modules\") pod \"kube-proxy-pcznx\" (UID: \"bd3c6958-3d5f-413c-9f4f-9d8210656294\") " pod="kube-system/kube-proxy-pcznx"
Jan 17 00:00:00.297689 kubelet[3392]: I0117 00:00:00.296644 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxgs\" (UniqueName: \"kubernetes.io/projected/bd3c6958-3d5f-413c-9f4f-9d8210656294-kube-api-access-wjxgs\") pod \"kube-proxy-pcznx\" (UID: \"bd3c6958-3d5f-413c-9f4f-9d8210656294\") " pod="kube-system/kube-proxy-pcznx"
Jan 17 00:00:00.297689 kubelet[3392]: I0117 00:00:00.297635 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd3c6958-3d5f-413c-9f4f-9d8210656294-kube-proxy\") pod \"kube-proxy-pcznx\" (UID: \"bd3c6958-3d5f-413c-9f4f-9d8210656294\") " pod="kube-system/kube-proxy-pcznx"
Jan 17 00:00:00.303499 systemd[1]: logrotate.service: Deactivated successfully.
Jan 17 00:00:00.473690 systemd[1]: Created slice kubepods-burstable-pod420be314_6da5_437b_8e7e_9e4ed1be1fa4.slice - libcontainer container kubepods-burstable-pod420be314_6da5_437b_8e7e_9e4ed1be1fa4.slice.
Jan 17 00:00:00.498822 kubelet[3392]: I0117 00:00:00.498744 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/420be314-6da5-437b-8e7e-9e4ed1be1fa4-run\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.498822 kubelet[3392]: I0117 00:00:00.498819 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/420be314-6da5-437b-8e7e-9e4ed1be1fa4-cni\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.499046 kubelet[3392]: I0117 00:00:00.498864 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/420be314-6da5-437b-8e7e-9e4ed1be1fa4-flannel-cfg\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.499046 kubelet[3392]: I0117 00:00:00.498905 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbstc\" (UniqueName: \"kubernetes.io/projected/420be314-6da5-437b-8e7e-9e4ed1be1fa4-kube-api-access-bbstc\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.499046 kubelet[3392]: I0117 00:00:00.498951 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/420be314-6da5-437b-8e7e-9e4ed1be1fa4-cni-plugin\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.499046 kubelet[3392]: I0117 00:00:00.498987 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/420be314-6da5-437b-8e7e-9e4ed1be1fa4-xtables-lock\") pod \"kube-flannel-ds-294tj\" (UID: \"420be314-6da5-437b-8e7e-9e4ed1be1fa4\") " pod="kube-flannel/kube-flannel-ds-294tj"
Jan 17 00:00:00.595953 containerd[2027]: time="2026-01-17T00:00:00.595796470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcznx,Uid:bd3c6958-3d5f-413c-9f4f-9d8210656294,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:00.683922 containerd[2027]: time="2026-01-17T00:00:00.683742283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:00:00.683922 containerd[2027]: time="2026-01-17T00:00:00.683846595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:00:00.684408 containerd[2027]: time="2026-01-17T00:00:00.683995369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:00.687339 containerd[2027]: time="2026-01-17T00:00:00.686286775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:00.742987 systemd[1]: Started cri-containerd-e03d35ea8f3cce1a4c0f0a305650fc1a0b3dc069b7baca731a4d0ff89c64d022.scope - libcontainer container e03d35ea8f3cce1a4c0f0a305650fc1a0b3dc069b7baca731a4d0ff89c64d022.
Jan 17 00:00:00.774335 sudo[2316]: pam_unix(sudo:session): session closed for user root
Jan 17 00:00:00.787134 containerd[2027]: time="2026-01-17T00:00:00.787082816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-294tj,Uid:420be314-6da5-437b-8e7e-9e4ed1be1fa4,Namespace:kube-flannel,Attempt:0,}"
Jan 17 00:00:00.806922 containerd[2027]: time="2026-01-17T00:00:00.806764166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcznx,Uid:bd3c6958-3d5f-413c-9f4f-9d8210656294,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03d35ea8f3cce1a4c0f0a305650fc1a0b3dc069b7baca731a4d0ff89c64d022\""
Jan 17 00:00:00.817819 containerd[2027]: time="2026-01-17T00:00:00.817086622Z" level=info msg="CreateContainer within sandbox \"e03d35ea8f3cce1a4c0f0a305650fc1a0b3dc069b7baca731a4d0ff89c64d022\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:00:00.838235 containerd[2027]: time="2026-01-17T00:00:00.838101178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:00:00.839722 containerd[2027]: time="2026-01-17T00:00:00.839545315Z" level=info msg="CreateContainer within sandbox \"e03d35ea8f3cce1a4c0f0a305650fc1a0b3dc069b7baca731a4d0ff89c64d022\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"251a65fe05d6a7465f9a7515f296a13ae8ea22ab3f6ce6cb7df108f66f0459ff\""
Jan 17 00:00:00.840047 containerd[2027]: time="2026-01-17T00:00:00.839561208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:00:00.840047 containerd[2027]: time="2026-01-17T00:00:00.839622401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:00.841989 containerd[2027]: time="2026-01-17T00:00:00.841394490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:00.844491 containerd[2027]: time="2026-01-17T00:00:00.844433158Z" level=info msg="StartContainer for \"251a65fe05d6a7465f9a7515f296a13ae8ea22ab3f6ce6cb7df108f66f0459ff\""
Jan 17 00:00:00.861461 sshd[2313]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:00.871770 systemd[1]: sshd@4-172.31.29.179:22-68.220.241.50:40806.service: Deactivated successfully.
Jan 17 00:00:00.881145 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:00:00.881965 systemd[1]: session-5.scope: Consumed 11.656s CPU time, 151.7M memory peak, 0B memory swap peak.
Jan 17 00:00:00.885435 systemd-logind[2017]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:00:00.888111 systemd-logind[2017]: Removed session 5.
Jan 17 00:00:00.902540 systemd[1]: Started cri-containerd-6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b.scope - libcontainer container 6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b.
Jan 17 00:00:00.945548 systemd[1]: Started cri-containerd-251a65fe05d6a7465f9a7515f296a13ae8ea22ab3f6ce6cb7df108f66f0459ff.scope - libcontainer container 251a65fe05d6a7465f9a7515f296a13ae8ea22ab3f6ce6cb7df108f66f0459ff.
Jan 17 00:00:01.027891 containerd[2027]: time="2026-01-17T00:00:01.027581263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-294tj,Uid:420be314-6da5-437b-8e7e-9e4ed1be1fa4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\""
Jan 17 00:00:01.031664 containerd[2027]: time="2026-01-17T00:00:01.031610491Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Jan 17 00:00:01.048376 containerd[2027]: time="2026-01-17T00:00:01.047538260Z" level=info msg="StartContainer for \"251a65fe05d6a7465f9a7515f296a13ae8ea22ab3f6ce6cb7df108f66f0459ff\" returns successfully"
Jan 17 00:00:06.582690 kubelet[3392]: I0117 00:00:06.581959 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pcznx" podStartSLOduration=6.581935945 podStartE2EDuration="6.581935945s" podCreationTimestamp="2026-01-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:01.942116833 +0000 UTC m=+3.417565621" watchObservedRunningTime="2026-01-17 00:00:06.581935945 +0000 UTC m=+8.057384721"
Jan 17 00:00:18.197889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095007567.mount: Deactivated successfully.
Jan 17 00:00:18.272816 containerd[2027]: time="2026-01-17T00:00:18.272721735Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:18.275318 containerd[2027]: time="2026-01-17T00:00:18.275260600Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564"
Jan 17 00:00:18.277860 containerd[2027]: time="2026-01-17T00:00:18.277780672Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:18.284170 containerd[2027]: time="2026-01-17T00:00:18.284101834Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:18.288234 containerd[2027]: time="2026-01-17T00:00:18.286705527Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 17.254655108s"
Jan 17 00:00:18.288234 containerd[2027]: time="2026-01-17T00:00:18.286783872Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\""
Jan 17 00:00:18.297174 containerd[2027]: time="2026-01-17T00:00:18.297099396Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 17 00:00:18.325243 containerd[2027]: time="2026-01-17T00:00:18.325052468Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789\""
Jan 17 00:00:18.327254 containerd[2027]: time="2026-01-17T00:00:18.325956096Z" level=info msg="StartContainer for \"0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789\""
Jan 17 00:00:18.371515 systemd[1]: Started cri-containerd-0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789.scope - libcontainer container 0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789.
Jan 17 00:00:18.426245 containerd[2027]: time="2026-01-17T00:00:18.424508935Z" level=info msg="StartContainer for \"0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789\" returns successfully"
Jan 17 00:00:18.430310 systemd[1]: cri-containerd-0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789.scope: Deactivated successfully.
Jan 17 00:00:18.511380 containerd[2027]: time="2026-01-17T00:00:18.510410395Z" level=info msg="shim disconnected" id=0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789 namespace=k8s.io
Jan 17 00:00:18.511380 containerd[2027]: time="2026-01-17T00:00:18.510517694Z" level=warning msg="cleaning up after shim disconnected" id=0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789 namespace=k8s.io
Jan 17 00:00:18.511380 containerd[2027]: time="2026-01-17T00:00:18.510539751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:18.535541 containerd[2027]: time="2026-01-17T00:00:18.535451901Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:00:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:00:18.973901 containerd[2027]: time="2026-01-17T00:00:18.973840674Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Jan 17 00:00:19.016910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b29617a532e99797dd4f045ce2d5a5b5f5956e38db3ba215b42a38986e8a789-rootfs.mount: Deactivated successfully.
Jan 17 00:00:39.777947 systemd[1]: Started sshd@5-172.31.29.179:22-68.220.241.50:45284.service - OpenSSH per-connection server daemon (68.220.241.50:45284).
Jan 17 00:00:40.324711 sshd[3797]: Accepted publickey for core from 68.220.241.50 port 45284 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 17 00:00:40.327429 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:40.338354 systemd-logind[2017]: New session 6 of user core.
Jan 17 00:00:40.346832 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:00:40.840132 sshd[3797]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:40.847240 systemd[1]: sshd@5-172.31.29.179:22-68.220.241.50:45284.service: Deactivated successfully.
Jan 17 00:00:40.851965 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:00:40.853611 systemd-logind[2017]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:00:40.856153 systemd-logind[2017]: Removed session 6.
Jan 17 00:00:45.005050 containerd[2027]: time="2026-01-17T00:00:45.004965829Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:45.007838 containerd[2027]: time="2026-01-17T00:00:45.007763922Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854"
Jan 17 00:00:45.009826 containerd[2027]: time="2026-01-17T00:00:45.009711663Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:45.018240 containerd[2027]: time="2026-01-17T00:00:45.017946468Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:45.021971 containerd[2027]: time="2026-01-17T00:00:45.021157801Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 26.047250945s"
Jan 17 00:00:45.021971 containerd[2027]: time="2026-01-17T00:00:45.021238149Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\""
Jan 17 00:00:45.030870 containerd[2027]: time="2026-01-17T00:00:45.030808834Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:00:45.059395 containerd[2027]: time="2026-01-17T00:00:45.059289234Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6\""
Jan 17 00:00:45.063236 containerd[2027]: time="2026-01-17T00:00:45.062108196Z" level=info msg="StartContainer for \"53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6\""
Jan 17 00:00:45.120798 systemd[1]: run-containerd-runc-k8s.io-53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6-runc.rZy8aN.mount: Deactivated successfully.
Jan 17 00:00:45.133554 systemd[1]: Started cri-containerd-53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6.scope - libcontainer container 53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6.
Jan 17 00:00:45.179388 containerd[2027]: time="2026-01-17T00:00:45.178999610Z" level=info msg="StartContainer for \"53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6\" returns successfully"
Jan 17 00:00:45.180630 systemd[1]: cri-containerd-53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6.scope: Deactivated successfully.
Jan 17 00:00:45.198030 kubelet[3392]: I0117 00:00:45.197994 3392 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 17 00:00:45.286148 kubelet[3392]: I0117 00:00:45.284861 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44c90dc7-dc5e-49b2-a5d2-3024b70df684-config-volume\") pod \"coredns-674b8bbfcf-q9bbq\" (UID: \"44c90dc7-dc5e-49b2-a5d2-3024b70df684\") " pod="kube-system/coredns-674b8bbfcf-q9bbq"
Jan 17 00:00:45.286148 kubelet[3392]: I0117 00:00:45.284945 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jpw\" (UniqueName: \"kubernetes.io/projected/44c90dc7-dc5e-49b2-a5d2-3024b70df684-kube-api-access-w7jpw\") pod \"coredns-674b8bbfcf-q9bbq\" (UID: \"44c90dc7-dc5e-49b2-a5d2-3024b70df684\") " pod="kube-system/coredns-674b8bbfcf-q9bbq"
Jan 17 00:00:45.290779 systemd[1]: Created slice kubepods-burstable-pod44c90dc7_dc5e_49b2_a5d2_3024b70df684.slice - libcontainer container kubepods-burstable-pod44c90dc7_dc5e_49b2_a5d2_3024b70df684.slice.
Jan 17 00:00:45.319823 systemd[1]: Created slice kubepods-burstable-podae36e0e1_6f9d_4a27_bae0_9a38a52e299c.slice - libcontainer container kubepods-burstable-podae36e0e1_6f9d_4a27_bae0_9a38a52e299c.slice.
Jan 17 00:00:45.385434 kubelet[3392]: I0117 00:00:45.385291 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae36e0e1-6f9d-4a27-bae0-9a38a52e299c-config-volume\") pod \"coredns-674b8bbfcf-rtjjj\" (UID: \"ae36e0e1-6f9d-4a27-bae0-9a38a52e299c\") " pod="kube-system/coredns-674b8bbfcf-rtjjj"
Jan 17 00:00:45.387752 kubelet[3392]: I0117 00:00:45.385741 3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxd6v\" (UniqueName: \"kubernetes.io/projected/ae36e0e1-6f9d-4a27-bae0-9a38a52e299c-kube-api-access-jxd6v\") pod \"coredns-674b8bbfcf-rtjjj\" (UID: \"ae36e0e1-6f9d-4a27-bae0-9a38a52e299c\") " pod="kube-system/coredns-674b8bbfcf-rtjjj"
Jan 17 00:00:45.494999 containerd[2027]: time="2026-01-17T00:00:45.494902692Z" level=info msg="shim disconnected" id=53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6 namespace=k8s.io
Jan 17 00:00:45.494999 containerd[2027]: time="2026-01-17T00:00:45.494983723Z" level=warning msg="cleaning up after shim disconnected" id=53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6 namespace=k8s.io
Jan 17 00:00:45.494999 containerd[2027]: time="2026-01-17T00:00:45.495006164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:45.612071 containerd[2027]: time="2026-01-17T00:00:45.611911635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9bbq,Uid:44c90dc7-dc5e-49b2-a5d2-3024b70df684,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:45.639859 containerd[2027]: time="2026-01-17T00:00:45.639602854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtjjj,Uid:ae36e0e1-6f9d-4a27-bae0-9a38a52e299c,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:45.681639 containerd[2027]: time="2026-01-17T00:00:45.681279437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9bbq,Uid:44c90dc7-dc5e-49b2-a5d2-3024b70df684,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81e2ea7de1e5d9e70f3058fa9da735d4f07064ebb860367eb03debcda0e3eaaa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 00:00:45.681801 kubelet[3392]: E0117 00:00:45.681650 3392 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e2ea7de1e5d9e70f3058fa9da735d4f07064ebb860367eb03debcda0e3eaaa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 00:00:45.681801 kubelet[3392]: E0117 00:00:45.681739 3392 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e2ea7de1e5d9e70f3058fa9da735d4f07064ebb860367eb03debcda0e3eaaa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-q9bbq"
Jan 17 00:00:45.681801 kubelet[3392]: E0117 00:00:45.681773 3392 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e2ea7de1e5d9e70f3058fa9da735d4f07064ebb860367eb03debcda0e3eaaa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-q9bbq"
Jan 17 00:00:45.682011 kubelet[3392]: E0117 00:00:45.681854 3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q9bbq_kube-system(44c90dc7-dc5e-49b2-a5d2-3024b70df684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q9bbq_kube-system(44c90dc7-dc5e-49b2-a5d2-3024b70df684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81e2ea7de1e5d9e70f3058fa9da735d4f07064ebb860367eb03debcda0e3eaaa\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-q9bbq" podUID="44c90dc7-dc5e-49b2-a5d2-3024b70df684"
Jan 17 00:00:45.691415 containerd[2027]: time="2026-01-17T00:00:45.689931333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtjjj,Uid:ae36e0e1-6f9d-4a27-bae0-9a38a52e299c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef1480fec3802878b8694d65ee46b76f659e1ce21cf1d78b190159299c7291a8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 00:00:45.691546 kubelet[3392]: E0117 00:00:45.690274 3392 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1480fec3802878b8694d65ee46b76f659e1ce21cf1d78b190159299c7291a8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 00:00:45.691546 kubelet[3392]: E0117 00:00:45.690348 3392 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1480fec3802878b8694d65ee46b76f659e1ce21cf1d78b190159299c7291a8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-rtjjj"
Jan 17 00:00:45.691546 kubelet[3392]: E0117 00:00:45.690381 3392 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1480fec3802878b8694d65ee46b76f659e1ce21cf1d78b190159299c7291a8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-rtjjj"
Jan 17 00:00:45.691546 kubelet[3392]: E0117 00:00:45.690478 3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rtjjj_kube-system(ae36e0e1-6f9d-4a27-bae0-9a38a52e299c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rtjjj_kube-system(ae36e0e1-6f9d-4a27-bae0-9a38a52e299c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef1480fec3802878b8694d65ee46b76f659e1ce21cf1d78b190159299c7291a8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-rtjjj" podUID="ae36e0e1-6f9d-4a27-bae0-9a38a52e299c"
Jan 17 00:00:45.927751 systemd[1]: Started sshd@6-172.31.29.179:22-68.220.241.50:55622.service - OpenSSH per-connection server daemon (68.220.241.50:55622).
Jan 17 00:00:46.045931 containerd[2027]: time="2026-01-17T00:00:46.045864247Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 17 00:00:46.061072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53a7fc3dc03786cca87c9ccb696043468cac07ceb181908fcc027dc424d898a6-rootfs.mount: Deactivated successfully.
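[Editor's note: the repeated CreatePodSandbox failures above occur because the flannel CNI plugin reads /run/flannel/subnet.env before the kube-flannel daemon has written it. Once flannel is running it drops a small env file roughly like the following sketch; these values are inferred from the netconf printed later in this log (subnet 192.168.0.0/24, route 192.168.0.0/17, MTU 8951), not read from the node, and FLANNEL_IPMASQ depends on the daemon's --ip-masq flag.]

```
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true
```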
Jan 17 00:00:46.077882 containerd[2027]: time="2026-01-17T00:00:46.068961453Z" level=info msg="CreateContainer within sandbox \"6d7057af6ca36d5dc73db0ac077002942dde40f28923de012f29ba2c2c4f527b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6b615ed300f4f6ee68839a64ea1b38538df4151f22d79af878ab8a816682ce07\""
Jan 17 00:00:46.077882 containerd[2027]: time="2026-01-17T00:00:46.070413938Z" level=info msg="StartContainer for \"6b615ed300f4f6ee68839a64ea1b38538df4151f22d79af878ab8a816682ce07\""
Jan 17 00:00:46.135529 systemd[1]: Started cri-containerd-6b615ed300f4f6ee68839a64ea1b38538df4151f22d79af878ab8a816682ce07.scope - libcontainer container 6b615ed300f4f6ee68839a64ea1b38538df4151f22d79af878ab8a816682ce07.
Jan 17 00:00:46.185364 containerd[2027]: time="2026-01-17T00:00:46.185031913Z" level=info msg="StartContainer for \"6b615ed300f4f6ee68839a64ea1b38538df4151f22d79af878ab8a816682ce07\" returns successfully"
Jan 17 00:00:46.447154 sshd[3943]: Accepted publickey for core from 68.220.241.50 port 55622 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 17 00:00:46.449824 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:46.457336 systemd-logind[2017]: New session 7 of user core.
Jan 17 00:00:46.465452 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:00:46.922628 sshd[3943]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:46.931117 systemd[1]: sshd@6-172.31.29.179:22-68.220.241.50:55622.service: Deactivated successfully.
Jan 17 00:00:46.935444 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:00:46.937095 systemd-logind[2017]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:00:46.939118 systemd-logind[2017]: Removed session 7.
Jan 17 00:00:47.297169 (udev-worker)[3993]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:00:47.322266 systemd-networkd[1947]: flannel.1: Link UP
Jan 17 00:00:47.322281 systemd-networkd[1947]: flannel.1: Gained carrier
Jan 17 00:00:48.500424 systemd-networkd[1947]: flannel.1: Gained IPv6LL
Jan 17 00:00:50.575886 ntpd[2011]: Listen normally on 8 flannel.1 192.168.0.0:123
Jan 17 00:00:50.576826 ntpd[2011]: 17 Jan 00:00:50 ntpd[2011]: Listen normally on 8 flannel.1 192.168.0.0:123
Jan 17 00:00:50.576826 ntpd[2011]: 17 Jan 00:00:50 ntpd[2011]: Listen normally on 9 flannel.1 [fe80::f064:62ff:fe02:8007%4]:123
Jan 17 00:00:50.576009 ntpd[2011]: Listen normally on 9 flannel.1 [fe80::f064:62ff:fe02:8007%4]:123
Jan 17 00:00:52.013730 systemd[1]: Started sshd@7-172.31.29.179:22-68.220.241.50:55630.service - OpenSSH per-connection server daemon (68.220.241.50:55630).
Jan 17 00:00:52.526187 sshd[4048]: Accepted publickey for core from 68.220.241.50 port 55630 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 17 00:00:52.528940 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:52.536544 systemd-logind[2017]: New session 8 of user core.
Jan 17 00:00:52.547465 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 00:00:52.997977 sshd[4048]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:53.004727 systemd[1]: sshd@7-172.31.29.179:22-68.220.241.50:55630.service: Deactivated successfully.
Jan 17 00:00:53.009574 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 00:00:53.011359 systemd-logind[2017]: Session 8 logged out. Waiting for processes to exit.
Jan 17 00:00:53.013971 systemd-logind[2017]: Removed session 8.
Jan 17 00:00:56.861454 containerd[2027]: time="2026-01-17T00:00:56.860377905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9bbq,Uid:44c90dc7-dc5e-49b2-a5d2-3024b70df684,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:56.916142 systemd-networkd[1947]: cni0: Link UP
Jan 17 00:00:56.916162 systemd-networkd[1947]: cni0: Gained carrier
Jan 17 00:00:56.927935 systemd-networkd[1947]: veth29ce57d4: Link UP
Jan 17 00:00:56.932383 kernel: cni0: port 1(veth29ce57d4) entered blocking state
Jan 17 00:00:56.932509 kernel: cni0: port 1(veth29ce57d4) entered disabled state
Jan 17 00:00:56.934024 kernel: veth29ce57d4: entered allmulticast mode
Jan 17 00:00:56.935907 kernel: veth29ce57d4: entered promiscuous mode
Jan 17 00:00:56.938090 systemd-networkd[1947]: cni0: Lost carrier
Jan 17 00:00:56.940476 (udev-worker)[4101]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:00:56.941188 (udev-worker)[4098]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:00:56.961359 kernel: cni0: port 1(veth29ce57d4) entered blocking state Jan 17 00:00:56.961441 kernel: cni0: port 1(veth29ce57d4) entered forwarding state Jan 17 00:00:56.960268 systemd-networkd[1947]: veth29ce57d4: Gained carrier Jan 17 00:00:56.967286 systemd-networkd[1947]: cni0: Gained carrier Jan 17 00:00:56.974965 containerd[2027]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"} Jan 17 00:00:56.974965 containerd[2027]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:00:57.025028 containerd[2027]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-17T00:00:57.024527850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:57.025028 containerd[2027]: time="2026-01-17T00:00:57.024707028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:57.025028 containerd[2027]: time="2026-01-17T00:00:57.024772240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:57.025028 containerd[2027]: time="2026-01-17T00:00:57.024934794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:57.069531 systemd[1]: Started cri-containerd-e3c640b88adf21a697fca51801289f5491c90649b1a33e088c226e3611dcf7c2.scope - libcontainer container e3c640b88adf21a697fca51801289f5491c90649b1a33e088c226e3611dcf7c2. Jan 17 00:00:57.131887 containerd[2027]: time="2026-01-17T00:00:57.131272519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9bbq,Uid:44c90dc7-dc5e-49b2-a5d2-3024b70df684,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c640b88adf21a697fca51801289f5491c90649b1a33e088c226e3611dcf7c2\"" Jan 17 00:00:57.142190 containerd[2027]: time="2026-01-17T00:00:57.142090903Z" level=info msg="CreateContainer within sandbox \"e3c640b88adf21a697fca51801289f5491c90649b1a33e088c226e3611dcf7c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:00:57.160348 containerd[2027]: time="2026-01-17T00:00:57.160085657Z" level=info msg="CreateContainer within sandbox \"e3c640b88adf21a697fca51801289f5491c90649b1a33e088c226e3611dcf7c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"189214745fa4ed6190c2219feba99aa1c6ea7d8f6bf6110ccf39eb31f49d94f5\"" Jan 17 00:00:57.164064 containerd[2027]: time="2026-01-17T00:00:57.162049963Z" level=info msg="StartContainer for \"189214745fa4ed6190c2219feba99aa1c6ea7d8f6bf6110ccf39eb31f49d94f5\"" Jan 17 00:00:57.218530 systemd[1]: Started cri-containerd-189214745fa4ed6190c2219feba99aa1c6ea7d8f6bf6110ccf39eb31f49d94f5.scope - libcontainer container 189214745fa4ed6190c2219feba99aa1c6ea7d8f6bf6110ccf39eb31f49d94f5. 
Jan 17 00:00:57.264275 containerd[2027]: time="2026-01-17T00:00:57.264133044Z" level=info msg="StartContainer for \"189214745fa4ed6190c2219feba99aa1c6ea7d8f6bf6110ccf39eb31f49d94f5\" returns successfully" Jan 17 00:00:58.088651 kubelet[3392]: I0117 00:00:58.088556 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-294tj" podStartSLOduration=14.095257939 podStartE2EDuration="58.08853434s" podCreationTimestamp="2026-01-17 00:00:00 +0000 UTC" firstStartedPulling="2026-01-17 00:00:01.030541658 +0000 UTC m=+2.505990422" lastFinishedPulling="2026-01-17 00:00:45.023818047 +0000 UTC m=+46.499266823" observedRunningTime="2026-01-17 00:00:47.060725981 +0000 UTC m=+48.536174781" watchObservedRunningTime="2026-01-17 00:00:58.08853434 +0000 UTC m=+59.563983116" Jan 17 00:00:58.110723 systemd[1]: Started sshd@8-172.31.29.179:22-68.220.241.50:39270.service - OpenSSH per-connection server daemon (68.220.241.50:39270). Jan 17 00:00:58.124887 kubelet[3392]: I0117 00:00:58.124780 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q9bbq" podStartSLOduration=58.124756627 podStartE2EDuration="58.124756627s" podCreationTimestamp="2026-01-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:58.092589103 +0000 UTC m=+59.568037891" watchObservedRunningTime="2026-01-17 00:00:58.124756627 +0000 UTC m=+59.600205403" Jan 17 00:00:58.484503 systemd-networkd[1947]: veth29ce57d4: Gained IPv6LL Jan 17 00:00:58.668014 sshd[4219]: Accepted publickey for core from 68.220.241.50 port 39270 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:58.670787 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:58.680123 systemd-logind[2017]: New session 9 of user core. 
Jan 17 00:00:58.685898 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:00:58.804460 systemd-networkd[1947]: cni0: Gained IPv6LL Jan 17 00:00:59.180552 sshd[4219]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:59.187917 systemd[1]: sshd@8-172.31.29.179:22-68.220.241.50:39270.service: Deactivated successfully. Jan 17 00:00:59.191901 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:00:59.196102 systemd-logind[2017]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:00:59.198305 systemd-logind[2017]: Removed session 9. Jan 17 00:00:59.860174 containerd[2027]: time="2026-01-17T00:00:59.860067131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtjjj,Uid:ae36e0e1-6f9d-4a27-bae0-9a38a52e299c,Namespace:kube-system,Attempt:0,}" Jan 17 00:00:59.894995 systemd-networkd[1947]: veth3c2f88c5: Link UP Jan 17 00:00:59.898768 kernel: cni0: port 2(veth3c2f88c5) entered blocking state Jan 17 00:00:59.898841 kernel: cni0: port 2(veth3c2f88c5) entered disabled state Jan 17 00:00:59.898902 kernel: veth3c2f88c5: entered allmulticast mode Jan 17 00:00:59.901089 kernel: veth3c2f88c5: entered promiscuous mode Jan 17 00:00:59.910619 kernel: cni0: port 2(veth3c2f88c5) entered blocking state Jan 17 00:00:59.910741 kernel: cni0: port 2(veth3c2f88c5) entered forwarding state Jan 17 00:00:59.910904 systemd-networkd[1947]: veth3c2f88c5: Gained carrier Jan 17 00:00:59.918507 containerd[2027]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, 
"mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"} Jan 17 00:00:59.918507 containerd[2027]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:00:59.974134 containerd[2027]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-17T00:00:59.973819847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:59.974134 containerd[2027]: time="2026-01-17T00:00:59.973923427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:59.974134 containerd[2027]: time="2026-01-17T00:00:59.973963595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:59.974748 containerd[2027]: time="2026-01-17T00:00:59.974519073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:00.021746 systemd[1]: Started cri-containerd-99876d43c87682e3f7d7101972c5fe904dc7569fc8e006fe0730e0ce286304fb.scope - libcontainer container 99876d43c87682e3f7d7101972c5fe904dc7569fc8e006fe0730e0ce286304fb. 
Jan 17 00:01:00.095880 containerd[2027]: time="2026-01-17T00:01:00.095716045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtjjj,Uid:ae36e0e1-6f9d-4a27-bae0-9a38a52e299c,Namespace:kube-system,Attempt:0,} returns sandbox id \"99876d43c87682e3f7d7101972c5fe904dc7569fc8e006fe0730e0ce286304fb\"" Jan 17 00:01:00.107047 containerd[2027]: time="2026-01-17T00:01:00.106963994Z" level=info msg="CreateContainer within sandbox \"99876d43c87682e3f7d7101972c5fe904dc7569fc8e006fe0730e0ce286304fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:01:00.129954 containerd[2027]: time="2026-01-17T00:01:00.129650249Z" level=info msg="CreateContainer within sandbox \"99876d43c87682e3f7d7101972c5fe904dc7569fc8e006fe0730e0ce286304fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e7c195062da113474ca7eaae60af6d52c9264e3ee5fcea8527557c27876aeaf\"" Jan 17 00:01:00.134258 containerd[2027]: time="2026-01-17T00:01:00.133185253Z" level=info msg="StartContainer for \"7e7c195062da113474ca7eaae60af6d52c9264e3ee5fcea8527557c27876aeaf\"" Jan 17 00:01:00.179524 systemd[1]: Started cri-containerd-7e7c195062da113474ca7eaae60af6d52c9264e3ee5fcea8527557c27876aeaf.scope - libcontainer container 7e7c195062da113474ca7eaae60af6d52c9264e3ee5fcea8527557c27876aeaf. 
Jan 17 00:01:00.239985 containerd[2027]: time="2026-01-17T00:01:00.239903415Z" level=info msg="StartContainer for \"7e7c195062da113474ca7eaae60af6d52c9264e3ee5fcea8527557c27876aeaf\" returns successfully" Jan 17 00:01:01.129419 kubelet[3392]: I0117 00:01:01.129256 3392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rtjjj" podStartSLOduration=61.129171909 podStartE2EDuration="1m1.129171909s" podCreationTimestamp="2026-01-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:01.10102108 +0000 UTC m=+62.576469856" watchObservedRunningTime="2026-01-17 00:01:01.129171909 +0000 UTC m=+62.604620709" Jan 17 00:01:01.812587 systemd-networkd[1947]: veth3c2f88c5: Gained IPv6LL Jan 17 00:01:04.279720 systemd[1]: Started sshd@9-172.31.29.179:22-68.220.241.50:46034.service - OpenSSH per-connection server daemon (68.220.241.50:46034). Jan 17 00:01:04.575973 ntpd[2011]: Listen normally on 10 cni0 192.168.0.1:123 Jan 17 00:01:04.576956 ntpd[2011]: 17 Jan 00:01:04 ntpd[2011]: Listen normally on 10 cni0 192.168.0.1:123 Jan 17 00:01:04.576956 ntpd[2011]: 17 Jan 00:01:04 ntpd[2011]: Listen normally on 11 cni0 [fe80::1858:e0ff:fef9:3433%5]:123 Jan 17 00:01:04.576956 ntpd[2011]: 17 Jan 00:01:04 ntpd[2011]: Listen normally on 12 veth29ce57d4 [fe80::48dd:2ff:fe92:2b6d%6]:123 Jan 17 00:01:04.576956 ntpd[2011]: 17 Jan 00:01:04 ntpd[2011]: Listen normally on 13 veth3c2f88c5 [fe80::a422:86ff:fe7e:1e7c%7]:123 Jan 17 00:01:04.576113 ntpd[2011]: Listen normally on 11 cni0 [fe80::1858:e0ff:fef9:3433%5]:123 Jan 17 00:01:04.576195 ntpd[2011]: Listen normally on 12 veth29ce57d4 [fe80::48dd:2ff:fe92:2b6d%6]:123 Jan 17 00:01:04.576300 ntpd[2011]: Listen normally on 13 veth3c2f88c5 [fe80::a422:86ff:fe7e:1e7c%7]:123 Jan 17 00:01:04.833803 sshd[4378]: Accepted publickey for core from 68.220.241.50 port 46034 ssh2: RSA 
SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:04.837996 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:04.850018 systemd-logind[2017]: New session 10 of user core. Jan 17 00:01:04.860572 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:01:05.330752 sshd[4378]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:05.335689 systemd-logind[2017]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:01:05.336916 systemd[1]: sshd@9-172.31.29.179:22-68.220.241.50:46034.service: Deactivated successfully. Jan 17 00:01:05.341522 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:01:05.346009 systemd-logind[2017]: Removed session 10. Jan 17 00:01:05.418731 systemd[1]: Started sshd@10-172.31.29.179:22-68.220.241.50:46046.service - OpenSSH per-connection server daemon (68.220.241.50:46046). Jan 17 00:01:05.916258 sshd[4392]: Accepted publickey for core from 68.220.241.50 port 46046 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:05.919905 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:05.930429 systemd-logind[2017]: New session 11 of user core. Jan 17 00:01:05.944480 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:01:06.450818 sshd[4392]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:06.455810 systemd[1]: sshd@10-172.31.29.179:22-68.220.241.50:46046.service: Deactivated successfully. Jan 17 00:01:06.459355 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:01:06.464138 systemd-logind[2017]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:01:06.466547 systemd-logind[2017]: Removed session 11. Jan 17 00:01:06.543771 systemd[1]: Started sshd@11-172.31.29.179:22-68.220.241.50:46050.service - OpenSSH per-connection server daemon (68.220.241.50:46050). 
Jan 17 00:01:07.038028 sshd[4403]: Accepted publickey for core from 68.220.241.50 port 46050 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:07.041066 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:07.049441 systemd-logind[2017]: New session 12 of user core. Jan 17 00:01:07.059468 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:01:07.499003 sshd[4403]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:07.504827 systemd[1]: sshd@11-172.31.29.179:22-68.220.241.50:46050.service: Deactivated successfully. Jan 17 00:01:07.509970 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:01:07.512552 systemd-logind[2017]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:01:07.514842 systemd-logind[2017]: Removed session 12. Jan 17 00:01:12.607128 systemd[1]: Started sshd@12-172.31.29.179:22-68.220.241.50:49180.service - OpenSSH per-connection server daemon (68.220.241.50:49180). Jan 17 00:01:13.156929 sshd[4448]: Accepted publickey for core from 68.220.241.50 port 49180 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:13.159665 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:13.167246 systemd-logind[2017]: New session 13 of user core. Jan 17 00:01:13.174451 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:01:13.659176 sshd[4448]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:13.665750 systemd[1]: sshd@12-172.31.29.179:22-68.220.241.50:49180.service: Deactivated successfully. Jan 17 00:01:13.668639 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:01:13.672734 systemd-logind[2017]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:01:13.674884 systemd-logind[2017]: Removed session 13. 
Jan 17 00:01:18.755706 systemd[1]: Started sshd@13-172.31.29.179:22-68.220.241.50:49194.service - OpenSSH per-connection server daemon (68.220.241.50:49194). Jan 17 00:01:19.312486 sshd[4489]: Accepted publickey for core from 68.220.241.50 port 49194 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:19.315066 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:19.323713 systemd-logind[2017]: New session 14 of user core. Jan 17 00:01:19.332490 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:01:19.812330 sshd[4489]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:19.820007 systemd-logind[2017]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:01:19.821825 systemd[1]: sshd@13-172.31.29.179:22-68.220.241.50:49194.service: Deactivated successfully. Jan 17 00:01:19.827921 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:01:19.830848 systemd-logind[2017]: Removed session 14. Jan 17 00:01:19.912894 systemd[1]: Started sshd@14-172.31.29.179:22-68.220.241.50:49198.service - OpenSSH per-connection server daemon (68.220.241.50:49198). Jan 17 00:01:20.451745 sshd[4503]: Accepted publickey for core from 68.220.241.50 port 49198 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:20.454653 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:20.463249 systemd-logind[2017]: New session 15 of user core. Jan 17 00:01:20.473495 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:01:21.036805 sshd[4503]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:21.041888 systemd[1]: sshd@14-172.31.29.179:22-68.220.241.50:49198.service: Deactivated successfully. Jan 17 00:01:21.046143 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:01:21.050761 systemd-logind[2017]: Session 15 logged out. Waiting for processes to exit. 
Jan 17 00:01:21.053157 systemd-logind[2017]: Removed session 15. Jan 17 00:01:21.123705 systemd[1]: Started sshd@15-172.31.29.179:22-68.220.241.50:49204.service - OpenSSH per-connection server daemon (68.220.241.50:49204). Jan 17 00:01:21.635514 sshd[4514]: Accepted publickey for core from 68.220.241.50 port 49204 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:21.638181 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:21.646878 systemd-logind[2017]: New session 16 of user core. Jan 17 00:01:21.655488 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:01:22.961670 sshd[4514]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:22.968949 systemd[1]: sshd@15-172.31.29.179:22-68.220.241.50:49204.service: Deactivated successfully. Jan 17 00:01:22.973269 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:01:22.975783 systemd-logind[2017]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:01:22.978049 systemd-logind[2017]: Removed session 16. Jan 17 00:01:23.056723 systemd[1]: Started sshd@16-172.31.29.179:22-68.220.241.50:36254.service - OpenSSH per-connection server daemon (68.220.241.50:36254). Jan 17 00:01:23.549407 sshd[4552]: Accepted publickey for core from 68.220.241.50 port 36254 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:23.552161 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:23.559459 systemd-logind[2017]: New session 17 of user core. Jan 17 00:01:23.569470 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:01:24.267734 sshd[4552]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:24.274172 systemd[1]: sshd@16-172.31.29.179:22-68.220.241.50:36254.service: Deactivated successfully. Jan 17 00:01:24.279022 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 17 00:01:24.283996 systemd-logind[2017]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:01:24.285907 systemd-logind[2017]: Removed session 17. Jan 17 00:01:24.371732 systemd[1]: Started sshd@17-172.31.29.179:22-68.220.241.50:36256.service - OpenSSH per-connection server daemon (68.220.241.50:36256). Jan 17 00:01:24.884621 sshd[4563]: Accepted publickey for core from 68.220.241.50 port 36256 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:24.887346 sshd[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:24.894933 systemd-logind[2017]: New session 18 of user core. Jan 17 00:01:24.904484 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:01:25.354623 sshd[4563]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:25.360325 systemd-logind[2017]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:01:25.360946 systemd[1]: sshd@17-172.31.29.179:22-68.220.241.50:36256.service: Deactivated successfully. Jan 17 00:01:25.364116 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:01:25.369597 systemd-logind[2017]: Removed session 18. Jan 17 00:01:30.447745 systemd[1]: Started sshd@18-172.31.29.179:22-68.220.241.50:36272.service - OpenSSH per-connection server daemon (68.220.241.50:36272). Jan 17 00:01:30.961799 sshd[4597]: Accepted publickey for core from 68.220.241.50 port 36272 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:30.964613 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:30.971920 systemd-logind[2017]: New session 19 of user core. Jan 17 00:01:30.982463 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:01:31.430587 sshd[4597]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:31.435538 systemd[1]: sshd@18-172.31.29.179:22-68.220.241.50:36272.service: Deactivated successfully. 
Jan 17 00:01:31.440475 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:01:31.444445 systemd-logind[2017]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:01:31.446539 systemd-logind[2017]: Removed session 19. Jan 17 00:01:36.529688 systemd[1]: Started sshd@19-172.31.29.179:22-68.220.241.50:60184.service - OpenSSH per-connection server daemon (68.220.241.50:60184). Jan 17 00:01:37.030419 sshd[4633]: Accepted publickey for core from 68.220.241.50 port 60184 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:37.032130 sshd[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:37.040812 systemd-logind[2017]: New session 20 of user core. Jan 17 00:01:37.048475 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:01:37.499495 sshd[4633]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:37.506135 systemd-logind[2017]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:01:37.507936 systemd[1]: sshd@19-172.31.29.179:22-68.220.241.50:60184.service: Deactivated successfully. Jan 17 00:01:37.512553 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:01:37.514578 systemd-logind[2017]: Removed session 20. Jan 17 00:01:42.610674 systemd[1]: Started sshd@20-172.31.29.179:22-68.220.241.50:43402.service - OpenSSH per-connection server daemon (68.220.241.50:43402). Jan 17 00:01:43.147450 sshd[4672]: Accepted publickey for core from 68.220.241.50 port 43402 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:43.149173 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:43.157051 systemd-logind[2017]: New session 21 of user core. Jan 17 00:01:43.172500 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 17 00:01:43.635378 sshd[4672]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:43.641522 systemd[1]: sshd@20-172.31.29.179:22-68.220.241.50:43402.service: Deactivated successfully. Jan 17 00:01:43.645221 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:01:43.648287 systemd-logind[2017]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:01:43.650881 systemd-logind[2017]: Removed session 21. Jan 17 00:01:58.961704 systemd[1]: cri-containerd-3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144.scope: Deactivated successfully. Jan 17 00:01:58.962408 systemd[1]: cri-containerd-3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144.scope: Consumed 6.465s CPU time, 19.7M memory peak, 0B memory swap peak. Jan 17 00:01:59.015101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144-rootfs.mount: Deactivated successfully. Jan 17 00:01:59.016295 containerd[2027]: time="2026-01-17T00:01:59.014710558Z" level=info msg="shim disconnected" id=3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144 namespace=k8s.io Jan 17 00:01:59.016295 containerd[2027]: time="2026-01-17T00:01:59.016274218Z" level=warning msg="cleaning up after shim disconnected" id=3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144 namespace=k8s.io Jan 17 00:01:59.017000 containerd[2027]: time="2026-01-17T00:01:59.016301326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:59.222426 kubelet[3392]: I0117 00:01:59.220871 3392 scope.go:117] "RemoveContainer" containerID="3fc1431a53c5d595970d218b7789a20621dd5651fe055673a484e06f697ea144" Jan 17 00:01:59.226041 containerd[2027]: time="2026-01-17T00:01:59.225962999Z" level=info msg="CreateContainer within sandbox \"1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:01:59.246146 
containerd[2027]: time="2026-01-17T00:01:59.246062387Z" level=info msg="CreateContainer within sandbox \"1c570939cbcb37ffd10d2c3afb783670ae0ecd7e4d3660b5dec9203ffde26e3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"909d17d15958ffbe7a022e44d17fce032d4a2db4deb6c54866dba4917dff49a1\"" Jan 17 00:01:59.246979 containerd[2027]: time="2026-01-17T00:01:59.246907307Z" level=info msg="StartContainer for \"909d17d15958ffbe7a022e44d17fce032d4a2db4deb6c54866dba4917dff49a1\"" Jan 17 00:01:59.305610 systemd[1]: Started cri-containerd-909d17d15958ffbe7a022e44d17fce032d4a2db4deb6c54866dba4917dff49a1.scope - libcontainer container 909d17d15958ffbe7a022e44d17fce032d4a2db4deb6c54866dba4917dff49a1. Jan 17 00:01:59.388626 containerd[2027]: time="2026-01-17T00:01:59.388542647Z" level=info msg="StartContainer for \"909d17d15958ffbe7a022e44d17fce032d4a2db4deb6c54866dba4917dff49a1\" returns successfully" Jan 17 00:02:01.038020 kubelet[3392]: E0117 00:02:01.037940 3392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": context deadline exceeded" Jan 17 00:02:02.126800 systemd[1]: cri-containerd-3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3.scope: Deactivated successfully. Jan 17 00:02:02.129408 systemd[1]: cri-containerd-3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3.scope: Consumed 6.086s CPU time, 15.5M memory peak, 0B memory swap peak. Jan 17 00:02:02.182384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3-rootfs.mount: Deactivated successfully. 
Jan 17 00:02:02.185501 containerd[2027]: time="2026-01-17T00:02:02.185394781Z" level=info msg="shim disconnected" id=3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3 namespace=k8s.io Jan 17 00:02:02.186106 containerd[2027]: time="2026-01-17T00:02:02.185515789Z" level=warning msg="cleaning up after shim disconnected" id=3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3 namespace=k8s.io Jan 17 00:02:02.186106 containerd[2027]: time="2026-01-17T00:02:02.185541925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:02.235380 kubelet[3392]: I0117 00:02:02.235328 3392 scope.go:117] "RemoveContainer" containerID="3637546aff2b6b2b53054e860af15950aca5b5a894e1f3cf942e8ecd008714c3" Jan 17 00:02:02.239760 containerd[2027]: time="2026-01-17T00:02:02.239663366Z" level=info msg="CreateContainer within sandbox \"a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:02:02.260863 containerd[2027]: time="2026-01-17T00:02:02.260791274Z" level=info msg="CreateContainer within sandbox \"a755cb338d787b1bf113a281b54e6394481140af849eef1147071d618ca0a730\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a673e21c9898189f46e60b87212a1ba8dae403252ada72fec46dde8b2e58fae2\"" Jan 17 00:02:02.262441 containerd[2027]: time="2026-01-17T00:02:02.262385006Z" level=info msg="StartContainer for \"a673e21c9898189f46e60b87212a1ba8dae403252ada72fec46dde8b2e58fae2\"" Jan 17 00:02:02.329152 systemd[1]: Started cri-containerd-a673e21c9898189f46e60b87212a1ba8dae403252ada72fec46dde8b2e58fae2.scope - libcontainer container a673e21c9898189f46e60b87212a1ba8dae403252ada72fec46dde8b2e58fae2. 
Jan 17 00:02:02.406096 containerd[2027]: time="2026-01-17T00:02:02.406024562Z" level=info msg="StartContainer for \"a673e21c9898189f46e60b87212a1ba8dae403252ada72fec46dde8b2e58fae2\" returns successfully" Jan 17 00:02:11.039992 kubelet[3392]: E0117 00:02:11.038435 3392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-179?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"