Jan 23 23:53:56.268144 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:53:56.268225 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:53:56.268254 kernel: KASLR disabled due to lack of seed
Jan 23 23:53:56.268270 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:53:56.268287 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:53:56.268302 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:53:56.268320 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:53:56.268336 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:53:56.268352 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:53:56.268367 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:53:56.268388 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:53:56.268403 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:53:56.268419 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:53:56.268435 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:53:56.268453 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:53:56.268474 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:53:56.268491 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:53:56.268508 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:53:56.268524 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:53:56.268541 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:53:56.268557 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:53:56.268574 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:56.268590 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:53:56.268606 kernel: Zone ranges:
Jan 23 23:53:56.268623 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:53:56.268639 kernel:   DMA32    empty
Jan 23 23:53:56.268660 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:53:56.268677 kernel: Movable zone start for each node
Jan 23 23:53:56.268693 kernel: Early memory node ranges
Jan 23 23:53:56.268709 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:53:56.268725 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:53:56.268742 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:53:56.268758 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:53:56.268775 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:53:56.268791 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:53:56.268808 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:53:56.268826 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:53:56.268842 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:56.268863 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:53:56.268880 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:53:56.268904 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:53:56.268921 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:53:56.268939 kernel: psci: Trusted OS migration not required
Jan 23 23:53:56.268960 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:53:56.268978 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:53:56.268995 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:53:56.269012 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:53:56.269030 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:53:56.269047 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:53:56.269065 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:53:56.269082 kernel: CPU features: detected: Spectre-v2
Jan 23 23:53:56.269100 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:53:56.269117 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:53:56.269135 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:53:56.269158 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:53:56.269175 kernel: alternatives: applying boot alternatives
Jan 23 23:53:56.273380 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:56.273405 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:53:56.273423 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:53:56.273441 kernel: Fallback order for Node 0: 0
Jan 23 23:53:56.273459 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 23 23:53:56.273476 kernel: Policy zone: Normal
Jan 23 23:53:56.273493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:53:56.273511 kernel: software IO TLB: area num 2.
Jan 23 23:53:56.273528 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:53:56.273558 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:53:56.273576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:53:56.273593 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:53:56.273612 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:53:56.273630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:53:56.273647 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:53:56.273665 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:53:56.273682 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:53:56.273700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:53:56.273717 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:53:56.273734 kernel: GICv3: 96 SPIs implemented
Jan 23 23:53:56.273756 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:53:56.273774 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:53:56.273791 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:53:56.273809 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:53:56.273826 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:53:56.273843 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:53:56.273861 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:53:56.273879 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:53:56.273896 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:53:56.273914 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:53:56.273931 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:53:56.273948 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:53:56.273970 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:53:56.273988 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:53:56.274005 kernel: Console: colour dummy device 80x25
Jan 23 23:53:56.274024 kernel: printk: console [tty1] enabled
Jan 23 23:53:56.274042 kernel: ACPI: Core revision 20230628
Jan 23 23:53:56.274060 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:53:56.274078 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:53:56.274096 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:53:56.274113 kernel: landlock: Up and running.
Jan 23 23:53:56.274135 kernel: SELinux:  Initializing.
Jan 23 23:53:56.274153 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:56.274172 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:56.274211 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:56.274232 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:56.274251 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:53:56.274269 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:53:56.274286 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:53:56.274304 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:53:56.274328 kernel: Remapping and enabling EFI services.
Jan 23 23:53:56.274346 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:53:56.274363 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:53:56.274381 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:53:56.274399 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:53:56.274417 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:53:56.274434 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:53:56.274452 kernel: SMP: Total of 2 processors activated.
Jan 23 23:53:56.274469 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:53:56.274491 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:53:56.274509 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:53:56.274527 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:53:56.274556 kernel: alternatives: applying system-wide alternatives
Jan 23 23:53:56.274579 kernel: devtmpfs: initialized
Jan 23 23:53:56.274598 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:53:56.274616 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:53:56.274634 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:53:56.274653 kernel: SMBIOS 3.0.0 present.
Jan 23 23:53:56.274676 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:53:56.274694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:53:56.274713 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:53:56.274732 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:53:56.274751 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:53:56.274769 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:53:56.274788 kernel: audit: type=2000 audit(0.330:1): state=initialized audit_enabled=0 res=1
Jan 23 23:53:56.274806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:53:56.274829 kernel: cpuidle: using governor menu
Jan 23 23:53:56.274847 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:53:56.274866 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:53:56.274884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:53:56.274903 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:53:56.274921 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:53:56.274939 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:53:56.274958 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:53:56.274976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:53:56.274999 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:53:56.275018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:53:56.275036 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:53:56.275054 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:53:56.275073 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:53:56.275091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:53:56.275110 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:53:56.275128 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:53:56.275147 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:53:56.275169 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:53:56.279271 kernel: ACPI: Interpreter enabled
Jan 23 23:53:56.279303 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:53:56.279322 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:53:56.279341 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:53:56.279672 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:53:56.279887 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:53:56.280089 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:53:56.280344 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:53:56.280564 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:53:56.280593 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:53:56.280613 kernel: acpiphp: Slot [1] registered
Jan 23 23:53:56.280632 kernel: acpiphp: Slot [2] registered
Jan 23 23:53:56.280651 kernel: acpiphp: Slot [3] registered
Jan 23 23:53:56.280670 kernel: acpiphp: Slot [4] registered
Jan 23 23:53:56.280689 kernel: acpiphp: Slot [5] registered
Jan 23 23:53:56.280717 kernel: acpiphp: Slot [6] registered
Jan 23 23:53:56.280736 kernel: acpiphp: Slot [7] registered
Jan 23 23:53:56.280756 kernel: acpiphp: Slot [8] registered
Jan 23 23:53:56.280776 kernel: acpiphp: Slot [9] registered
Jan 23 23:53:56.280795 kernel: acpiphp: Slot [10] registered
Jan 23 23:53:56.280814 kernel: acpiphp: Slot [11] registered
Jan 23 23:53:56.280834 kernel: acpiphp: Slot [12] registered
Jan 23 23:53:56.280853 kernel: acpiphp: Slot [13] registered
Jan 23 23:53:56.280872 kernel: acpiphp: Slot [14] registered
Jan 23 23:53:56.280891 kernel: acpiphp: Slot [15] registered
Jan 23 23:53:56.280918 kernel: acpiphp: Slot [16] registered
Jan 23 23:53:56.280938 kernel: acpiphp: Slot [17] registered
Jan 23 23:53:56.280957 kernel: acpiphp: Slot [18] registered
Jan 23 23:53:56.280976 kernel: acpiphp: Slot [19] registered
Jan 23 23:53:56.280995 kernel: acpiphp: Slot [20] registered
Jan 23 23:53:56.281014 kernel: acpiphp: Slot [21] registered
Jan 23 23:53:56.281033 kernel: acpiphp: Slot [22] registered
Jan 23 23:53:56.281051 kernel: acpiphp: Slot [23] registered
Jan 23 23:53:56.281070 kernel: acpiphp: Slot [24] registered
Jan 23 23:53:56.281094 kernel: acpiphp: Slot [25] registered
Jan 23 23:53:56.281113 kernel: acpiphp: Slot [26] registered
Jan 23 23:53:56.281131 kernel: acpiphp: Slot [27] registered
Jan 23 23:53:56.281150 kernel: acpiphp: Slot [28] registered
Jan 23 23:53:56.281168 kernel: acpiphp: Slot [29] registered
Jan 23 23:53:56.284426 kernel: acpiphp: Slot [30] registered
Jan 23 23:53:56.284462 kernel: acpiphp: Slot [31] registered
Jan 23 23:53:56.284482 kernel: PCI host bridge to bus 0000:00
Jan 23 23:53:56.284774 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:53:56.285006 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:53:56.285242 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:56.285438 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:53:56.285681 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:53:56.285925 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:53:56.286158 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:53:56.288879 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:53:56.289139 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:53:56.289565 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:56.289817 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:53:56.290038 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:53:56.290405 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:53:56.290746 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:53:56.291011 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:56.291281 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:53:56.291490 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:53:56.291688 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:56.291719 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:53:56.291741 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:53:56.291808 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:53:56.291830 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:53:56.291864 kernel: iommu: Default domain type: Translated
Jan 23 23:53:56.291884 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:53:56.291904 kernel: efivars: Registered efivars operations
Jan 23 23:53:56.291925 kernel: vgaarb: loaded
Jan 23 23:53:56.291945 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:53:56.291965 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:53:56.291985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:53:56.292006 kernel: pnp: PnP ACPI init
Jan 23 23:53:56.292352 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:53:56.292405 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:53:56.292425 kernel: NET: Registered PF_INET protocol family
Jan 23 23:53:56.292445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:53:56.292466 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:53:56.292487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:53:56.292508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:53:56.292527 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:53:56.292547 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:53:56.292575 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:56.292596 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:56.292616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:53:56.292637 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:53:56.292657 kernel: kvm [1]: HYP mode not available
Jan 23 23:53:56.292677 kernel: Initialise system trusted keyrings
Jan 23 23:53:56.292695 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:53:56.292715 kernel: Key type asymmetric registered
Jan 23 23:53:56.292739 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:53:56.292765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:53:56.292786 kernel: io scheduler mq-deadline registered
Jan 23 23:53:56.292806 kernel: io scheduler kyber registered
Jan 23 23:53:56.292825 kernel: io scheduler bfq registered
Jan 23 23:53:56.293140 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:53:56.293212 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:53:56.293275 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:53:56.293296 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:53:56.293316 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:53:56.293349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:53:56.293370 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:53:56.293644 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:53:56.293677 kernel: printk: console [ttyS0] disabled
Jan 23 23:53:56.293697 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:53:56.293718 kernel: printk: console [ttyS0] enabled
Jan 23 23:53:56.293737 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:53:56.293757 kernel: thunder_xcv, ver 1.0
Jan 23 23:53:56.293776 kernel: thunder_bgx, ver 1.0
Jan 23 23:53:56.293805 kernel: nicpf, ver 1.0
Jan 23 23:53:56.293825 kernel: nicvf, ver 1.0
Jan 23 23:53:56.294075 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:53:56.295501 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:53:55 UTC (1769212435)
Jan 23 23:53:56.295550 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:53:56.295571 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:53:56.295590 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:53:56.295610 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:53:56.295641 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:53:56.295661 kernel: Segment Routing with IPv6
Jan 23 23:53:56.295679 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:53:56.295698 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:53:56.295717 kernel: Key type dns_resolver registered
Jan 23 23:53:56.295737 kernel: registered taskstats version 1
Jan 23 23:53:56.295756 kernel: Loading compiled-in X.509 certificates
Jan 23 23:53:56.295776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:53:56.295795 kernel: Key type .fscrypt registered
Jan 23 23:53:56.295820 kernel: Key type fscrypt-provisioning registered
Jan 23 23:53:56.295839 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:53:56.295858 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:53:56.295876 kernel: ima: No architecture policies found
Jan 23 23:53:56.295895 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:53:56.295914 kernel: clk: Disabling unused clocks
Jan 23 23:53:56.295932 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:53:56.295952 kernel: Run /init as init process
Jan 23 23:53:56.295971 kernel:   with arguments:
Jan 23 23:53:56.295995 kernel:     /init
Jan 23 23:53:56.296014 kernel:   with environment:
Jan 23 23:53:56.296032 kernel:     HOME=/
Jan 23 23:53:56.296050 kernel:     TERM=linux
Jan 23 23:53:56.296076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:53:56.296101 systemd[1]: Detected virtualization amazon.
Jan 23 23:53:56.296122 systemd[1]: Detected architecture arm64.
Jan 23 23:53:56.296142 systemd[1]: Running in initrd.
Jan 23 23:53:56.296168 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:53:56.299484 systemd[1]: Hostname set to .
Jan 23 23:53:56.299526 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:53:56.299549 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:53:56.299573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:53:56.299597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:53:56.299622 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:53:56.299646 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:53:56.299681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:53:56.299704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:53:56.299729 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:53:56.299750 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:53:56.299773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:53:56.299794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:53:56.299821 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:53:56.299843 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:53:56.299864 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:53:56.299887 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:53:56.299908 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:53:56.299929 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:53:56.299950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:53:56.299972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:53:56.299993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:53:56.300020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:53:56.300041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:53:56.300062 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:53:56.300083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:53:56.300104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:53:56.300125 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:53:56.300145 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:53:56.300167 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:53:56.300229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:53:56.300266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:56.300288 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:53:56.300312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:53:56.300334 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:53:56.300417 systemd-journald[251]: Collecting audit messages is disabled.
Jan 23 23:53:56.300476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:53:56.300500 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:53:56.300521 kernel: Bridge firewalling registered
Jan 23 23:53:56.300547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:53:56.300569 systemd-journald[251]: Journal started
Jan 23 23:53:56.300608 systemd-journald[251]: Runtime Journal (/run/log/journal/ec28ee77a2d537222dcb6e16525c1598) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:53:56.257409 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:53:56.311395 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:53:56.291857 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:53:56.318506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:56.325802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:53:56.344671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:56.362719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:53:56.377604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:53:56.388811 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:53:56.419355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:53:56.440828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:56.448017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:53:56.465620 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:53:56.471940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:53:56.485488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:53:56.505261 dracut-cmdline[287]: dracut-dracut-053
Jan 23 23:53:56.513766 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:56.581913 systemd-resolved[289]: Positive Trust Anchors:
Jan 23 23:53:56.581955 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:53:56.582018 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:53:56.677243 kernel: SCSI subsystem initialized
Jan 23 23:53:56.685292 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:53:56.700239 kernel: iscsi: registered transport (tcp)
Jan 23 23:53:56.724050 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:53:56.724159 kernel: QLogic iSCSI HBA Driver
Jan 23 23:53:56.822446 kernel: random: crng init done
Jan 23 23:53:56.822739 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 23 23:53:56.827285 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:53:56.832509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:53:56.865345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:53:56.875520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:53:56.923289 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:53:56.923384 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:53:56.925305 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:53:56.996268 kernel: raid6: neonx8 gen() 6660 MB/s
Jan 23 23:53:57.013244 kernel: raid6: neonx4 gen() 6484 MB/s
Jan 23 23:53:57.031255 kernel: raid6: neonx2 gen() 5424 MB/s
Jan 23 23:53:57.048252 kernel: raid6: neonx1 gen() 3928 MB/s
Jan 23 23:53:57.065252 kernel: raid6: int64x8 gen() 3763 MB/s
Jan 23 23:53:57.083254 kernel: raid6: int64x4 gen() 3695 MB/s
Jan 23 23:53:57.100246 kernel: raid6: int64x2 gen() 3524 MB/s
Jan 23 23:53:57.118355 kernel: raid6: int64x1 gen() 2718 MB/s
Jan 23 23:53:57.118444 kernel: raid6: using algorithm neonx8 gen() 6660 MB/s
Jan 23 23:53:57.137334 kernel: raid6: .... xor() 4781 MB/s, rmw enabled
Jan 23 23:53:57.137413 kernel: raid6: using neon recovery algorithm
Jan 23 23:53:57.146964 kernel: xor: measuring software checksum speed
Jan 23 23:53:57.147040 kernel: 8regs : 11028 MB/sec
Jan 23 23:53:57.149552 kernel: 32regs : 10699 MB/sec
Jan 23 23:53:57.149620 kernel: arm64_neon : 9556 MB/sec
Jan 23 23:53:57.149645 kernel: xor: using function: 8regs (11028 MB/sec)
Jan 23 23:53:57.236258 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:53:57.259026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:53:57.277508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:53:57.314572 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 23 23:53:57.325015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:53:57.337517 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:53:57.376445 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 23 23:53:57.439649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:53:57.450539 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:53:57.576770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:53:57.592545 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:53:57.637392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:53:57.650338 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:53:57.654690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:53:57.657856 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:53:57.675999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:53:57.712027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:53:57.807731 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:53:57.807839 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:53:57.816235 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:53:57.816732 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:53:57.820368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:53:57.822885 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:57.829244 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:9e:24:59:65:51
Jan 23 23:53:57.833116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:57.835883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:53:57.836244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:57.840735 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:53:57.841423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:57.865714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:57.884250 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:53:57.887871 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:53:57.900252 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:53:57.907379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:57.918691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:53:57.918774 kernel: GPT:9289727 != 33554431
Jan 23 23:53:57.918805 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:53:57.919703 kernel: GPT:9289727 != 33554431
Jan 23 23:53:57.921008 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:53:57.922118 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:57.923570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:57.964627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:58.054222 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Jan 23 23:53:58.080245 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (540)
Jan 23 23:53:58.144164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:53:58.167675 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:53:58.201732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:53:58.234719 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:53:58.241101 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:53:58.255495 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:53:58.279819 disk-uuid[662]: Primary Header is updated.
Jan 23 23:53:58.279819 disk-uuid[662]: Secondary Entries is updated.
Jan 23 23:53:58.279819 disk-uuid[662]: Secondary Header is updated.
Jan 23 23:53:58.296240 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:58.310246 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:59.312286 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:59.314742 disk-uuid[663]: The operation has completed successfully.
Jan 23 23:53:59.524632 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:53:59.526690 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:53:59.574491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:53:59.596342 sh[921]: Success
Jan 23 23:53:59.621231 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:53:59.745625 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:53:59.751905 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:53:59.764409 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:53:59.806580 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:53:59.806643 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:53:59.806672 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:53:59.810139 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:53:59.810174 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:53:59.896233 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:53:59.908998 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:53:59.914123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:53:59.924691 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:53:59.934894 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:53:59.982304 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:53:59.982399 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:53:59.984517 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:00.000285 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:00.021936 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:54:00.028373 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:00.036885 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:54:00.050590 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:54:00.131763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:00.145569 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:00.207755 systemd-networkd[1113]: lo: Link UP
Jan 23 23:54:00.207777 systemd-networkd[1113]: lo: Gained carrier
Jan 23 23:54:00.213616 systemd-networkd[1113]: Enumeration completed
Jan 23 23:54:00.214727 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:00.214735 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:00.215541 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:00.219872 systemd-networkd[1113]: eth0: Link UP
Jan 23 23:54:00.219881 systemd-networkd[1113]: eth0: Gained carrier
Jan 23 23:54:00.219900 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:00.220122 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:00.245315 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.21.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:00.477695 ignition[1052]: Ignition 2.19.0
Jan 23 23:54:00.478317 ignition[1052]: Stage: fetch-offline
Jan 23 23:54:00.480101 ignition[1052]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:00.480131 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:00.482372 ignition[1052]: Ignition finished successfully
Jan 23 23:54:00.491051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:00.504595 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:54:00.546369 ignition[1124]: Ignition 2.19.0
Jan 23 23:54:00.546403 ignition[1124]: Stage: fetch
Jan 23 23:54:00.548350 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:00.548390 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:00.548592 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:00.581105 ignition[1124]: PUT result: OK
Jan 23 23:54:00.585516 ignition[1124]: parsed url from cmdline: ""
Jan 23 23:54:00.585537 ignition[1124]: no config URL provided
Jan 23 23:54:00.585562 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:54:00.585601 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:54:00.585647 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:00.590716 ignition[1124]: PUT result: OK
Jan 23 23:54:00.590879 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:54:00.594010 ignition[1124]: GET result: OK
Jan 23 23:54:00.594233 ignition[1124]: parsing config with SHA512: 61744b429db0e844793da021aba2eddfb98064f41cf11f1c849876b7b1daa9f80ce938b11e569127e87d95374eb89fb232d1312ca63d66ab1972272b868c48b2
Jan 23 23:54:00.608801 unknown[1124]: fetched base config from "system"
Jan 23 23:54:00.611661 unknown[1124]: fetched base config from "system"
Jan 23 23:54:00.613155 unknown[1124]: fetched user config from "aws"
Jan 23 23:54:00.616662 ignition[1124]: fetch: fetch complete
Jan 23 23:54:00.616676 ignition[1124]: fetch: fetch passed
Jan 23 23:54:00.616812 ignition[1124]: Ignition finished successfully
Jan 23 23:54:00.626173 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:00.640768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:54:00.673952 ignition[1131]: Ignition 2.19.0
Jan 23 23:54:00.673983 ignition[1131]: Stage: kargs
Jan 23 23:54:00.676105 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:00.676139 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:00.676620 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:00.682540 ignition[1131]: PUT result: OK
Jan 23 23:54:00.690060 ignition[1131]: kargs: kargs passed
Jan 23 23:54:00.690250 ignition[1131]: Ignition finished successfully
Jan 23 23:54:00.695825 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:00.708593 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:54:00.742401 ignition[1138]: Ignition 2.19.0
Jan 23 23:54:00.742433 ignition[1138]: Stage: disks
Jan 23 23:54:00.744527 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:00.744556 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:00.744744 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:00.747080 ignition[1138]: PUT result: OK
Jan 23 23:54:00.759997 ignition[1138]: disks: disks passed
Jan 23 23:54:00.760124 ignition[1138]: Ignition finished successfully
Jan 23 23:54:00.764665 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:54:00.765665 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:00.772561 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:54:00.785497 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:00.788216 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:00.790722 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:00.808501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:54:00.856026 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:54:00.871132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:54:00.883595 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:54:00.971255 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:54:00.971710 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:54:00.976000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:00.992432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:01.000536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:54:01.005941 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:54:01.006046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:54:01.006097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:01.037595 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:54:01.043841 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:54:01.070241 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1165)
Jan 23 23:54:01.075423 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:01.075512 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:01.076993 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:01.090239 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:01.095351 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:01.261646 systemd-networkd[1113]: eth0: Gained IPv6LL
Jan 23 23:54:01.329625 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:54:01.348340 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:54:01.358848 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:54:01.368971 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:54:01.670857 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:01.686540 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:54:01.692458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:01.713059 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:54:01.717421 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:01.763294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:01.769615 ignition[1278]: INFO : Ignition 2.19.0
Jan 23 23:54:01.769615 ignition[1278]: INFO : Stage: mount
Jan 23 23:54:01.769615 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:01.769615 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:01.769615 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:01.769615 ignition[1278]: INFO : PUT result: OK
Jan 23 23:54:01.786455 ignition[1278]: INFO : mount: mount passed
Jan 23 23:54:01.786455 ignition[1278]: INFO : Ignition finished successfully
Jan 23 23:54:01.782693 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:54:01.798553 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:54:01.983544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:02.011268 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1289)
Jan 23 23:54:02.016089 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:02.016134 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:02.016173 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:02.024232 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:02.027779 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:02.075077 ignition[1306]: INFO : Ignition 2.19.0
Jan 23 23:54:02.075077 ignition[1306]: INFO : Stage: files
Jan 23 23:54:02.079473 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:02.079473 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:02.079473 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:02.087988 ignition[1306]: INFO : PUT result: OK
Jan 23 23:54:02.091779 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:54:02.094872 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:54:02.094872 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:54:02.135452 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:54:02.139061 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:54:02.142771 unknown[1306]: wrote ssh authorized keys file for user: core
Jan 23 23:54:02.146004 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:54:02.150075 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:02.150075 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:54:02.225755 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:54:02.392307 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:02.392307 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:02.401315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 23:54:02.853931 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:54:03.242786 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:03.242786 ignition[1306]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:03.251612 ignition[1306]: INFO : files: files passed
Jan 23 23:54:03.251612 ignition[1306]: INFO : Ignition finished successfully
Jan 23 23:54:03.285515 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:54:03.297594 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:54:03.303803 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:54:03.322686 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:54:03.324943 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:54:03.346713 initrd-setup-root-after-ignition[1334]: grep:
Jan 23 23:54:03.350233 initrd-setup-root-after-ignition[1338]: grep:
Jan 23 23:54:03.350233 initrd-setup-root-after-ignition[1334]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:03.350233 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:03.363570 initrd-setup-root-after-ignition[1338]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:03.354382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:54:03.363286 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:54:03.384580 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:54:03.439646 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:54:03.440056 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:54:03.448583 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:54:03.451270 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:54:03.453718 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:54:03.468445 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:54:03.501239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:54:03.513486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:54:03.541195 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:03.546689 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:03.547032 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:54:03.554034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:54:03.554302 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:54:03.559611 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:54:03.562302 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:54:03.569935 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:54:03.572670 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:03.575940 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:03.578934 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:54:03.585081 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:54:03.586099 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:54:03.593424 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:54:03.602910 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:54:03.608303 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:54:03.608575 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:54:03.614245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:03.619458 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:03.625013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:54:03.626752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:03.630564 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:54:03.630853 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:54:03.637964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:54:03.638372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:54:03.643459 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:54:03.643845 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:54:03.681674 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:54:03.691399 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:03.693723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:54:03.694103 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:03.698822 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:54:03.699135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:54:03.720308 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:54:03.724685 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:54:03.751359 ignition[1358]: INFO : Ignition 2.19.0
Jan 23 23:54:03.753803 ignition[1358]: INFO : Stage: umount
Jan 23 23:54:03.757336 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:03.757336 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:03.757336 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:03.765675 ignition[1358]: INFO : PUT result: OK
Jan 23 23:54:03.773071 ignition[1358]: INFO : umount: umount passed
Jan 23 23:54:03.776793 ignition[1358]: INFO : Ignition finished successfully
Jan 23 23:54:03.774365 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:54:03.776393 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:54:03.776619 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:54:03.791684 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:54:03.791913 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:54:03.796945 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:54:03.797074 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:03.805017 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:54:03.805157 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:03.811070 systemd[1]: Stopped target network.target - Network.
Jan 23 23:54:03.813289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:54:03.813447 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:03.818557 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:54:03.818702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:54:03.822824 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:03.823041 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:54:03.830490 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:54:03.832952 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:54:03.833051 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:54:03.838340 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:54:03.838459 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:54:03.845157 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:54:03.845421 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:54:03.846070 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:54:03.846208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:54:03.847150 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:54:03.850355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:03.862109 systemd-networkd[1113]: eth0: DHCPv6 lease lost
Jan 23 23:54:03.895879 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:54:03.898379 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:54:03.905044 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:54:03.905355 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:03.917668 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:54:03.917787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:03.927613 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:54:03.930068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 23:54:03.930240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:03.935547 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:54:03.935674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:03.950376 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:54:03.950525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:03.958371 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:54:03.958479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:03.970983 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:03.992586 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:54:03.995216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:04.011271 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:54:04.017107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:04.023994 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:54:04.024406 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:04.036707 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:54:04.037245 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:54:04.047418 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:54:04.047636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:04.053469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:54:04.053559 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:04.066119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:54:04.066279 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:04.069255 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:54:04.069368 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:54:04.086673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:54:04.086795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:04.100560 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:54:04.103385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:54:04.103527 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:04.107128 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 23:54:04.107512 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:54:04.125810 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:54:04.125929 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:04.128821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:54:04.128935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:04.155789 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:54:04.156262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:54:04.164807 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:54:04.175641 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:54:04.198932 systemd[1]: Switching root.
Jan 23 23:54:04.257238 systemd-journald[251]: Journal stopped
Jan 23 23:54:06.757810 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 23 23:54:06.757946 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 23:54:06.757998 kernel: SELinux: policy capability open_perms=1
Jan 23 23:54:06.758029 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 23:54:06.758069 kernel: SELinux: policy capability always_check_network=0
Jan 23 23:54:06.758100 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 23:54:06.758137 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 23:54:06.758176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 23:54:06.758245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 23:54:06.758278 kernel: audit: type=1403 audit(1769212444.744:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 23:54:06.758313 systemd[1]: Successfully loaded SELinux policy in 66.746ms.
Jan 23 23:54:06.758352 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.744ms.
Jan 23 23:54:06.758388 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:54:06.758422 systemd[1]: Detected virtualization amazon.
Jan 23 23:54:06.758455 systemd[1]: Detected architecture arm64.
Jan 23 23:54:06.758492 systemd[1]: Detected first boot.
Jan 23 23:54:06.758525 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:54:06.758569 zram_generator::config[1403]: No configuration found.
Jan 23 23:54:06.758607 systemd[1]: Populated /etc with preset unit settings.
Jan 23 23:54:06.758642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 23:54:06.758675 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 23:54:06.758710 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:54:06.758741 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 23:54:06.758780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 23:54:06.758813 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:54:06.758845 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 23:54:06.758877 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 23:54:06.758909 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 23:54:06.758942 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 23:54:06.758974 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 23:54:06.759004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:06.759039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:06.759072 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 23:54:06.759103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 23:54:06.759134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 23:54:06.768381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:54:06.768441 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 23:54:06.768476 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:06.768506 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 23:54:06.768538 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 23:54:06.768577 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:06.768607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 23:54:06.768639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:06.768672 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:54:06.768702 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:54:06.768733 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:54:06.768763 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 23:54:06.768796 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 23:54:06.768831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:06.768861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:06.768894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:06.768937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 23:54:06.768971 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 23:54:06.769003 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 23:54:06.769035 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 23:54:06.769065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 23:54:06.769095 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 23:54:06.769131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 23:54:06.769164 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 23:54:06.769219 systemd[1]: Reached target machines.target - Containers.
Jan 23 23:54:06.769254 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 23:54:06.769285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:06.769315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:54:06.769348 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 23:54:06.769379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:06.769420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:06.769454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:06.769487 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 23:54:06.769517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:06.769547 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 23:54:06.769579 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 23:54:06.769608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 23:54:06.769639 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 23:54:06.769668 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 23:54:06.769702 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:54:06.769732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:54:06.769764 kernel: loop: module loaded
Jan 23 23:54:06.769799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 23:54:06.769831 kernel: fuse: init (API version 7.39)
Jan 23 23:54:06.769861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 23:54:06.769893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:54:06.769925 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 23:54:06.769953 kernel: ACPI: bus type drm_connector registered
Jan 23 23:54:06.769987 systemd[1]: Stopped verity-setup.service.
Jan 23 23:54:06.770017 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 23:54:06.770049 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 23:54:06.770078 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 23:54:06.770107 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 23:54:06.770136 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 23:54:06.770166 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 23:54:06.775313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:06.775358 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 23:54:06.775409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 23:54:06.775442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:06.775473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:06.775504 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:06.775534 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:06.775573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:06.775605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:06.775635 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 23:54:06.775665 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 23:54:06.775701 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:06.775736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:06.775770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:06.775841 systemd-journald[1485]: Collecting audit messages is disabled.
Jan 23 23:54:06.775895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 23:54:06.775926 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 23:54:06.775960 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 23:54:06.775992 systemd-journald[1485]: Journal started
Jan 23 23:54:06.776051 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec28ee77a2d537222dcb6e16525c1598) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:54:06.778466 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 23:54:06.050946 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 23:54:06.106119 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 23:54:06.106956 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 23:54:06.801216 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 23:54:06.801320 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 23:54:06.809484 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:06.827433 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 23 23:54:06.842851 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 23:54:06.863642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 23:54:06.863746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:06.878027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 23:54:06.882422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:06.905278 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 23:54:06.905370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:06.922402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:54:06.943228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 23:54:06.952228 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:54:06.969170 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:54:06.968957 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 23:54:06.972584 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 23:54:06.981566 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 23:54:06.987391 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 23:54:06.993327 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 23:54:07.058056 kernel: loop0: detected capacity change from 0 to 114328
Jan 23 23:54:07.068346 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 23:54:07.080449 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 23:54:07.093596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 23 23:54:07.102778 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Jan 23 23:54:07.102821 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Jan 23 23:54:07.132302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:07.140852 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:54:07.156778 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec28ee77a2d537222dcb6e16525c1598 is 74.918ms for 910 entries.
Jan 23 23:54:07.156778 systemd-journald[1485]: System Journal (/var/log/journal/ec28ee77a2d537222dcb6e16525c1598) is 8.0M, max 195.6M, 187.6M free.
Jan 23 23:54:07.275138 systemd-journald[1485]: Received client request to flush runtime journal.
Jan 23 23:54:07.275286 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 23:54:07.275495 kernel: loop1: detected capacity change from 0 to 52536
Jan 23 23:54:07.161561 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 23:54:07.176984 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 23 23:54:07.183398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:07.220158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 23:54:07.227577 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 23 23:54:07.272792 udevadm[1546]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 23 23:54:07.283485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 23:54:07.320555 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 23:54:07.337913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:54:07.375242 kernel: loop2: detected capacity change from 0 to 114432
Jan 23 23:54:07.394554 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jan 23 23:54:07.395671 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jan 23 23:54:07.417822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:07.484242 kernel: loop3: detected capacity change from 0 to 200800
Jan 23 23:54:07.772895 kernel: loop4: detected capacity change from 0 to 114328
Jan 23 23:54:07.810242 kernel: loop5: detected capacity change from 0 to 52536
Jan 23 23:54:07.846307 kernel: loop6: detected capacity change from 0 to 114432
Jan 23 23:54:07.876231 kernel: loop7: detected capacity change from 0 to 200800
Jan 23 23:54:07.922084 (sd-merge)[1560]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 23:54:07.923337 (sd-merge)[1560]: Merged extensions into '/usr'.
Jan 23 23:54:07.931833 systemd[1]: Reloading requested from client PID 1514 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 23:54:07.931879 systemd[1]: Reloading...
Jan 23 23:54:08.137281 zram_generator::config[1589]: No configuration found.
Jan 23 23:54:08.477129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:08.488333 ldconfig[1510]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 23:54:08.598411 systemd[1]: Reloading finished in 665 ms.
Jan 23 23:54:08.641339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 23:54:08.644736 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 23:54:08.648483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 23:54:08.667608 systemd[1]: Starting ensure-sysext.service...
Jan 23 23:54:08.672563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:54:08.681606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:08.721389 systemd[1]: Reloading requested from client PID 1639 ('systemctl') (unit ensure-sysext.service)...
Jan 23 23:54:08.721640 systemd[1]: Reloading...
Jan 23 23:54:08.732261 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 23:54:08.733714 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 23:54:08.736167 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 23:54:08.737013 systemd-tmpfiles[1640]: ACLs are not supported, ignoring.
Jan 23 23:54:08.737441 systemd-tmpfiles[1640]: ACLs are not supported, ignoring.
Jan 23 23:54:08.747174 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:08.747541 systemd-tmpfiles[1640]: Skipping /boot
Jan 23 23:54:08.771450 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:08.771617 systemd-tmpfiles[1640]: Skipping /boot
Jan 23 23:54:08.820064 systemd-udevd[1641]: Using default interface naming scheme 'v255'.
Jan 23 23:54:08.949228 zram_generator::config[1667]: No configuration found.
Jan 23 23:54:09.114443 (udev-worker)[1681]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:09.351729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:09.388298 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1710)
Jan 23 23:54:09.529996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 23:54:09.530263 systemd[1]: Reloading finished in 807 ms.
Jan 23 23:54:09.571704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:09.579341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:09.725877 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 23 23:54:09.736802 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 23:54:09.739977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:09.744721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:09.750850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:09.770924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:09.773782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:09.778857 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 23:54:09.787817 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:09.807586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:09.826863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 23:54:09.836868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:09.859605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:09.862327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:09.951872 systemd[1]: Finished ensure-sysext.service.
Jan 23 23:54:09.958304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:09.960475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:09.964289 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 23 23:54:09.982045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:54:09.989593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:09.989995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:10.010951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 23:54:10.026502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:10.036527 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 23 23:54:10.048601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:10.056523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:10.063560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:10.070490 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 23:54:10.073399 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:10.073492 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 23:54:10.081584 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 23:54:10.086027 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 23:54:10.089565 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 23:54:10.095311 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:10.095691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:10.132882 lvm[1861]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:10.162263 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 23:54:10.163112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:10.165889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:10.166560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:10.184578 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 23:54:10.191941 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 23 23:54:10.198155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:10.217245 augenrules[1878]: No rules
Jan 23 23:54:10.220459 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 23 23:54:10.224120 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 23 23:54:10.232027 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 23:54:10.261024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:10.273231 lvm[1882]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:10.297498 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 23:54:10.314436 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 23:54:10.335815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 23 23:54:10.448923 systemd-networkd[1841]: lo: Link UP
Jan 23 23:54:10.449633 systemd-networkd[1841]: lo: Gained carrier
Jan 23 23:54:10.453238 systemd-networkd[1841]: Enumeration completed
Jan 23 23:54:10.453312 systemd-resolved[1842]: Positive Trust Anchors:
Jan 23 23:54:10.453341 systemd-resolved[1842]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:54:10.453405 systemd-resolved[1842]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:54:10.454382 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:10.458511 systemd-networkd[1841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:10.458530 systemd-networkd[1841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:10.461387 systemd-networkd[1841]: eth0: Link UP
Jan 23 23:54:10.461994 systemd-networkd[1841]: eth0: Gained carrier
Jan 23 23:54:10.462255 systemd-networkd[1841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:10.469694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 23:54:10.477346 systemd-networkd[1841]: eth0: DHCPv4 address 172.31.21.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:10.495503 systemd-resolved[1842]: Defaulting to hostname 'linux'.
Jan 23 23:54:10.499393 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:10.502260 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:10.504492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:10.507449 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:10.510443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 23:54:10.513798 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 23:54:10.517236 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 23:54:10.520414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 23:54:10.523637 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 23:54:10.526770 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 23:54:10.527025 systemd[1]: Reached target paths.target - Path Units.
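Aside: the DHCPv4 lease logged by systemd-networkd above (172.31.21.163/20 from gateway 172.31.16.1) can be sanity-checked with Python's standard-library `ipaddress` module. This is a hypothetical verification sketch, not part of the boot process itself.

```python
# Sanity-check the DHCPv4 lease seen in the log: the leased address,
# its /20 prefix, and the advertised gateway should all agree.
import ipaddress

iface = ipaddress.ip_interface("172.31.21.163/20")   # address as leased
net = iface.network                                   # enclosing subnet

# The /20 containing 172.31.21.163 is 172.31.16.0/20 (third octet masked with 0xF0).
assert str(net) == "172.31.16.0/20"

# The DHCP server / gateway 172.31.16.1 lies inside that subnet, as expected.
assert ipaddress.ip_address("172.31.16.1") in net
```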
Jan 23 23:54:10.529377 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:54:10.533363 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 23:54:10.539312 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 23:54:10.554756 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 23:54:10.558424 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 23:54:10.561370 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:54:10.563790 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:10.566088 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:54:10.566152 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:54:10.568625 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 23:54:10.577635 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 23:54:10.583748 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 23:54:10.593480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 23:54:10.603615 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 23:54:10.606296 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 23:54:10.612760 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 23:54:10.621709 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 23:54:10.628137 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 23:54:10.642452 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 23:54:10.646866 jq[1905]: false
Jan 23 23:54:10.648596 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 23:54:10.656562 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 23:54:10.669116 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 23:54:10.680754 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 23:54:10.681740 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 23:54:10.685643 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 23:54:10.693674 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 23:54:10.701002 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 23:54:10.702948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 23:54:10.794637 extend-filesystems[1906]: Found loop4
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found loop5
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found loop6
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found loop7
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p1
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p2
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p3
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found usr
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p4
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p6
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p7
Jan 23 23:54:10.804575 extend-filesystems[1906]: Found nvme0n1p9
Jan 23 23:54:10.804575 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9
Jan 23 23:54:10.847811 jq[1915]: true
Jan 23 23:54:10.849939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 23:54:10.855376 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 23:54:10.894833 (ntainerd)[1924]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 23:54:10.917825 tar[1921]: linux-arm64/LICENSE
Jan 23 23:54:10.918342 tar[1921]: linux-arm64/helm
Jan 23 23:54:10.928018 dbus-daemon[1904]: [system] SELinux support is enabled
Jan 23 23:54:10.938517 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 23:54:10.949617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 23:54:10.949688 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 23:54:10.953656 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 23:54:10.953724 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 23:54:10.978760 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: ----------------------------------------------------
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: corporation. Support and training for ntp-4 are
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: available at https://www.nwtime.org/support
Jan 23 23:54:10.985582 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: ----------------------------------------------------
Jan 23 23:54:10.978837 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:54:10.978861 ntpd[1908]: ----------------------------------------------------
Jan 23 23:54:10.978881 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:54:10.978902 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:54:10.978922 ntpd[1908]: corporation. Support and training for ntp-4 are
Jan 23 23:54:10.978941 ntpd[1908]: available at https://www.nwtime.org/support
Jan 23 23:54:10.978960 ntpd[1908]: ----------------------------------------------------
Jan 23 23:54:10.995234 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1841 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 23:54:11.001136 ntpd[1908]: proto: precision = 0.096 usec (-23)
Jan 23 23:54:11.005857 ntpd[1908]: 23 Jan 23:54:10 ntpd[1908]: proto: precision = 0.096 usec (-23)
Jan 23 23:54:11.006362 ntpd[1908]: basedate set to 2026-01-11
Jan 23 23:54:11.010453 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9
Jan 23 23:54:11.020971 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: basedate set to 2026-01-11
Jan 23 23:54:11.020971 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:54:11.020403 ntpd[1908]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:54:11.024598 extend-filesystems[1948]: resize2fs 1.47.1 (20-May-2024)
Jan 23 23:54:11.038623 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 23:54:11.052544 update_engine[1914]: I20260123 23:54:11.041567 1914 main.cc:92] Flatcar Update Engine starting
Jan 23 23:54:11.051026 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 23:54:11.065248 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 23 23:54:11.065348 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:54:11.065348 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:54:11.063383 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:54:11.057375 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 23:54:11.063502 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:54:11.065966 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:54:11.069621 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:54:11.069621 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listen normally on 3 eth0 172.31.21.163:123
Jan 23 23:54:11.069621 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listen normally on 4 lo [::1]:123
Jan 23 23:54:11.067960 ntpd[1908]: Listen normally on 3 eth0 172.31.21.163:123
Jan 23 23:54:11.068046 ntpd[1908]: Listen normally on 4 lo [::1]:123
Jan 23 23:54:11.075703 ntpd[1908]: bind(21) AF_INET6 fe80::49e:24ff:fe59:6551%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:54:11.079383 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 23:54:11.092753 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: bind(21) AF_INET6 fe80::49e:24ff:fe59:6551%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:54:11.092753 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: unable to create socket on eth0 (5) for fe80::49e:24ff:fe59:6551%2#123
Jan 23 23:54:11.092753 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: failed to init interface for address fe80::49e:24ff:fe59:6551%2
Jan 23 23:54:11.092753 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
Jan 23 23:54:11.075775 ntpd[1908]: unable to create socket on eth0 (5) for fe80::49e:24ff:fe59:6551%2#123
Jan 23 23:54:11.075808 ntpd[1908]: failed to init interface for address fe80::49e:24ff:fe59:6551%2
Jan 23 23:54:11.075884 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
Jan 23 23:54:11.095399 update_engine[1914]: I20260123 23:54:11.093758 1914 update_check_scheduler.cc:74] Next update check in 6m55s
Jan 23 23:54:11.100638 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 23:54:11.122452 jq[1933]: true
Jan 23 23:54:11.147378 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:11.149907 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:11.149907 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:11.147448 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:11.192648 coreos-metadata[1903]: Jan 23 23:54:11.188 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 23:54:11.192648 coreos-metadata[1903]: Jan 23 23:54:11.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 23 23:54:11.200571 coreos-metadata[1903]: Jan 23 23:54:11.193 INFO Fetch successful
Jan 23 23:54:11.204249 coreos-metadata[1903]: Jan 23 23:54:11.201 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 23 23:54:11.204249 coreos-metadata[1903]: Jan 23 23:54:11.204 INFO Fetch successful
Jan 23 23:54:11.204249 coreos-metadata[1903]: Jan 23 23:54:11.204 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 23 23:54:11.205873 coreos-metadata[1903]: Jan 23 23:54:11.205 INFO Fetch successful
Jan 23 23:54:11.205873 coreos-metadata[1903]: Jan 23 23:54:11.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 23 23:54:11.210435 coreos-metadata[1903]: Jan 23 23:54:11.209 INFO Fetch successful
Jan 23 23:54:11.210435 coreos-metadata[1903]: Jan 23 23:54:11.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 23 23:54:11.221265 coreos-metadata[1903]: Jan 23 23:54:11.213 INFO Fetch failed with 404: resource not found
Jan 23 23:54:11.221265 coreos-metadata[1903]: Jan 23 23:54:11.213 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 23 23:54:11.226495 coreos-metadata[1903]: Jan 23 23:54:11.226 INFO Fetch successful
Jan 23 23:54:11.226495 coreos-metadata[1903]: Jan 23 23:54:11.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 23 23:54:11.229305 coreos-metadata[1903]: Jan 23 23:54:11.228 INFO Fetch successful
Jan 23 23:54:11.229305 coreos-metadata[1903]: Jan 23 23:54:11.228 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 23 23:54:11.234288 coreos-metadata[1903]: Jan 23 23:54:11.233 INFO Fetch successful
Jan 23 23:54:11.234288 coreos-metadata[1903]: Jan 23 23:54:11.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 23 23:54:11.236319 coreos-metadata[1903]: Jan 23 23:54:11.236 INFO Fetch successful
Jan 23 23:54:11.236319 coreos-metadata[1903]: Jan 23 23:54:11.236 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 23 23:54:11.241032 coreos-metadata[1903]: Jan 23 23:54:11.240 INFO Fetch successful
Jan 23 23:54:11.312586 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 23:54:11.374089 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 23:54:11.374150 systemd-logind[1913]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 23 23:54:11.374614 systemd-logind[1913]: New seat seat0.
Jan 23 23:54:11.378413 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 23:54:11.392724 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 23 23:54:11.436217 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1685)
Jan 23 23:54:11.441853 extend-filesystems[1948]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 23 23:54:11.441853 extend-filesystems[1948]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 23:54:11.441853 extend-filesystems[1948]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
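Aside: the coreos-metadata entries above follow the EC2 IMDSv2 pattern visible in the log: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs of meta-data paths presenting that token, with 404 meaning the resource (here, ipv6) is simply absent. The sketch below is hypothetical illustration, not Flatcar's actual metadata-agent code; the `transport` parameter and `fetch_metadata` helper are inventions to make the flow testable without touching the network. The two `X-aws-ec2-metadata-token*` headers are the standard IMDSv2 headers.

```python
# Hedged sketch of an IMDSv2-style metadata fetch, mirroring the log above.
import urllib.request

IMDS = "http://169.254.169.254"

def default_transport(method, url, headers, timeout=2.0):
    """Perform one HTTP request and return (status, body)."""
    req = urllib.request.Request(url, method=method, headers=headers)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read().decode()

def fetch_metadata(paths, transport=default_transport, version="2021-01-03"):
    # Step 1: PUT /latest/api/token to obtain a session token (IMDSv2).
    _, token = transport("PUT", f"{IMDS}/latest/api/token",
                         {"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    results = {}
    for path in paths:
        # Step 2: GET each meta-data path, presenting the token.
        status, body = transport("GET", f"{IMDS}/{version}/meta-data/{path}",
                                 {"X-aws-ec2-metadata-token": token})
        # A 404 (as for meta-data/ipv6 in the log) means the field is absent.
        results[path] = body if status == 200 else None
    return results
```

Injecting a fake `transport` lets the token-then-fetch sequence be exercised offline, which is why the HTTP call is a parameter rather than hard-wired.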
Jan 23 23:54:11.451074 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9
Jan 23 23:54:11.464584 bash[1995]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 23:54:11.535773 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 23:54:11.536231 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 23:54:11.540620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 23:54:11.560409 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 23:54:11.568156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 23:54:11.580943 systemd[1]: Starting sshkeys.service...
Jan 23 23:54:11.826125 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 23:54:11.841715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 23:54:11.857588 containerd[1924]: time="2026-01-23T23:54:11.853305781Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 23 23:54:11.889607 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 23:54:11.982030 ntpd[1908]: bind(24) AF_INET6 fe80::49e:24ff:fe59:6551%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:54:11.983890 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: bind(24) AF_INET6 fe80::49e:24ff:fe59:6551%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:54:11.983890 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: unable to create socket on eth0 (6) for fe80::49e:24ff:fe59:6551%2#123
Jan 23 23:54:11.983890 ntpd[1908]: 23 Jan 23:54:11 ntpd[1908]: failed to init interface for address fe80::49e:24ff:fe59:6551%2
Jan 23 23:54:11.982147 ntpd[1908]: unable to create socket on eth0 (6) for fe80::49e:24ff:fe59:6551%2#123
Jan 23 23:54:11.982211 ntpd[1908]: failed to init interface for address fe80::49e:24ff:fe59:6551%2
Jan 23 23:54:12.128358 containerd[1924]: time="2026-01-23T23:54:12.126946774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.165862 containerd[1924]: time="2026-01-23T23:54:12.165775834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:54:12.169236 containerd[1924]: time="2026-01-23T23:54:12.166951114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 23 23:54:12.169236 containerd[1924]: time="2026-01-23T23:54:12.168804442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 23 23:54:12.170495 containerd[1924]: time="2026-01-23T23:54:12.170425606Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 23 23:54:12.172658 containerd[1924]: time="2026-01-23T23:54:12.171298918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.174500 containerd[1924]: time="2026-01-23T23:54:12.172957186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:54:12.174980 containerd[1924]: time="2026-01-23T23:54:12.174923614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.178400 containerd[1924]: time="2026-01-23T23:54:12.177475690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:54:12.178400 containerd[1924]: time="2026-01-23T23:54:12.178264366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.178400 containerd[1924]: time="2026-01-23T23:54:12.178335898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.178363198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.180658162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.181244878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.181528486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.181568926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.181864186Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 23 23:54:12.183368 containerd[1924]: time="2026-01-23T23:54:12.182018626Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 23:54:12.188527 locksmithd[1954]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 23:54:12.192367 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 23 23:54:12.195551 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 23 23:54:12.199383 containerd[1924]: time="2026-01-23T23:54:12.195692938Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 23 23:54:12.200316 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1951 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.200572714Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.200660746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.200701618Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.200749378Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.201063982Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.201517246Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.201866950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.201917902Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.201967846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.202004914Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.202038526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.202070758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.202110094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.202360 containerd[1924]: time="2026-01-23T23:54:12.202144702Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.207237 containerd[1924]: time="2026-01-23T23:54:12.202178602Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.207237 containerd[1924]: time="2026-01-23T23:54:12.207069118Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.207237 containerd[1924]: time="2026-01-23T23:54:12.207108622Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207158662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207778078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207820486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207869194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207901930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207934282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.207969238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208002562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208038646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208076254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208113802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208144678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208177582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208251430Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 23 23:54:12.208655 containerd[1924]: time="2026-01-23T23:54:12.208304674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.209472 containerd[1924]: time="2026-01-23T23:54:12.208338766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.209472 containerd[1924]: time="2026-01-23T23:54:12.208383790Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.211031015Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212408903Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212444231Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212481875Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212507699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212539787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212564471Z" level=info msg="NRI interface is disabled by configuration."
Jan 23 23:54:12.214833 containerd[1924]: time="2026-01-23T23:54:12.212590631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 23 23:54:12.225823 containerd[1924]: time="2026-01-23T23:54:12.223769567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 23 23:54:12.225823 containerd[1924]: time="2026-01-23T23:54:12.223950131Z" level=info msg="Connect containerd service"
Jan 23 23:54:12.225823 containerd[1924]: time="2026-01-23T23:54:12.224043023Z" level=info msg="using legacy CRI server"
Jan 23 23:54:12.225823 containerd[1924]: time="2026-01-23T23:54:12.224078075Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 23:54:12.225823 containerd[1924]: time="2026-01-23T23:54:12.224360603Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 23 23:54:12.227216 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 23 23:54:12.234536 containerd[1924]: time="2026-01-23T23:54:12.233040155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243162695Z" level=info msg="Start subscribing containerd event"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243484163Z" level=info msg="Start recovering state"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243632015Z" level=info msg="Start event monitor"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243657983Z" level=info msg="Start snapshots syncer"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243680423Z" level=info msg="Start cni network conf syncer for default"
Jan 23 23:54:12.244986 containerd[1924]: time="2026-01-23T23:54:12.243701987Z" level=info msg="Start streaming server"
Jan 23 23:54:12.253241 containerd[1924]: time="2026-01-23T23:54:12.250163147Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 23:54:12.253241 containerd[1924]: time="2026-01-23T23:54:12.250367915Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 23:54:12.253241 containerd[1924]: time="2026-01-23T23:54:12.250562111Z" level=info msg="containerd successfully booted in 0.410154s"
Jan 23 23:54:12.250722 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 23:54:12.312552 polkitd[2085]: Started polkitd version 121
Jan 23 23:54:12.333458 systemd-networkd[1841]: eth0: Gained IPv6LL
Jan 23 23:54:12.347956 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 23:54:12.352227 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 23:54:12.376014 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 23 23:54:12.386525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:12.393303 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 23:54:12.419601 coreos-metadata[2059]: Jan 23 23:54:12.418 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 23:54:12.421971 polkitd[2085]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 23:54:12.422095 polkitd[2085]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 23:54:12.434295 coreos-metadata[2059]: Jan 23 23:54:12.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 23 23:54:12.439881 coreos-metadata[2059]: Jan 23 23:54:12.437 INFO Fetch successful
Jan 23 23:54:12.439881 coreos-metadata[2059]: Jan 23 23:54:12.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 23:54:12.445013 coreos-metadata[2059]: Jan 23 23:54:12.444 INFO Fetch successful
Jan 23 23:54:12.455096 unknown[2059]: wrote ssh authorized keys file for user: core
Jan 23 23:54:12.465547 polkitd[2085]: Finished loading, compiling and executing 2 rules
Jan 23 23:54:12.469473 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 23:54:12.481315 polkitd[2085]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 23:54:12.486834 systemd[1]: Started polkit.service - Authorization Manager.
Jan 23 23:54:12.543009 update-ssh-keys[2112]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 23:54:12.548225 sshd_keygen[1947]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 23:54:12.548413 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 23:54:12.560133 systemd[1]: Finished sshkeys.service.
Jan 23 23:54:12.589653 systemd-hostnamed[1951]: Hostname set to (transient)
Jan 23 23:54:12.590541 systemd-resolved[1842]: System hostname changed to 'ip-172-31-21-163'.
Jan 23 23:54:12.610029 amazon-ssm-agent[2102]: Initializing new seelog logger
Jan 23 23:54:12.611055 amazon-ssm-agent[2102]: New Seelog Logger Creation Complete
Jan 23 23:54:12.611055 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.611055 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.612546 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 processing appconfig overrides
Jan 23 23:54:12.612820 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 23:54:12.615523 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.615686 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.616236 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 processing appconfig overrides
Jan 23 23:54:12.616498 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.616598 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.618206 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 processing appconfig overrides
Jan 23 23:54:12.618206 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO Proxy environment variables:
Jan 23 23:54:12.624242 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.624242 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:54:12.624242 amazon-ssm-agent[2102]: 2026/01/23 23:54:12 processing appconfig overrides
Jan 23 23:54:12.709041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 23:54:12.719359 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO https_proxy:
Jan 23 23:54:12.732794 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 23:54:12.745771 systemd[1]: Started sshd@0-172.31.21.163:22-4.153.228.146:54152.service - OpenSSH per-connection server daemon (4.153.228.146:54152).
Jan 23 23:54:12.790225 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 23:54:12.791373 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 23:54:12.811995 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 23:54:12.819504 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO http_proxy:
Jan 23 23:54:12.873662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 23:54:12.890921 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 23:54:12.897086 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 23:54:12.901751 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 23:54:12.920251 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO no_proxy:
Jan 23 23:54:13.017800 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO Checking if agent identity type OnPrem can be assumed
Jan 23 23:54:13.116229 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO Checking if agent identity type EC2 can be assumed
Jan 23 23:54:13.215672 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO Agent will take identity from EC2
Jan 23 23:54:13.315225 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:13.401326 sshd[2136]: Accepted publickey for core from 4.153.228.146 port 54152 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:13.409785 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:13.413238 tar[1921]: linux-arm64/README.md
Jan 23 23:54:13.416237 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:13.444971 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 23:54:13.459939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 23:54:13.467319 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 23:54:13.493389 systemd-logind[1913]: New session 1 of user core.
Jan 23 23:54:13.514052 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:13.523305 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 23:54:13.537416 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 23:54:13.562023 (systemd)[2153]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 23:54:13.614311 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 23 23:54:13.715343 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 23 23:54:13.814943 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] Starting Core Agent
Jan 23 23:54:13.838966 systemd[2153]: Queued start job for default target default.target.
Jan 23 23:54:13.851929 systemd[2153]: Created slice app.slice - User Application Slice.
Jan 23 23:54:13.852043 systemd[2153]: Reached target paths.target - Paths.
Jan 23 23:54:13.852079 systemd[2153]: Reached target timers.target - Timers.
Jan 23 23:54:13.855387 systemd[2153]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [Registrar] Starting registrar module
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:12 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:13 INFO [EC2Identity] EC2 registration was successful.
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:13 INFO [CredentialRefresher] credentialRefresher has started
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:13 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 23 23:54:13.893041 amazon-ssm-agent[2102]: 2026-01-23 23:54:13 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 23 23:54:13.898405 systemd[2153]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 23:54:13.898767 systemd[2153]: Reached target sockets.target - Sockets.
Jan 23 23:54:13.898806 systemd[2153]: Reached target basic.target - Basic System.
Jan 23 23:54:13.898910 systemd[2153]: Reached target default.target - Main User Target.
Jan 23 23:54:13.898979 systemd[2153]: Startup finished in 322ms.
Jan 23 23:54:13.899243 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 23:54:13.913526 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 23:54:13.915426 amazon-ssm-agent[2102]: 2026-01-23 23:54:13 INFO [CredentialRefresher] Next credential rotation will be in 31.091656950166666 minutes
Jan 23 23:54:14.323894 systemd[1]: Started sshd@1-172.31.21.163:22-4.153.228.146:54162.service - OpenSSH per-connection server daemon (4.153.228.146:54162).
Jan 23 23:54:14.881088 sshd[2164]: Accepted publickey for core from 4.153.228.146 port 54162 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:14.884310 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:14.894336 systemd-logind[1913]: New session 2 of user core.
Jan 23 23:54:14.903528 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 23:54:14.935769 amazon-ssm-agent[2102]: 2026-01-23 23:54:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 23 23:54:14.980061 ntpd[1908]: Listen normally on 7 eth0 [fe80::49e:24ff:fe59:6551%2]:123
Jan 23 23:54:14.981105 ntpd[1908]: 23 Jan 23:54:14 ntpd[1908]: Listen normally on 7 eth0 [fe80::49e:24ff:fe59:6551%2]:123
Jan 23 23:54:15.037463 amazon-ssm-agent[2102]: 2026-01-23 23:54:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2168) started
Jan 23 23:54:15.138030 amazon-ssm-agent[2102]: 2026-01-23 23:54:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 23 23:54:15.274573 sshd[2164]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:15.280333 systemd[1]: sshd@1-172.31.21.163:22-4.153.228.146:54162.service: Deactivated successfully.
Jan 23 23:54:15.281801 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit.
Jan 23 23:54:15.285401 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 23:54:15.291503 systemd-logind[1913]: Removed session 2.
Jan 23 23:54:15.378758 systemd[1]: Started sshd@2-172.31.21.163:22-4.153.228.146:47974.service - OpenSSH per-connection server daemon (4.153.228.146:47974).
Jan 23 23:54:15.912862 sshd[2182]: Accepted publickey for core from 4.153.228.146 port 47974 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:15.914847 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:15.924586 systemd-logind[1913]: New session 3 of user core.
Jan 23 23:54:15.932562 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 23:54:16.292578 sshd[2182]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:16.300423 systemd[1]: sshd@2-172.31.21.163:22-4.153.228.146:47974.service: Deactivated successfully.
Jan 23 23:54:16.304792 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 23:54:16.308305 systemd-logind[1913]: Session 3 logged out. Waiting for processes to exit.
Jan 23 23:54:16.311080 systemd-logind[1913]: Removed session 3.
Jan 23 23:54:17.042480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:17.048324 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 23:54:17.052978 systemd[1]: Startup finished in 1.265s (kernel) + 8.901s (initrd) + 12.375s (userspace) = 22.542s.
Jan 23 23:54:17.057769 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:18.700846 kubelet[2193]: E0123 23:54:18.700727 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:18.705149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:18.705539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:18.706383 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
Jan 23 23:54:26.401782 systemd[1]: Started sshd@3-172.31.21.163:22-4.153.228.146:48426.service - OpenSSH per-connection server daemon (4.153.228.146:48426).
Jan 23 23:54:26.936463 sshd[2205]: Accepted publickey for core from 4.153.228.146 port 48426 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:26.939300 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:26.947825 systemd-logind[1913]: New session 4 of user core.
Jan 23 23:54:26.957510 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 23:54:27.316381 sshd[2205]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:27.322674 systemd-logind[1913]: Session 4 logged out. Waiting for processes to exit.
Jan 23 23:54:27.323179 systemd[1]: sshd@3-172.31.21.163:22-4.153.228.146:48426.service: Deactivated successfully.
Jan 23 23:54:27.326346 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 23:54:27.328683 systemd-logind[1913]: Removed session 4.
Jan 23 23:54:27.404696 systemd[1]: Started sshd@4-172.31.21.163:22-4.153.228.146:48430.service - OpenSSH per-connection server daemon (4.153.228.146:48430).
Jan 23 23:54:27.899233 sshd[2212]: Accepted publickey for core from 4.153.228.146 port 48430 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:27.902127 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:27.910652 systemd-logind[1913]: New session 5 of user core.
Jan 23 23:54:27.923538 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 23:54:28.246043 sshd[2212]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:28.251679 systemd[1]: sshd@4-172.31.21.163:22-4.153.228.146:48430.service: Deactivated successfully.
Jan 23 23:54:28.255819 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 23:54:28.259635 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit.
Jan 23 23:54:28.262274 systemd-logind[1913]: Removed session 5.
Jan 23 23:54:28.347648 systemd[1]: Started sshd@5-172.31.21.163:22-4.153.228.146:48446.service - OpenSSH per-connection server daemon (4.153.228.146:48446).
Jan 23 23:54:28.801362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:54:28.810593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:28.896004 sshd[2219]: Accepted publickey for core from 4.153.228.146 port 48446 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:28.899495 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:28.911273 systemd-logind[1913]: New session 6 of user core.
Jan 23 23:54:28.917476 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 23:54:29.160629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:29.180141 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:29.265230 kubelet[2230]: E0123 23:54:29.263988 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:29.271570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:29.271943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:29.276652 sshd[2219]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:29.284965 systemd[1]: sshd@5-172.31.21.163:22-4.153.228.146:48446.service: Deactivated successfully.
Jan 23 23:54:29.289059 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 23:54:29.290822 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit.
Jan 23 23:54:29.294394 systemd-logind[1913]: Removed session 6.
Jan 23 23:54:29.379744 systemd[1]: Started sshd@6-172.31.21.163:22-4.153.228.146:48462.service - OpenSSH per-connection server daemon (4.153.228.146:48462).
Jan 23 23:54:29.926949 sshd[2241]: Accepted publickey for core from 4.153.228.146 port 48462 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:29.929649 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:29.938421 systemd-logind[1913]: New session 7 of user core.
Jan 23 23:54:29.950475 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 23:54:30.248513 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 23:54:30.249382 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 23:54:30.894739 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 23:54:30.897340 (dockerd)[2259]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 23:54:31.408060 dockerd[2259]: time="2026-01-23T23:54:31.407584559Z" level=info msg="Starting up"
Jan 23 23:54:31.632567 dockerd[2259]: time="2026-01-23T23:54:31.632497076Z" level=info msg="Loading containers: start."
Jan 23 23:54:31.833278 kernel: Initializing XFRM netlink socket
Jan 23 23:54:31.893527 (udev-worker)[2281]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:32.002983 systemd-networkd[1841]: docker0: Link UP
Jan 23 23:54:32.035036 dockerd[2259]: time="2026-01-23T23:54:32.034949004Z" level=info msg="Loading containers: done."
Jan 23 23:54:32.069426 dockerd[2259]: time="2026-01-23T23:54:32.069337022Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 23:54:32.069661 dockerd[2259]: time="2026-01-23T23:54:32.069530871Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 23 23:54:32.069867 dockerd[2259]: time="2026-01-23T23:54:32.069796816Z" level=info msg="Daemon has completed initialization"
Jan 23 23:54:32.153913 dockerd[2259]: time="2026-01-23T23:54:32.153552749Z" level=info msg="API listen on /run/docker.sock"
Jan 23 23:54:32.156055 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 23:54:33.901914 containerd[1924]: time="2026-01-23T23:54:33.901866398Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 23:54:34.552698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103620386.mount: Deactivated successfully.
Jan 23 23:54:36.084156 containerd[1924]: time="2026-01-23T23:54:36.084039250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:36.085995 containerd[1924]: time="2026-01-23T23:54:36.085914971Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Jan 23 23:54:36.089025 containerd[1924]: time="2026-01-23T23:54:36.088907071Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:36.096259 containerd[1924]: time="2026-01-23T23:54:36.095718110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:36.098886 containerd[1924]: time="2026-01-23T23:54:36.098429822Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.195875108s"
Jan 23 23:54:36.098886 containerd[1924]: time="2026-01-23T23:54:36.098511931Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 23 23:54:36.099782 containerd[1924]: time="2026-01-23T23:54:36.099466192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 23:54:37.405127 containerd[1924]: time="2026-01-23T23:54:37.405032522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:37.407629 containerd[1924]: time="2026-01-23T23:54:37.407544705Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Jan 23 23:54:37.409894 containerd[1924]: time="2026-01-23T23:54:37.409778349Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:37.416679 containerd[1924]: time="2026-01-23T23:54:37.416584191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:37.420518 containerd[1924]: time="2026-01-23T23:54:37.419738527Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.320200695s"
Jan 23 23:54:37.420518 containerd[1924]: time="2026-01-23T23:54:37.419825378Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 23 23:54:37.421694 containerd[1924]: time="2026-01-23T23:54:37.421640602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 23:54:38.451240 containerd[1924]: time="2026-01-23T23:54:38.450225412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.452763 containerd[1924]: time="2026-01-23T23:54:38.452677818Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Jan 23 23:54:38.455153 containerd[1924]: time="2026-01-23T23:54:38.455038065Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.461956 containerd[1924]: time="2026-01-23T23:54:38.461862876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.464645 containerd[1924]: time="2026-01-23T23:54:38.464560192Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.042657235s"
Jan 23 23:54:38.465038 containerd[1924]: time="2026-01-23T23:54:38.464835394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 23 23:54:38.466246 containerd[1924]: time="2026-01-23T23:54:38.465785500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 23:54:39.443221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 23:54:39.459558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:39.792138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046865466.mount: Deactivated successfully.
Jan 23 23:54:39.843827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:39.861094 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:39.958873 kubelet[2474]: E0123 23:54:39.958300 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:39.965584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:39.965956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:40.298695 containerd[1924]: time="2026-01-23T23:54:40.298607030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.300818 containerd[1924]: time="2026-01-23T23:54:40.300432014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Jan 23 23:54:40.302891 containerd[1924]: time="2026-01-23T23:54:40.302799297Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.309202 containerd[1924]: time="2026-01-23T23:54:40.307712044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.309202 containerd[1924]: time="2026-01-23T23:54:40.309007071Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.843155538s"
Jan 23 23:54:40.309202 containerd[1924]: time="2026-01-23T23:54:40.309052610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 23 23:54:40.309741 containerd[1924]: time="2026-01-23T23:54:40.309675001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 23:54:40.906248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901740366.mount: Deactivated successfully.
Jan 23 23:54:42.251228 containerd[1924]: time="2026-01-23T23:54:42.249472453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.253412 containerd[1924]: time="2026-01-23T23:54:42.253359191Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Jan 23 23:54:42.256879 containerd[1924]: time="2026-01-23T23:54:42.256807086Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.271476 containerd[1924]: time="2026-01-23T23:54:42.271396826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.276902 containerd[1924]: time="2026-01-23T23:54:42.275563748Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.965669865s"
Jan 23 23:54:42.276902 containerd[1924]: time="2026-01-23T23:54:42.275634415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 23 23:54:42.277465 containerd[1924]: time="2026-01-23T23:54:42.277417018Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 23:54:42.612371 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 23:54:42.811739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382809421.mount: Deactivated successfully.
Jan 23 23:54:42.822053 containerd[1924]: time="2026-01-23T23:54:42.821974485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.825789 containerd[1924]: time="2026-01-23T23:54:42.825391176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Jan 23 23:54:42.828055 containerd[1924]: time="2026-01-23T23:54:42.827992120Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.833353 containerd[1924]: time="2026-01-23T23:54:42.833300609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:42.835304 containerd[1924]: time="2026-01-23T23:54:42.835235508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 557.646108ms"
Jan 23 23:54:42.835304 containerd[1924]: time="2026-01-23T23:54:42.835297855Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 23 23:54:42.836809 containerd[1924]: time="2026-01-23T23:54:42.836667584Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 23:54:43.420391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387012513.mount: Deactivated successfully.
Jan 23 23:54:46.288745 containerd[1924]: time="2026-01-23T23:54:46.288683345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:46.292876 containerd[1924]: time="2026-01-23T23:54:46.292804717Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Jan 23 23:54:46.295983 containerd[1924]: time="2026-01-23T23:54:46.295911113Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:46.301538 containerd[1924]: time="2026-01-23T23:54:46.301450165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:46.304454 containerd[1924]: time="2026-01-23T23:54:46.304168900Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.467437264s"
Jan 23
23:54:46.304454 containerd[1924]: time="2026-01-23T23:54:46.304255884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 23:54:50.193384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:54:50.204359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:50.564499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:50.565601 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:50.646641 kubelet[2628]: E0123 23:54:50.646557 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:50.651529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:50.651890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:54:54.689981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:54.698756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:54.767389 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-7.scope)... Jan 23 23:54:54.767674 systemd[1]: Reloading... Jan 23 23:54:55.022249 zram_generator::config[2685]: No configuration found. Jan 23 23:54:55.303361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 23 23:54:55.494662 systemd[1]: Reloading finished in 726 ms. Jan 23 23:54:55.588464 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:54:55.588666 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:54:55.589283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:55.598926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:55.945560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:55.956938 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:54:56.039985 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:54:56.039985 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:54:56.041568 kubelet[2745]: I0123 23:54:56.041445 2745 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:54:56.531612 update_engine[1914]: I20260123 23:54:56.524453 1914 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:54:56.663541 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2765) Jan 23 23:54:57.248409 kubelet[2745]: I0123 23:54:57.248318 2745 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:54:57.248409 kubelet[2745]: I0123 23:54:57.248379 2745 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:54:57.251052 kubelet[2745]: I0123 23:54:57.250961 2745 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:54:57.251052 kubelet[2745]: I0123 23:54:57.251025 2745 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:54:57.251622 kubelet[2745]: I0123 23:54:57.251542 2745 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:54:57.266035 kubelet[2745]: E0123 23:54:57.265969 2745 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:54:57.268126 kubelet[2745]: I0123 23:54:57.268032 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:54:57.278432 kubelet[2745]: E0123 23:54:57.277687 2745 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:54:57.278432 kubelet[2745]: I0123 23:54:57.277865 2745 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Jan 23 23:54:57.283862 kubelet[2745]: I0123 23:54:57.283816 2745 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 23:54:57.284687 kubelet[2745]: I0123 23:54:57.284626 2745 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:54:57.285127 kubelet[2745]: I0123 23:54:57.284831 2745 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 
23:54:57.285618 kubelet[2745]: I0123 23:54:57.285458 2745 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:54:57.285618 kubelet[2745]: I0123 23:54:57.285514 2745 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:54:57.286405 kubelet[2745]: I0123 23:54:57.285933 2745 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:54:57.294590 kubelet[2745]: I0123 23:54:57.294501 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:54:57.299253 kubelet[2745]: I0123 23:54:57.297215 2745 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:54:57.299253 kubelet[2745]: I0123 23:54:57.297296 2745 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:54:57.299253 kubelet[2745]: I0123 23:54:57.297352 2745 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:54:57.299253 kubelet[2745]: I0123 23:54:57.297383 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:54:57.299253 kubelet[2745]: E0123 23:54:57.298121 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-163&limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:54:57.300099 kubelet[2745]: E0123 23:54:57.300015 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:54:57.300953 kubelet[2745]: I0123 23:54:57.300878 2745 kuberuntime_manager.go:291] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:54:57.302318 kubelet[2745]: I0123 23:54:57.302256 2745 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:54:57.302482 kubelet[2745]: I0123 23:54:57.302334 2745 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 23:54:57.302482 kubelet[2745]: W0123 23:54:57.302440 2745 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:54:57.308608 kubelet[2745]: I0123 23:54:57.308534 2745 server.go:1262] "Started kubelet" Jan 23 23:54:57.321296 kubelet[2745]: E0123 23:54:57.317559 2745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-163.188d81648240e210 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-163,UID:ip-172-31-21-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-163,},FirstTimestamp:2026-01-23 23:54:57.30847592 +0000 UTC m=+1.344554651,LastTimestamp:2026-01-23 23:54:57.30847592 +0000 UTC m=+1.344554651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-163,}" Jan 23 23:54:57.321773 kubelet[2745]: I0123 23:54:57.321699 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:54:57.331882 kubelet[2745]: I0123 23:54:57.331800 2745 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:54:57.334647 
kubelet[2745]: I0123 23:54:57.334583 2745 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:54:57.335275 kubelet[2745]: E0123 23:54:57.335146 2745 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-21-163\" not found" Jan 23 23:54:57.336539 kubelet[2745]: I0123 23:54:57.336463 2745 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:54:57.336710 kubelet[2745]: I0123 23:54:57.336639 2745 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:54:57.337105 kubelet[2745]: I0123 23:54:57.337054 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:54:57.338889 kubelet[2745]: I0123 23:54:57.338796 2745 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:54:57.345555 kubelet[2745]: I0123 23:54:57.345240 2745 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 23:54:57.346232 kubelet[2745]: I0123 23:54:57.346073 2745 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:54:57.346364 kubelet[2745]: I0123 23:54:57.346283 2745 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:54:57.346677 kubelet[2745]: I0123 23:54:57.346599 2745 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:54:57.350561 kubelet[2745]: I0123 23:54:57.350469 2745 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:54:57.350756 kubelet[2745]: I0123 23:54:57.350706 2745 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:54:57.351621 kubelet[2745]: E0123 23:54:57.351475 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:54:57.351785 kubelet[2745]: E0123 23:54:57.351679 2745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": dial tcp 172.31.21.163:6443: connect: connection refused" interval="200ms" Jan 23 23:54:57.354853 kubelet[2745]: I0123 23:54:57.354635 2745 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:54:57.388301 kubelet[2745]: I0123 23:54:57.388068 2745 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:54:57.388301 kubelet[2745]: I0123 23:54:57.388101 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" 
Jan 23 23:54:57.388301 kubelet[2745]: I0123 23:54:57.388138 2745 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:54:57.394964 kubelet[2745]: I0123 23:54:57.394914 2745 policy_none.go:49] "None policy: Start" Jan 23 23:54:57.395341 kubelet[2745]: I0123 23:54:57.395249 2745 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:54:57.395603 kubelet[2745]: I0123 23:54:57.395301 2745 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:54:57.401837 kubelet[2745]: I0123 23:54:57.400575 2745 policy_none.go:47] "Start" Jan 23 23:54:57.414007 kubelet[2745]: I0123 23:54:57.413958 2745 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:54:57.414380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:54:57.415023 kubelet[2745]: I0123 23:54:57.414970 2745 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:54:57.415840 kubelet[2745]: I0123 23:54:57.415321 2745 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:54:57.415840 kubelet[2745]: E0123 23:54:57.415439 2745 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:54:57.421841 kubelet[2745]: E0123 23:54:57.421615 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:54:57.435429 kubelet[2745]: E0123 23:54:57.435358 2745 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-21-163\" not found" Jan 23 23:54:57.437377 systemd[1]: Created slice kubepods-burstable.slice - 
libcontainer container kubepods-burstable.slice. Jan 23 23:54:57.445008 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:54:57.456562 kubelet[2745]: E0123 23:54:57.455690 2745 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:54:57.456562 kubelet[2745]: I0123 23:54:57.456026 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:54:57.456562 kubelet[2745]: I0123 23:54:57.456054 2745 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:54:57.457956 kubelet[2745]: I0123 23:54:57.457898 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:54:57.460053 kubelet[2745]: E0123 23:54:57.459988 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:54:57.460296 kubelet[2745]: E0123 23:54:57.460067 2745 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-163\" not found" Jan 23 23:54:57.538409 kubelet[2745]: I0123 23:54:57.536978 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32670ae5de9e4fb069b4dea2eaaae753-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-163\" (UID: \"32670ae5de9e4fb069b4dea2eaaae753\") " pod="kube-system/kube-scheduler-ip-172-31-21-163" Jan 23 23:54:57.538409 kubelet[2745]: I0123 23:54:57.537050 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " 
pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:54:57.538409 kubelet[2745]: I0123 23:54:57.537119 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:54:57.538409 kubelet[2745]: I0123 23:54:57.537168 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:54:57.538409 kubelet[2745]: I0123 23:54:57.537234 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:54:57.538815 kubelet[2745]: I0123 23:54:57.537272 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:54:57.538815 kubelet[2745]: I0123 23:54:57.537311 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-ca-certs\") pod 
\"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:54:57.538815 kubelet[2745]: I0123 23:54:57.537348 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:54:57.538815 kubelet[2745]: I0123 23:54:57.537387 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:54:57.546445 systemd[1]: Created slice kubepods-burstable-pod86fc39f5af262685e759e329a9c4cb93.slice - libcontainer container kubepods-burstable-pod86fc39f5af262685e759e329a9c4cb93.slice. 
Jan 23 23:54:57.552976 kubelet[2745]: E0123 23:54:57.552922 2745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": dial tcp 172.31.21.163:6443: connect: connection refused" interval="400ms" Jan 23 23:54:57.556224 kubelet[2745]: E0123 23:54:57.556129 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:54:57.559419 kubelet[2745]: I0123 23:54:57.559373 2745 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:54:57.560482 kubelet[2745]: E0123 23:54:57.560421 2745 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.163:6443/api/v1/nodes\": dial tcp 172.31.21.163:6443: connect: connection refused" node="ip-172-31-21-163" Jan 23 23:54:57.561150 systemd[1]: Created slice kubepods-burstable-podb207a04c492e1302948a4dbbf03d948c.slice - libcontainer container kubepods-burstable-podb207a04c492e1302948a4dbbf03d948c.slice. Jan 23 23:54:57.566006 kubelet[2745]: E0123 23:54:57.565943 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:54:57.582308 systemd[1]: Created slice kubepods-burstable-pod32670ae5de9e4fb069b4dea2eaaae753.slice - libcontainer container kubepods-burstable-pod32670ae5de9e4fb069b4dea2eaaae753.slice. 
Jan 23 23:54:57.586915 kubelet[2745]: E0123 23:54:57.586831 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:54:57.763883 kubelet[2745]: I0123 23:54:57.763807 2745 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:54:57.764481 kubelet[2745]: E0123 23:54:57.764416 2745 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.163:6443/api/v1/nodes\": dial tcp 172.31.21.163:6443: connect: connection refused" node="ip-172-31-21-163" Jan 23 23:54:57.864738 containerd[1924]: time="2026-01-23T23:54:57.864122969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-163,Uid:86fc39f5af262685e759e329a9c4cb93,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:57.873524 containerd[1924]: time="2026-01-23T23:54:57.873383449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-163,Uid:b207a04c492e1302948a4dbbf03d948c,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:57.899827 containerd[1924]: time="2026-01-23T23:54:57.899608959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-163,Uid:32670ae5de9e4fb069b4dea2eaaae753,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:57.954256 kubelet[2745]: E0123 23:54:57.953804 2745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": dial tcp 172.31.21.163:6443: connect: connection refused" interval="800ms" Jan 23 23:54:58.135284 kubelet[2745]: E0123 23:54:58.135042 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.163:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:54:58.168037 kubelet[2745]: I0123 23:54:58.167521 2745 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:54:58.168037 kubelet[2745]: E0123 23:54:58.167987 2745 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.163:6443/api/v1/nodes\": dial tcp 172.31.21.163:6443: connect: connection refused" node="ip-172-31-21-163" Jan 23 23:54:58.396135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188161537.mount: Deactivated successfully. Jan 23 23:54:58.415882 containerd[1924]: time="2026-01-23T23:54:58.415751381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:58.418375 containerd[1924]: time="2026-01-23T23:54:58.418283375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:58.420957 containerd[1924]: time="2026-01-23T23:54:58.420780995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:54:58.423447 containerd[1924]: time="2026-01-23T23:54:58.423367627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:54:58.427264 containerd[1924]: time="2026-01-23T23:54:58.426684393Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:58.431155 containerd[1924]: time="2026-01-23T23:54:58.431074399Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:58.431497 containerd[1924]: time="2026-01-23T23:54:58.431445180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:54:58.438644 containerd[1924]: time="2026-01-23T23:54:58.438548062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:58.440891 containerd[1924]: time="2026-01-23T23:54:58.440532065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.011965ms" Jan 23 23:54:58.445887 containerd[1924]: time="2026-01-23T23:54:58.445775962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.010204ms" Jan 23 23:54:58.485986 containerd[1924]: time="2026-01-23T23:54:58.485422824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 621.126597ms" Jan 23 23:54:58.690039 containerd[1924]: time="2026-01-23T23:54:58.689382782Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:58.690039 containerd[1924]: time="2026-01-23T23:54:58.689561900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:58.690039 containerd[1924]: time="2026-01-23T23:54:58.689595036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.690894 containerd[1924]: time="2026-01-23T23:54:58.690226444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.692586 containerd[1924]: time="2026-01-23T23:54:58.691765397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:58.694695 containerd[1924]: time="2026-01-23T23:54:58.692484400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:58.695018 containerd[1924]: time="2026-01-23T23:54:58.694891387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.696896 containerd[1924]: time="2026-01-23T23:54:58.695991953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.697426 containerd[1924]: time="2026-01-23T23:54:58.697135093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:58.697694 containerd[1924]: time="2026-01-23T23:54:58.697458138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:58.697694 containerd[1924]: time="2026-01-23T23:54:58.697548891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.698154 containerd[1924]: time="2026-01-23T23:54:58.697980987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:58.711131 kubelet[2745]: E0123 23:54:58.711066 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:54:58.740280 kubelet[2745]: E0123 23:54:58.737883 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-163&limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:54:58.748978 systemd[1]: Started cri-containerd-ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727.scope - libcontainer container ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727. 
Jan 23 23:54:58.765785 kubelet[2745]: E0123 23:54:58.761997 2745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": dial tcp 172.31.21.163:6443: connect: connection refused" interval="1.6s" Jan 23 23:54:58.766458 systemd[1]: Started cri-containerd-ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a.scope - libcontainer container ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a. Jan 23 23:54:58.792035 systemd[1]: Started cri-containerd-873f428b8b62bf56db4e136644716ab49074e21a9cb4c1bcd1b7c092c6ab808a.scope - libcontainer container 873f428b8b62bf56db4e136644716ab49074e21a9cb4c1bcd1b7c092c6ab808a. Jan 23 23:54:58.892305 containerd[1924]: time="2026-01-23T23:54:58.892080214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-163,Uid:b207a04c492e1302948a4dbbf03d948c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a\"" Jan 23 23:54:58.915298 containerd[1924]: time="2026-01-23T23:54:58.914759080Z" level=info msg="CreateContainer within sandbox \"ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:54:58.930163 containerd[1924]: time="2026-01-23T23:54:58.930041709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-163,Uid:32670ae5de9e4fb069b4dea2eaaae753,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727\"" Jan 23 23:54:58.942062 containerd[1924]: time="2026-01-23T23:54:58.941638353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-163,Uid:86fc39f5af262685e759e329a9c4cb93,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"873f428b8b62bf56db4e136644716ab49074e21a9cb4c1bcd1b7c092c6ab808a\"" Jan 23 23:54:58.946714 containerd[1924]: time="2026-01-23T23:54:58.946547714Z" level=info msg="CreateContainer within sandbox \"ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:54:58.958061 containerd[1924]: time="2026-01-23T23:54:58.957972228Z" level=info msg="CreateContainer within sandbox \"ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a\"" Jan 23 23:54:58.961256 containerd[1924]: time="2026-01-23T23:54:58.960513286Z" level=info msg="StartContainer for \"277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a\"" Jan 23 23:54:58.961256 containerd[1924]: time="2026-01-23T23:54:58.961222348Z" level=info msg="CreateContainer within sandbox \"873f428b8b62bf56db4e136644716ab49074e21a9cb4c1bcd1b7c092c6ab808a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:54:58.961450 kubelet[2745]: E0123 23:54:58.961137 2745 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:54:58.972097 kubelet[2745]: I0123 23:54:58.972044 2745 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:54:58.973560 kubelet[2745]: E0123 23:54:58.973486 2745 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.163:6443/api/v1/nodes\": dial tcp 172.31.21.163:6443: connect: connection refused" node="ip-172-31-21-163" Jan 23 23:54:59.000114 containerd[1924]: 
time="2026-01-23T23:54:58.999826825Z" level=info msg="CreateContainer within sandbox \"ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d\"" Jan 23 23:54:59.001274 containerd[1924]: time="2026-01-23T23:54:59.000915277Z" level=info msg="StartContainer for \"d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d\"" Jan 23 23:54:59.010450 containerd[1924]: time="2026-01-23T23:54:59.010361202Z" level=info msg="CreateContainer within sandbox \"873f428b8b62bf56db4e136644716ab49074e21a9cb4c1bcd1b7c092c6ab808a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08280e4df9d2b58e4c1205efe2038759ebd4f084e60b6db065339e8f4bb3c55c\"" Jan 23 23:54:59.011871 containerd[1924]: time="2026-01-23T23:54:59.011806508Z" level=info msg="StartContainer for \"08280e4df9d2b58e4c1205efe2038759ebd4f084e60b6db065339e8f4bb3c55c\"" Jan 23 23:54:59.039206 systemd[1]: Started cri-containerd-277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a.scope - libcontainer container 277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a. Jan 23 23:54:59.110976 systemd[1]: Started cri-containerd-d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d.scope - libcontainer container d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d. Jan 23 23:54:59.124554 systemd[1]: Started cri-containerd-08280e4df9d2b58e4c1205efe2038759ebd4f084e60b6db065339e8f4bb3c55c.scope - libcontainer container 08280e4df9d2b58e4c1205efe2038759ebd4f084e60b6db065339e8f4bb3c55c. 
Jan 23 23:54:59.208365 containerd[1924]: time="2026-01-23T23:54:59.208236687Z" level=info msg="StartContainer for \"277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a\" returns successfully" Jan 23 23:54:59.266088 containerd[1924]: time="2026-01-23T23:54:59.265990314Z" level=info msg="StartContainer for \"08280e4df9d2b58e4c1205efe2038759ebd4f084e60b6db065339e8f4bb3c55c\" returns successfully" Jan 23 23:54:59.289815 kubelet[2745]: E0123 23:54:59.289696 2745 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:54:59.353384 containerd[1924]: time="2026-01-23T23:54:59.353160190Z" level=info msg="StartContainer for \"d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d\" returns successfully" Jan 23 23:54:59.451280 kubelet[2745]: E0123 23:54:59.449893 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:54:59.460256 kubelet[2745]: E0123 23:54:59.460050 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:54:59.464795 kubelet[2745]: E0123 23:54:59.464741 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:00.467281 kubelet[2745]: E0123 23:55:00.466415 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:00.467281 kubelet[2745]: 
E0123 23:55:00.467133 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:00.577874 kubelet[2745]: I0123 23:55:00.577822 2745 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:55:01.469227 kubelet[2745]: E0123 23:55:01.468011 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:01.471602 kubelet[2745]: E0123 23:55:01.468752 2745 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:03.907942 kubelet[2745]: E0123 23:55:03.907751 2745 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-163\" not found" node="ip-172-31-21-163" Jan 23 23:55:04.043268 kubelet[2745]: E0123 23:55:04.042854 2745 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-163.188d81648240e210 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-163,UID:ip-172-31-21-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-163,},FirstTimestamp:2026-01-23 23:54:57.30847592 +0000 UTC m=+1.344554651,LastTimestamp:2026-01-23 23:54:57.30847592 +0000 UTC m=+1.344554651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-163,}" Jan 23 23:55:04.085627 kubelet[2745]: I0123 23:55:04.085252 2745 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-163" Jan 23 23:55:04.085627 
kubelet[2745]: E0123 23:55:04.085314 2745 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-163\": node \"ip-172-31-21-163\" not found" Jan 23 23:55:04.136812 kubelet[2745]: I0123 23:55:04.136753 2745 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:04.171334 kubelet[2745]: E0123 23:55:04.169752 2745 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:04.171334 kubelet[2745]: I0123 23:55:04.169811 2745 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:04.180632 kubelet[2745]: E0123 23:55:04.180272 2745 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:04.180632 kubelet[2745]: I0123 23:55:04.180329 2745 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-163" Jan 23 23:55:04.188144 kubelet[2745]: E0123 23:55:04.188088 2745 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-163\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-163" Jan 23 23:55:04.303045 kubelet[2745]: I0123 23:55:04.302991 2745 apiserver.go:52] "Watching apiserver" Jan 23 23:55:04.336849 kubelet[2745]: I0123 23:55:04.336789 2745 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:55:05.831692 kubelet[2745]: I0123 23:55:05.831634 2745 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:06.139150 systemd[1]: Reloading requested from client PID 3132 ('systemctl') (unit session-7.scope)... Jan 23 23:55:06.139216 systemd[1]: Reloading... Jan 23 23:55:06.301247 zram_generator::config[3171]: No configuration found. Jan 23 23:55:06.610251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:06.840536 systemd[1]: Reloading finished in 700 ms. Jan 23 23:55:06.927675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:06.946453 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:55:06.948312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:06.949378 systemd[1]: kubelet.service: Consumed 2.027s CPU time, 120.8M memory peak, 0B memory swap peak. Jan 23 23:55:06.960826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:07.428310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:07.448947 (kubelet)[3232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:07.564317 kubelet[3232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:07.564317 kubelet[3232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:55:07.564317 kubelet[3232]: I0123 23:55:07.563945 3232 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:07.583249 kubelet[3232]: I0123 23:55:07.582537 3232 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:55:07.583249 kubelet[3232]: I0123 23:55:07.582588 3232 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:07.583249 kubelet[3232]: I0123 23:55:07.582653 3232 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:55:07.583249 kubelet[3232]: I0123 23:55:07.582668 3232 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:55:07.583249 kubelet[3232]: I0123 23:55:07.583109 3232 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:55:07.592811 kubelet[3232]: I0123 23:55:07.590876 3232 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:55:07.602107 kubelet[3232]: I0123 23:55:07.602020 3232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:07.612480 kubelet[3232]: E0123 23:55:07.612355 3232 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:07.613127 kubelet[3232]: I0123 23:55:07.612559 3232 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:07.621679 kubelet[3232]: I0123 23:55:07.621637 3232 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 23:55:07.622916 kubelet[3232]: I0123 23:55:07.622516 3232 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:07.622916 kubelet[3232]: I0123 23:55:07.622578 3232 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:55:07.622916 kubelet[3232]: I0123 23:55:07.622840 3232 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
23:55:07.622916 kubelet[3232]: I0123 23:55:07.622859 3232 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:55:07.624245 kubelet[3232]: I0123 23:55:07.623501 3232 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:55:07.625558 kubelet[3232]: I0123 23:55:07.625497 3232 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:07.626132 kubelet[3232]: I0123 23:55:07.626099 3232 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:55:07.626460 kubelet[3232]: I0123 23:55:07.626430 3232 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:07.626779 kubelet[3232]: I0123 23:55:07.626723 3232 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:55:07.627117 kubelet[3232]: I0123 23:55:07.627000 3232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:07.633367 kubelet[3232]: I0123 23:55:07.632575 3232 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:07.634661 kubelet[3232]: I0123 23:55:07.634242 3232 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:55:07.635012 kubelet[3232]: I0123 23:55:07.634968 3232 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 23:55:07.643923 kubelet[3232]: I0123 23:55:07.643741 3232 server.go:1262] "Started kubelet" Jan 23 23:55:07.649437 kubelet[3232]: I0123 23:55:07.649144 3232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:07.660103 kubelet[3232]: I0123 23:55:07.660021 3232 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:55:07.662507 kubelet[3232]: I0123 23:55:07.662101 3232 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:07.663086 kubelet[3232]: I0123 23:55:07.662371 3232 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:55:07.668781 kubelet[3232]: I0123 23:55:07.668497 3232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:07.670684 kubelet[3232]: I0123 23:55:07.670496 3232 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:55:07.673023 kubelet[3232]: E0123 23:55:07.672469 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-21-163\" not found" Jan 23 23:55:07.691053 kubelet[3232]: I0123 23:55:07.690666 3232 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:55:07.691972 kubelet[3232]: I0123 23:55:07.691900 3232 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:55:07.697808 kubelet[3232]: I0123 23:55:07.697642 3232 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:55:07.710724 kubelet[3232]: I0123 23:55:07.710292 3232 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:55:07.734050 kubelet[3232]: I0123 23:55:07.733977 3232 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:55:07.734713 kubelet[3232]: I0123 23:55:07.734635 3232 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:07.760854 kubelet[3232]: I0123 23:55:07.760794 3232 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:07.764071 kubelet[3232]: I0123 23:55:07.763448 3232 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 23:55:07.764071 kubelet[3232]: I0123 23:55:07.763495 3232 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:55:07.764071 kubelet[3232]: I0123 23:55:07.763539 3232 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:55:07.764071 kubelet[3232]: E0123 23:55:07.763621 3232 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:07.772803 kubelet[3232]: E0123 23:55:07.772745 3232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-21-163\" not found" Jan 23 23:55:07.786591 kubelet[3232]: I0123 23:55:07.786521 3232 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:55:07.792611 kubelet[3232]: E0123 23:55:07.791778 3232 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:07.864137 kubelet[3232]: E0123 23:55:07.863790 3232 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:55:07.922249 kubelet[3232]: I0123 23:55:07.921823 3232 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:07.922249 kubelet[3232]: I0123 23:55:07.921872 3232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:07.922249 kubelet[3232]: I0123 23:55:07.921914 3232 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:07.922716 kubelet[3232]: I0123 23:55:07.922669 3232 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.922844 3232 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.922905 3232 policy_none.go:49] "None policy: Start" Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.922928 3232 
memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.922957 3232 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.923324 3232 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 23:55:07.924257 kubelet[3232]: I0123 23:55:07.923355 3232 policy_none.go:47] "Start" Jan 23 23:55:07.939558 kubelet[3232]: E0123 23:55:07.939505 3232 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:55:07.939876 kubelet[3232]: I0123 23:55:07.939824 3232 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:07.940027 kubelet[3232]: I0123 23:55:07.939863 3232 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:07.940738 kubelet[3232]: I0123 23:55:07.940655 3232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:07.952416 kubelet[3232]: E0123 23:55:07.951454 3232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:55:08.058719 kubelet[3232]: I0123 23:55:08.058641 3232 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-163" Jan 23 23:55:08.065653 kubelet[3232]: I0123 23:55:08.065584 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-163" Jan 23 23:55:08.068258 kubelet[3232]: I0123 23:55:08.066782 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.068258 kubelet[3232]: I0123 23:55:08.067721 3232 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:08.093846 kubelet[3232]: I0123 23:55:08.093762 3232 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-163" Jan 23 23:55:08.093981 kubelet[3232]: I0123 23:55:08.093915 3232 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-163" Jan 23 23:55:08.103736 kubelet[3232]: E0123 23:55:08.103669 3232 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-163\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.111674 kubelet[3232]: I0123 23:55:08.111598 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.111862 kubelet[3232]: I0123 23:55:08.111682 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.111862 kubelet[3232]: I0123 23:55:08.111753 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32670ae5de9e4fb069b4dea2eaaae753-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-163\" (UID: \"32670ae5de9e4fb069b4dea2eaaae753\") " pod="kube-system/kube-scheduler-ip-172-31-21-163" Jan 23 23:55:08.111862 kubelet[3232]: I0123 23:55:08.111807 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-ca-certs\") pod \"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:08.111862 kubelet[3232]: I0123 23:55:08.111843 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:08.112122 kubelet[3232]: I0123 23:55:08.111901 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86fc39f5af262685e759e329a9c4cb93-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-163\" (UID: \"86fc39f5af262685e759e329a9c4cb93\") " pod="kube-system/kube-apiserver-ip-172-31-21-163" Jan 23 23:55:08.112122 kubelet[3232]: I0123 23:55:08.111940 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.112122 kubelet[3232]: I0123 23:55:08.111977 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.112122 kubelet[3232]: I0123 23:55:08.112016 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b207a04c492e1302948a4dbbf03d948c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-163\" (UID: \"b207a04c492e1302948a4dbbf03d948c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-163" Jan 23 23:55:08.629302 kubelet[3232]: I0123 23:55:08.629215 3232 apiserver.go:52] "Watching apiserver" Jan 23 23:55:08.691609 kubelet[3232]: I0123 23:55:08.691471 3232 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:55:08.770227 kubelet[3232]: I0123 23:55:08.769694 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-163" podStartSLOduration=0.769671382 podStartE2EDuration="769.671382ms" podCreationTimestamp="2026-01-23 23:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:08.750359815 +0000 UTC m=+1.291992539" watchObservedRunningTime="2026-01-23 23:55:08.769671382 +0000 UTC m=+1.311304094" Jan 23 23:55:08.786467 kubelet[3232]: I0123 23:55:08.786358 3232 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-163" podStartSLOduration=0.786337091 podStartE2EDuration="786.337091ms" podCreationTimestamp="2026-01-23 23:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:08.770558349 +0000 UTC m=+1.312191097" watchObservedRunningTime="2026-01-23 23:55:08.786337091 +0000 UTC m=+1.327969803" Jan 23 23:55:08.786711 kubelet[3232]: I0123 23:55:08.786536 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-163" podStartSLOduration=3.78652394 podStartE2EDuration="3.78652394s" podCreationTimestamp="2026-01-23 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:08.78592657 +0000 UTC m=+1.327559318" watchObservedRunningTime="2026-01-23 23:55:08.78652394 +0000 UTC m=+1.328156676" Jan 23 23:55:10.327790 sudo[2244]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:10.408001 sshd[2241]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:10.414735 systemd[1]: sshd@6-172.31.21.163:22-4.153.228.146:48462.service: Deactivated successfully. Jan 23 23:55:10.418352 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:55:10.418728 systemd[1]: session-7.scope: Consumed 10.499s CPU time, 153.1M memory peak, 0B memory swap peak. Jan 23 23:55:10.421977 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:55:10.424987 systemd-logind[1913]: Removed session 7. 
Jan 23 23:55:12.326945 kubelet[3232]: I0123 23:55:12.326527 3232 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:55:12.328136 containerd[1924]: time="2026-01-23T23:55:12.327425391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:55:12.333745 kubelet[3232]: I0123 23:55:12.329333 3232 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:55:13.055230 kubelet[3232]: E0123 23:55:13.054159 3232 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-t8lhs\" is forbidden: User \"system:node:ip-172-31-21-163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-163' and this object" podUID="97b72fc0-d33c-4e82-a86e-960039348f18" pod="kube-system/kube-proxy-t8lhs" Jan 23 23:55:13.059592 systemd[1]: Created slice kubepods-besteffort-pod97b72fc0_d33c_4e82_a86e_960039348f18.slice - libcontainer container kubepods-besteffort-pod97b72fc0_d33c_4e82_a86e_960039348f18.slice. Jan 23 23:55:13.095171 systemd[1]: Created slice kubepods-burstable-pod22d7b97a_16ed_4979_82ca_706c6f272538.slice - libcontainer container kubepods-burstable-pod22d7b97a_16ed_4979_82ca_706c6f272538.slice. 
Jan 23 23:55:13.144332 kubelet[3232]: I0123 23:55:13.144255 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/22d7b97a-16ed-4979-82ca-706c6f272538-cni\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.144511 kubelet[3232]: I0123 23:55:13.144340 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97b72fc0-d33c-4e82-a86e-960039348f18-lib-modules\") pod \"kube-proxy-t8lhs\" (UID: \"97b72fc0-d33c-4e82-a86e-960039348f18\") " pod="kube-system/kube-proxy-t8lhs" Jan 23 23:55:13.144511 kubelet[3232]: I0123 23:55:13.144385 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqlm\" (UniqueName: \"kubernetes.io/projected/97b72fc0-d33c-4e82-a86e-960039348f18-kube-api-access-xwqlm\") pod \"kube-proxy-t8lhs\" (UID: \"97b72fc0-d33c-4e82-a86e-960039348f18\") " pod="kube-system/kube-proxy-t8lhs" Jan 23 23:55:13.144511 kubelet[3232]: I0123 23:55:13.144443 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/22d7b97a-16ed-4979-82ca-706c6f272538-flannel-cfg\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.144511 kubelet[3232]: I0123 23:55:13.144479 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22d7b97a-16ed-4979-82ca-706c6f272538-xtables-lock\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.144768 kubelet[3232]: I0123 23:55:13.144527 3232 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97b72fc0-d33c-4e82-a86e-960039348f18-kube-proxy\") pod \"kube-proxy-t8lhs\" (UID: \"97b72fc0-d33c-4e82-a86e-960039348f18\") " pod="kube-system/kube-proxy-t8lhs" Jan 23 23:55:13.144768 kubelet[3232]: I0123 23:55:13.144562 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97b72fc0-d33c-4e82-a86e-960039348f18-xtables-lock\") pod \"kube-proxy-t8lhs\" (UID: \"97b72fc0-d33c-4e82-a86e-960039348f18\") " pod="kube-system/kube-proxy-t8lhs" Jan 23 23:55:13.144768 kubelet[3232]: I0123 23:55:13.144607 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/22d7b97a-16ed-4979-82ca-706c6f272538-cni-plugin\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.144768 kubelet[3232]: I0123 23:55:13.144645 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gj7\" (UniqueName: \"kubernetes.io/projected/22d7b97a-16ed-4979-82ca-706c6f272538-kube-api-access-x7gj7\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.144768 kubelet[3232]: I0123 23:55:13.144682 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/22d7b97a-16ed-4979-82ca-706c6f272538-run\") pod \"kube-flannel-ds-zzd5q\" (UID: \"22d7b97a-16ed-4979-82ca-706c6f272538\") " pod="kube-flannel/kube-flannel-ds-zzd5q" Jan 23 23:55:13.263647 kubelet[3232]: E0123 23:55:13.263576 3232 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: 
configmap "kube-root-ca.crt" not found Jan 23 23:55:13.263816 kubelet[3232]: E0123 23:55:13.263634 3232 projected.go:196] Error preparing data for projected volume kube-api-access-xwqlm for pod kube-system/kube-proxy-t8lhs: configmap "kube-root-ca.crt" not found Jan 23 23:55:13.263816 kubelet[3232]: E0123 23:55:13.263769 3232 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97b72fc0-d33c-4e82-a86e-960039348f18-kube-api-access-xwqlm podName:97b72fc0-d33c-4e82-a86e-960039348f18 nodeName:}" failed. No retries permitted until 2026-01-23 23:55:13.763735026 +0000 UTC m=+6.305367738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xwqlm" (UniqueName: "kubernetes.io/projected/97b72fc0-d33c-4e82-a86e-960039348f18-kube-api-access-xwqlm") pod "kube-proxy-t8lhs" (UID: "97b72fc0-d33c-4e82-a86e-960039348f18") : configmap "kube-root-ca.crt" not found Jan 23 23:55:13.266520 kubelet[3232]: E0123 23:55:13.266438 3232 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 23:55:13.266520 kubelet[3232]: E0123 23:55:13.266499 3232 projected.go:196] Error preparing data for projected volume kube-api-access-x7gj7 for pod kube-flannel/kube-flannel-ds-zzd5q: configmap "kube-root-ca.crt" not found Jan 23 23:55:13.267439 kubelet[3232]: E0123 23:55:13.267377 3232 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22d7b97a-16ed-4979-82ca-706c6f272538-kube-api-access-x7gj7 podName:22d7b97a-16ed-4979-82ca-706c6f272538 nodeName:}" failed. No retries permitted until 2026-01-23 23:55:13.767242339 +0000 UTC m=+6.308875039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x7gj7" (UniqueName: "kubernetes.io/projected/22d7b97a-16ed-4979-82ca-706c6f272538-kube-api-access-x7gj7") pod "kube-flannel-ds-zzd5q" (UID: "22d7b97a-16ed-4979-82ca-706c6f272538") : configmap "kube-root-ca.crt" not found Jan 23 23:55:13.981925 containerd[1924]: time="2026-01-23T23:55:13.981681969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8lhs,Uid:97b72fc0-d33c-4e82-a86e-960039348f18,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:14.016994 containerd[1924]: time="2026-01-23T23:55:14.014655248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zzd5q,Uid:22d7b97a-16ed-4979-82ca-706c6f272538,Namespace:kube-flannel,Attempt:0,}" Jan 23 23:55:14.050120 containerd[1924]: time="2026-01-23T23:55:14.048438139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:14.050120 containerd[1924]: time="2026-01-23T23:55:14.048888208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:14.050120 containerd[1924]: time="2026-01-23T23:55:14.049905621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:14.050607 containerd[1924]: time="2026-01-23T23:55:14.050439900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:14.104564 containerd[1924]: time="2026-01-23T23:55:14.103712827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:14.104564 containerd[1924]: time="2026-01-23T23:55:14.103819596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:14.104564 containerd[1924]: time="2026-01-23T23:55:14.104059452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:14.107599 systemd[1]: Started cri-containerd-d196e7239d63fac6d7a34335ee7d314e1fa8601a90585b81f9997c3033c06542.scope - libcontainer container d196e7239d63fac6d7a34335ee7d314e1fa8601a90585b81f9997c3033c06542. Jan 23 23:55:14.110111 containerd[1924]: time="2026-01-23T23:55:14.107228244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:14.155961 systemd[1]: Started cri-containerd-8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576.scope - libcontainer container 8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576. Jan 23 23:55:14.182898 containerd[1924]: time="2026-01-23T23:55:14.182574299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8lhs,Uid:97b72fc0-d33c-4e82-a86e-960039348f18,Namespace:kube-system,Attempt:0,} returns sandbox id \"d196e7239d63fac6d7a34335ee7d314e1fa8601a90585b81f9997c3033c06542\"" Jan 23 23:55:14.197172 containerd[1924]: time="2026-01-23T23:55:14.196957991Z" level=info msg="CreateContainer within sandbox \"d196e7239d63fac6d7a34335ee7d314e1fa8601a90585b81f9997c3033c06542\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:55:14.228965 containerd[1924]: time="2026-01-23T23:55:14.228855448Z" level=info msg="CreateContainer within sandbox \"d196e7239d63fac6d7a34335ee7d314e1fa8601a90585b81f9997c3033c06542\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b92f3ca47f435c789ba92e2820538bbcac89efefd074bc4c361645dc2cdf58ea\"" Jan 23 23:55:14.232132 containerd[1924]: time="2026-01-23T23:55:14.231542439Z" level=info msg="StartContainer for 
\"b92f3ca47f435c789ba92e2820538bbcac89efefd074bc4c361645dc2cdf58ea\"" Jan 23 23:55:14.269817 containerd[1924]: time="2026-01-23T23:55:14.269763372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zzd5q,Uid:22d7b97a-16ed-4979-82ca-706c6f272538,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\"" Jan 23 23:55:14.275541 containerd[1924]: time="2026-01-23T23:55:14.275462956Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 23:55:14.309551 systemd[1]: Started cri-containerd-b92f3ca47f435c789ba92e2820538bbcac89efefd074bc4c361645dc2cdf58ea.scope - libcontainer container b92f3ca47f435c789ba92e2820538bbcac89efefd074bc4c361645dc2cdf58ea. Jan 23 23:55:14.382859 containerd[1924]: time="2026-01-23T23:55:14.382794556Z" level=info msg="StartContainer for \"b92f3ca47f435c789ba92e2820538bbcac89efefd074bc4c361645dc2cdf58ea\" returns successfully" Jan 23 23:55:14.910757 kubelet[3232]: I0123 23:55:14.910264 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t8lhs" podStartSLOduration=1.9102421760000001 podStartE2EDuration="1.910242176s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:14.910179 +0000 UTC m=+7.451811724" watchObservedRunningTime="2026-01-23 23:55:14.910242176 +0000 UTC m=+7.451874900" Jan 23 23:55:15.713701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426406395.mount: Deactivated successfully. 
Jan 23 23:55:15.796823 containerd[1924]: time="2026-01-23T23:55:15.796246011Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.799211 containerd[1924]: time="2026-01-23T23:55:15.799095046Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564" Jan 23 23:55:15.801029 containerd[1924]: time="2026-01-23T23:55:15.800548433Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.808872 containerd[1924]: time="2026-01-23T23:55:15.808780179Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.812045 containerd[1924]: time="2026-01-23T23:55:15.811951804Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.536096491s" Jan 23 23:55:15.812045 containerd[1924]: time="2026-01-23T23:55:15.812028450Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jan 23 23:55:15.823075 containerd[1924]: time="2026-01-23T23:55:15.822989988Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 23:55:15.852652 containerd[1924]: time="2026-01-23T23:55:15.852583830Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db\"" Jan 23 23:55:15.854176 containerd[1924]: time="2026-01-23T23:55:15.854090091Z" level=info msg="StartContainer for \"5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db\"" Jan 23 23:55:15.923575 systemd[1]: Started cri-containerd-5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db.scope - libcontainer container 5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db. Jan 23 23:55:15.996873 containerd[1924]: time="2026-01-23T23:55:15.994469130Z" level=info msg="StartContainer for \"5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db\" returns successfully" Jan 23 23:55:15.997998 systemd[1]: cri-containerd-5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db.scope: Deactivated successfully. Jan 23 23:55:16.056452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db-rootfs.mount: Deactivated successfully. 
Jan 23 23:55:16.077071 containerd[1924]: time="2026-01-23T23:55:16.076979693Z" level=info msg="shim disconnected" id=5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db namespace=k8s.io Jan 23 23:55:16.077722 containerd[1924]: time="2026-01-23T23:55:16.077483765Z" level=warning msg="cleaning up after shim disconnected" id=5dec5fb5e390acd1ac73ca7bac96dc4e37bdc8f4269f816d6a2838986bcd09db namespace=k8s.io Jan 23 23:55:16.077722 containerd[1924]: time="2026-01-23T23:55:16.077566330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:16.102219 containerd[1924]: time="2026-01-23T23:55:16.101988672Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:55:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:55:16.924536 containerd[1924]: time="2026-01-23T23:55:16.924467319Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 23:55:19.764440 containerd[1924]: time="2026-01-23T23:55:19.764339025Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.769124 containerd[1924]: time="2026-01-23T23:55:19.769048475Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854" Jan 23 23:55:19.772279 containerd[1924]: time="2026-01-23T23:55:19.771954251Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.783918 containerd[1924]: time="2026-01-23T23:55:19.783796945Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.788011 containerd[1924]: 
time="2026-01-23T23:55:19.787771892Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 2.861633325s" Jan 23 23:55:19.788011 containerd[1924]: time="2026-01-23T23:55:19.787848058Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jan 23 23:55:19.800125 containerd[1924]: time="2026-01-23T23:55:19.800044917Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:55:19.833043 containerd[1924]: time="2026-01-23T23:55:19.832818584Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b\"" Jan 23 23:55:19.833852 containerd[1924]: time="2026-01-23T23:55:19.833722144Z" level=info msg="StartContainer for \"3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b\"" Jan 23 23:55:19.910572 systemd[1]: Started cri-containerd-3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b.scope - libcontainer container 3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b. Jan 23 23:55:19.973859 systemd[1]: cri-containerd-3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b.scope: Deactivated successfully. 
Jan 23 23:55:19.982639 kubelet[3232]: I0123 23:55:19.981846 3232 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 23:55:19.985725 containerd[1924]: time="2026-01-23T23:55:19.985528338Z" level=info msg="StartContainer for \"3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b\" returns successfully" Jan 23 23:55:20.076719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b-rootfs.mount: Deactivated successfully. Jan 23 23:55:20.092302 systemd[1]: Created slice kubepods-burstable-pod9058fc0e_2092_4aa4_962a_dd4abb957330.slice - libcontainer container kubepods-burstable-pod9058fc0e_2092_4aa4_962a_dd4abb957330.slice. Jan 23 23:55:20.099247 kubelet[3232]: I0123 23:55:20.096426 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9058fc0e-2092-4aa4-962a-dd4abb957330-config-volume\") pod \"coredns-66bc5c9577-7jn92\" (UID: \"9058fc0e-2092-4aa4-962a-dd4abb957330\") " pod="kube-system/coredns-66bc5c9577-7jn92" Jan 23 23:55:20.099247 kubelet[3232]: I0123 23:55:20.096507 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1008d00-1ff3-49fa-a9f7-78f1563cd91d-config-volume\") pod \"coredns-66bc5c9577-7mbhq\" (UID: \"f1008d00-1ff3-49fa-a9f7-78f1563cd91d\") " pod="kube-system/coredns-66bc5c9577-7mbhq" Jan 23 23:55:20.099247 kubelet[3232]: I0123 23:55:20.096551 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wt24\" (UniqueName: \"kubernetes.io/projected/f1008d00-1ff3-49fa-a9f7-78f1563cd91d-kube-api-access-4wt24\") pod \"coredns-66bc5c9577-7mbhq\" (UID: \"f1008d00-1ff3-49fa-a9f7-78f1563cd91d\") " pod="kube-system/coredns-66bc5c9577-7mbhq" Jan 23 23:55:20.099247 kubelet[3232]: I0123 
23:55:20.096595 3232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74jl\" (UniqueName: \"kubernetes.io/projected/9058fc0e-2092-4aa4-962a-dd4abb957330-kube-api-access-z74jl\") pod \"coredns-66bc5c9577-7jn92\" (UID: \"9058fc0e-2092-4aa4-962a-dd4abb957330\") " pod="kube-system/coredns-66bc5c9577-7jn92" Jan 23 23:55:20.118951 systemd[1]: Created slice kubepods-burstable-podf1008d00_1ff3_49fa_a9f7_78f1563cd91d.slice - libcontainer container kubepods-burstable-podf1008d00_1ff3_49fa_a9f7_78f1563cd91d.slice. Jan 23 23:55:20.232492 containerd[1924]: time="2026-01-23T23:55:20.232031714Z" level=info msg="shim disconnected" id=3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b namespace=k8s.io Jan 23 23:55:20.232492 containerd[1924]: time="2026-01-23T23:55:20.232121411Z" level=warning msg="cleaning up after shim disconnected" id=3459c8591671c87f68b3bd17ecd38a3de1b72265b59311e33f5c391a19d7958b namespace=k8s.io Jan 23 23:55:20.232492 containerd[1924]: time="2026-01-23T23:55:20.232143682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:20.411925 containerd[1924]: time="2026-01-23T23:55:20.411713319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7jn92,Uid:9058fc0e-2092-4aa4-962a-dd4abb957330,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:20.448052 containerd[1924]: time="2026-01-23T23:55:20.447957428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7mbhq,Uid:f1008d00-1ff3-49fa-a9f7-78f1563cd91d,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:20.499328 containerd[1924]: time="2026-01-23T23:55:20.499164579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7jn92,Uid:9058fc0e-2092-4aa4-962a-dd4abb957330,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14fbe59637993f6759998b772fbcefa91b29b14ae4bc1b44781a28d541f18182\": plugin type=\"flannel\" 
failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:20.500064 kubelet[3232]: E0123 23:55:20.499600 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fbe59637993f6759998b772fbcefa91b29b14ae4bc1b44781a28d541f18182\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:20.500064 kubelet[3232]: E0123 23:55:20.499700 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fbe59637993f6759998b772fbcefa91b29b14ae4bc1b44781a28d541f18182\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-7jn92" Jan 23 23:55:20.500064 kubelet[3232]: E0123 23:55:20.499742 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fbe59637993f6759998b772fbcefa91b29b14ae4bc1b44781a28d541f18182\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-7jn92" Jan 23 23:55:20.500064 kubelet[3232]: E0123 23:55:20.499826 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7jn92_kube-system(9058fc0e-2092-4aa4-962a-dd4abb957330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7jn92_kube-system(9058fc0e-2092-4aa4-962a-dd4abb957330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14fbe59637993f6759998b772fbcefa91b29b14ae4bc1b44781a28d541f18182\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such 
file or directory\"" pod="kube-system/coredns-66bc5c9577-7jn92" podUID="9058fc0e-2092-4aa4-962a-dd4abb957330" Jan 23 23:55:20.526738 containerd[1924]: time="2026-01-23T23:55:20.526545271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7mbhq,Uid:f1008d00-1ff3-49fa-a9f7-78f1563cd91d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdd51f66d19131bec1d6063abeec4822fbfa7c5c7f559ba09a73ad6c2503070f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:20.526991 kubelet[3232]: E0123 23:55:20.526902 3232 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd51f66d19131bec1d6063abeec4822fbfa7c5c7f559ba09a73ad6c2503070f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:20.527115 kubelet[3232]: E0123 23:55:20.527003 3232 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd51f66d19131bec1d6063abeec4822fbfa7c5c7f559ba09a73ad6c2503070f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-7mbhq" Jan 23 23:55:20.527115 kubelet[3232]: E0123 23:55:20.527061 3232 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd51f66d19131bec1d6063abeec4822fbfa7c5c7f559ba09a73ad6c2503070f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-7mbhq" Jan 23 23:55:20.527303 kubelet[3232]: E0123 23:55:20.527137 3232 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-66bc5c9577-7mbhq_kube-system(f1008d00-1ff3-49fa-a9f7-78f1563cd91d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7mbhq_kube-system(f1008d00-1ff3-49fa-a9f7-78f1563cd91d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdd51f66d19131bec1d6063abeec4822fbfa7c5c7f559ba09a73ad6c2503070f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-7mbhq" podUID="f1008d00-1ff3-49fa-a9f7-78f1563cd91d" Jan 23 23:55:20.958775 containerd[1924]: time="2026-01-23T23:55:20.958467763Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 23:55:20.999908 containerd[1924]: time="2026-01-23T23:55:20.999821590Z" level=info msg="CreateContainer within sandbox \"8c8ef63a6044bea01d98ff241dfd0079d7b33801eff4ed3df9c3672203c05576\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53\"" Jan 23 23:55:21.002236 containerd[1924]: time="2026-01-23T23:55:21.002134354Z" level=info msg="StartContainer for \"e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53\"" Jan 23 23:55:21.077516 systemd[1]: Started cri-containerd-e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53.scope - libcontainer container e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53. Jan 23 23:55:21.129602 containerd[1924]: time="2026-01-23T23:55:21.129538713Z" level=info msg="StartContainer for \"e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53\" returns successfully" Jan 23 23:55:21.821313 systemd[1]: run-containerd-runc-k8s.io-e8267512fe29bf29332408d42cff4ad358cba4bef74060b7eeba377563ca3c53-runc.aXomeZ.mount: Deactivated successfully. 
Jan 23 23:55:22.244356 (udev-worker)[3804]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:22.277166 systemd-networkd[1841]: flannel.1: Link UP Jan 23 23:55:22.277227 systemd-networkd[1841]: flannel.1: Gained carrier Jan 23 23:55:23.373590 systemd-networkd[1841]: flannel.1: Gained IPv6LL Jan 23 23:55:25.980014 ntpd[1908]: Listen normally on 8 flannel.1 192.168.0.0:123 Jan 23 23:55:25.980720 ntpd[1908]: 23 Jan 23:55:25 ntpd[1908]: Listen normally on 8 flannel.1 192.168.0.0:123 Jan 23 23:55:25.980720 ntpd[1908]: 23 Jan 23:55:25 ntpd[1908]: Listen normally on 9 flannel.1 [fe80::9440:92ff:fea0:2016%4]:123 Jan 23 23:55:25.980253 ntpd[1908]: Listen normally on 9 flannel.1 [fe80::9440:92ff:fea0:2016%4]:123 Jan 23 23:55:34.769455 containerd[1924]: time="2026-01-23T23:55:34.769394168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7mbhq,Uid:f1008d00-1ff3-49fa-a9f7-78f1563cd91d,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:34.773900 containerd[1924]: time="2026-01-23T23:55:34.773825774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7jn92,Uid:9058fc0e-2092-4aa4-962a-dd4abb957330,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:34.836894 systemd-networkd[1841]: cni0: Link UP Jan 23 23:55:34.836913 systemd-networkd[1841]: cni0: Gained carrier Jan 23 23:55:34.850795 systemd-networkd[1841]: vethf7fa926d: Link UP Jan 23 23:55:34.853497 (udev-worker)[3922]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:34.855593 systemd-networkd[1841]: cni0: Lost carrier Jan 23 23:55:34.860458 kernel: cni0: port 1(vethf7fa926d) entered blocking state Jan 23 23:55:34.860663 kernel: cni0: port 1(vethf7fa926d) entered disabled state Jan 23 23:55:34.863620 kernel: vethf7fa926d: entered allmulticast mode Jan 23 23:55:34.865143 kernel: vethf7fa926d: entered promiscuous mode Jan 23 23:55:34.869476 (udev-worker)[3925]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:55:34.880447 (udev-worker)[3923]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:34.894918 kernel: cni0: port 1(vethf7fa926d) entered blocking state Jan 23 23:55:34.894979 kernel: cni0: port 1(vethf7fa926d) entered forwarding state Jan 23 23:55:34.895021 kernel: cni0: port 2(veth1ba9ec24) entered blocking state Jan 23 23:55:34.885115 systemd-networkd[1841]: vethf7fa926d: Gained carrier Jan 23 23:55:34.887349 systemd-networkd[1841]: cni0: Gained carrier Jan 23 23:55:34.890632 systemd-networkd[1841]: veth1ba9ec24: Link UP Jan 23 23:55:34.898655 kernel: cni0: port 2(veth1ba9ec24) entered disabled state Jan 23 23:55:34.898817 containerd[1924]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000082950), "name":"cbr0", "type":"bridge"} Jan 23 23:55:34.898817 containerd[1924]: delegateAdd: netconf sent to delegate plugin: Jan 23 23:55:34.906222 kernel: veth1ba9ec24: entered allmulticast mode Jan 23 23:55:34.911230 kernel: veth1ba9ec24: entered promiscuous mode Jan 23 23:55:34.918851 kernel: cni0: port 2(veth1ba9ec24) entered blocking state Jan 23 23:55:34.918959 kernel: cni0: port 2(veth1ba9ec24) entered forwarding state Jan 23 23:55:34.929570 systemd-networkd[1841]: veth1ba9ec24: Gained carrier Jan 23 23:55:34.939642 containerd[1924]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"} Jan 23 23:55:34.939642 containerd[1924]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000082950), "name":"cbr0", "type":"bridge"} Jan 23 23:55:34.939642 containerd[1924]: delegateAdd: netconf sent to delegate plugin: Jan 23 23:55:34.996155 containerd[1924]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T23:55:34.995670939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:34.997086 containerd[1924]: time="2026-01-23T23:55:34.995795573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:34.998629 containerd[1924]: time="2026-01-23T23:55:34.997857940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:35.000974 containerd[1924]: time="2026-01-23T23:55:35.000761771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:35.031374 containerd[1924]: time="2026-01-23T23:55:35.031064015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:35.034746 containerd[1924]: time="2026-01-23T23:55:35.034302225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:35.034746 containerd[1924]: time="2026-01-23T23:55:35.034377047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:35.034971 containerd[1924]: time="2026-01-23T23:55:35.034786535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:35.069038 systemd[1]: Started cri-containerd-c7bc89dc13b7cc0d889592fc2c6e33f58076f4915dae00aae2c8cd68d623c0f4.scope - libcontainer container c7bc89dc13b7cc0d889592fc2c6e33f58076f4915dae00aae2c8cd68d623c0f4. Jan 23 23:55:35.083581 systemd[1]: Started cri-containerd-1c47b3adb3826efc7a34f2563e635cfefd56984df15a533c385c780ac59a8e42.scope - libcontainer container 1c47b3adb3826efc7a34f2563e635cfefd56984df15a533c385c780ac59a8e42. 
Jan 23 23:55:35.182419 containerd[1924]: time="2026-01-23T23:55:35.182337839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7jn92,Uid:9058fc0e-2092-4aa4-962a-dd4abb957330,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7bc89dc13b7cc0d889592fc2c6e33f58076f4915dae00aae2c8cd68d623c0f4\"" Jan 23 23:55:35.206321 containerd[1924]: time="2026-01-23T23:55:35.206172067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7mbhq,Uid:f1008d00-1ff3-49fa-a9f7-78f1563cd91d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c47b3adb3826efc7a34f2563e635cfefd56984df15a533c385c780ac59a8e42\"" Jan 23 23:55:35.209743 containerd[1924]: time="2026-01-23T23:55:35.209490465Z" level=info msg="CreateContainer within sandbox \"c7bc89dc13b7cc0d889592fc2c6e33f58076f4915dae00aae2c8cd68d623c0f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:35.223167 containerd[1924]: time="2026-01-23T23:55:35.223023640Z" level=info msg="CreateContainer within sandbox \"1c47b3adb3826efc7a34f2563e635cfefd56984df15a533c385c780ac59a8e42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:35.248203 containerd[1924]: time="2026-01-23T23:55:35.248117813Z" level=info msg="CreateContainer within sandbox \"c7bc89dc13b7cc0d889592fc2c6e33f58076f4915dae00aae2c8cd68d623c0f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79bccd3f930563b5cbec7f17b12934d4b497f6f60793442aad7d2493ee2b65f8\"" Jan 23 23:55:35.251335 containerd[1924]: time="2026-01-23T23:55:35.251252052Z" level=info msg="StartContainer for \"79bccd3f930563b5cbec7f17b12934d4b497f6f60793442aad7d2493ee2b65f8\"" Jan 23 23:55:35.256178 containerd[1924]: time="2026-01-23T23:55:35.256009753Z" level=info msg="CreateContainer within sandbox \"1c47b3adb3826efc7a34f2563e635cfefd56984df15a533c385c780ac59a8e42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"cb56d4be27fe03f15b34afb5ad2c1d0ea1d6e41580f44b700e67680ddbe25988\"" Jan 23 23:55:35.257498 containerd[1924]: time="2026-01-23T23:55:35.257221051Z" level=info msg="StartContainer for \"cb56d4be27fe03f15b34afb5ad2c1d0ea1d6e41580f44b700e67680ddbe25988\"" Jan 23 23:55:35.322395 systemd[1]: Started cri-containerd-79bccd3f930563b5cbec7f17b12934d4b497f6f60793442aad7d2493ee2b65f8.scope - libcontainer container 79bccd3f930563b5cbec7f17b12934d4b497f6f60793442aad7d2493ee2b65f8. Jan 23 23:55:35.335560 systemd[1]: Started cri-containerd-cb56d4be27fe03f15b34afb5ad2c1d0ea1d6e41580f44b700e67680ddbe25988.scope - libcontainer container cb56d4be27fe03f15b34afb5ad2c1d0ea1d6e41580f44b700e67680ddbe25988. Jan 23 23:55:35.414342 containerd[1924]: time="2026-01-23T23:55:35.414269509Z" level=info msg="StartContainer for \"79bccd3f930563b5cbec7f17b12934d4b497f6f60793442aad7d2493ee2b65f8\" returns successfully" Jan 23 23:55:35.424020 containerd[1924]: time="2026-01-23T23:55:35.423935659Z" level=info msg="StartContainer for \"cb56d4be27fe03f15b34afb5ad2c1d0ea1d6e41580f44b700e67680ddbe25988\" returns successfully" Jan 23 23:55:36.018071 kubelet[3232]: I0123 23:55:36.017921 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7mbhq" podStartSLOduration=23.01785684 podStartE2EDuration="23.01785684s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:36.016386753 +0000 UTC m=+28.558019489" watchObservedRunningTime="2026-01-23 23:55:36.01785684 +0000 UTC m=+28.559489552" Jan 23 23:55:36.018985 kubelet[3232]: I0123 23:55:36.018457 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zzd5q" podStartSLOduration=17.500995146 podStartE2EDuration="23.018442649s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="2026-01-23 
23:55:14.273656642 +0000 UTC m=+6.815289354" lastFinishedPulling="2026-01-23 23:55:19.791104145 +0000 UTC m=+12.332736857" observedRunningTime="2026-01-23 23:55:21.974330044 +0000 UTC m=+14.515962780" watchObservedRunningTime="2026-01-23 23:55:36.018442649 +0000 UTC m=+28.560075349" Jan 23 23:55:36.076967 kubelet[3232]: I0123 23:55:36.076325 3232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7jn92" podStartSLOduration=23.076166008 podStartE2EDuration="23.076166008s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:36.044445496 +0000 UTC m=+28.586078232" watchObservedRunningTime="2026-01-23 23:55:36.076166008 +0000 UTC m=+28.617798780" Jan 23 23:55:36.237672 systemd-networkd[1841]: veth1ba9ec24: Gained IPv6LL Jan 23 23:55:36.557595 systemd-networkd[1841]: cni0: Gained IPv6LL Jan 23 23:55:36.621512 systemd-networkd[1841]: vethf7fa926d: Gained IPv6LL Jan 23 23:55:38.979997 ntpd[1908]: Listen normally on 10 cni0 192.168.0.1:123 Jan 23 23:55:38.980161 ntpd[1908]: Listen normally on 11 cni0 [fe80::a81c:f1ff:fe99:3849%5]:123 Jan 23 23:55:38.980654 ntpd[1908]: 23 Jan 23:55:38 ntpd[1908]: Listen normally on 10 cni0 192.168.0.1:123 Jan 23 23:55:38.980654 ntpd[1908]: 23 Jan 23:55:38 ntpd[1908]: Listen normally on 11 cni0 [fe80::a81c:f1ff:fe99:3849%5]:123 Jan 23 23:55:38.980654 ntpd[1908]: 23 Jan 23:55:38 ntpd[1908]: Listen normally on 12 vethf7fa926d [fe80::ccf0:26ff:fe2e:4cc%6]:123 Jan 23 23:55:38.980654 ntpd[1908]: 23 Jan 23:55:38 ntpd[1908]: Listen normally on 13 veth1ba9ec24 [fe80::4fa:62ff:feaf:8096%7]:123 Jan 23 23:55:38.980300 ntpd[1908]: Listen normally on 12 vethf7fa926d [fe80::ccf0:26ff:fe2e:4cc%6]:123 Jan 23 23:55:38.980374 ntpd[1908]: Listen normally on 13 veth1ba9ec24 [fe80::4fa:62ff:feaf:8096%7]:123 Jan 23 23:55:46.313825 systemd[1]: Started 
sshd@7-172.31.21.163:22-4.153.228.146:50312.service - OpenSSH per-connection server daemon (4.153.228.146:50312). Jan 23 23:55:46.808357 sshd[4176]: Accepted publickey for core from 4.153.228.146 port 50312 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:46.811528 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:46.820347 systemd-logind[1913]: New session 8 of user core. Jan 23 23:55:46.827483 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:55:47.365497 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:47.372312 systemd[1]: sshd@7-172.31.21.163:22-4.153.228.146:50312.service: Deactivated successfully. Jan 23 23:55:47.379669 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:55:47.383174 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:55:47.385629 systemd-logind[1913]: Removed session 8. Jan 23 23:55:52.462755 systemd[1]: Started sshd@8-172.31.21.163:22-4.153.228.146:50314.service - OpenSSH per-connection server daemon (4.153.228.146:50314). Jan 23 23:55:52.971342 sshd[4215]: Accepted publickey for core from 4.153.228.146 port 50314 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:52.974238 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:52.983288 systemd-logind[1913]: New session 9 of user core. Jan 23 23:55:52.994658 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:55:53.443454 sshd[4215]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:53.450587 systemd[1]: sshd@8-172.31.21.163:22-4.153.228.146:50314.service: Deactivated successfully. Jan 23 23:55:53.455692 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:55:53.457708 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit. 
Jan 23 23:55:53.459550 systemd-logind[1913]: Removed session 9. Jan 23 23:55:58.553760 systemd[1]: Started sshd@9-172.31.21.163:22-4.153.228.146:42452.service - OpenSSH per-connection server daemon (4.153.228.146:42452). Jan 23 23:55:59.095837 sshd[4269]: Accepted publickey for core from 4.153.228.146 port 42452 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:59.098668 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:59.108277 systemd-logind[1913]: New session 10 of user core. Jan 23 23:55:59.116497 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:55:59.591997 sshd[4269]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:59.597778 systemd[1]: sshd@9-172.31.21.163:22-4.153.228.146:42452.service: Deactivated successfully. Jan 23 23:55:59.602907 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:55:59.607325 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:55:59.609152 systemd-logind[1913]: Removed session 10. Jan 23 23:55:59.679787 systemd[1]: Started sshd@10-172.31.21.163:22-4.153.228.146:42460.service - OpenSSH per-connection server daemon (4.153.228.146:42460). Jan 23 23:56:00.181563 sshd[4284]: Accepted publickey for core from 4.153.228.146 port 42460 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:00.184569 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:00.193360 systemd-logind[1913]: New session 11 of user core. Jan 23 23:56:00.204484 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:00.727529 sshd[4284]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:00.734651 systemd[1]: sshd@10-172.31.21.163:22-4.153.228.146:42460.service: Deactivated successfully. Jan 23 23:56:00.739807 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 23 23:56:00.741489 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:56:00.744614 systemd-logind[1913]: Removed session 11. Jan 23 23:56:00.832731 systemd[1]: Started sshd@11-172.31.21.163:22-4.153.228.146:42466.service - OpenSSH per-connection server daemon (4.153.228.146:42466). Jan 23 23:56:01.374949 sshd[4295]: Accepted publickey for core from 4.153.228.146 port 42466 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:01.378016 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:01.387954 systemd-logind[1913]: New session 12 of user core. Jan 23 23:56:01.395521 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:56:01.885686 sshd[4295]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:01.892650 systemd[1]: sshd@11-172.31.21.163:22-4.153.228.146:42466.service: Deactivated successfully. Jan 23 23:56:01.896630 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:56:01.898173 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:56:01.900077 systemd-logind[1913]: Removed session 12. Jan 23 23:56:06.978703 systemd[1]: Started sshd@12-172.31.21.163:22-4.153.228.146:59580.service - OpenSSH per-connection server daemon (4.153.228.146:59580). Jan 23 23:56:07.487653 sshd[4329]: Accepted publickey for core from 4.153.228.146 port 59580 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:07.490390 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:07.499680 systemd-logind[1913]: New session 13 of user core. Jan 23 23:56:07.505539 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:56:07.960550 sshd[4329]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:07.970171 systemd[1]: sshd@12-172.31.21.163:22-4.153.228.146:59580.service: Deactivated successfully. 
Jan 23 23:56:07.975115 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:56:07.980665 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:56:07.982977 systemd-logind[1913]: Removed session 13. Jan 23 23:56:08.065789 systemd[1]: Started sshd@13-172.31.21.163:22-4.153.228.146:59584.service - OpenSSH per-connection server daemon (4.153.228.146:59584). Jan 23 23:56:08.607755 sshd[4364]: Accepted publickey for core from 4.153.228.146 port 59584 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:08.610547 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:08.619601 systemd-logind[1913]: New session 14 of user core. Jan 23 23:56:08.627495 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:56:09.200545 sshd[4364]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:09.207817 systemd[1]: sshd@13-172.31.21.163:22-4.153.228.146:59584.service: Deactivated successfully. Jan 23 23:56:09.212304 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:56:09.215565 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:56:09.217486 systemd-logind[1913]: Removed session 14. Jan 23 23:56:09.286772 systemd[1]: Started sshd@14-172.31.21.163:22-4.153.228.146:59590.service - OpenSSH per-connection server daemon (4.153.228.146:59590). Jan 23 23:56:09.788969 sshd[4375]: Accepted publickey for core from 4.153.228.146 port 59590 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:09.791707 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:09.800942 systemd-logind[1913]: New session 15 of user core. Jan 23 23:56:09.807067 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 23:56:10.982438 sshd[4375]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:10.989063 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:56:10.990641 systemd[1]: sshd@14-172.31.21.163:22-4.153.228.146:59590.service: Deactivated successfully. Jan 23 23:56:10.993982 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:56:10.997819 systemd-logind[1913]: Removed session 15. Jan 23 23:56:11.098739 systemd[1]: Started sshd@15-172.31.21.163:22-4.153.228.146:59596.service - OpenSSH per-connection server daemon (4.153.228.146:59596). Jan 23 23:56:11.640941 sshd[4391]: Accepted publickey for core from 4.153.228.146 port 59596 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:11.643754 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:11.651907 systemd-logind[1913]: New session 16 of user core. Jan 23 23:56:11.666517 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:56:12.411598 sshd[4391]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:12.419609 systemd[1]: sshd@15-172.31.21.163:22-4.153.228.146:59596.service: Deactivated successfully. Jan 23 23:56:12.419677 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:56:12.424644 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:56:12.427751 systemd-logind[1913]: Removed session 16. Jan 23 23:56:12.501824 systemd[1]: Started sshd@16-172.31.21.163:22-4.153.228.146:59600.service - OpenSSH per-connection server daemon (4.153.228.146:59600). Jan 23 23:56:13.004720 sshd[4404]: Accepted publickey for core from 4.153.228.146 port 59600 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:13.009561 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:13.023893 systemd-logind[1913]: New session 17 of user core. 
Jan 23 23:56:13.031499 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:56:13.481685 sshd[4404]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:13.488375 systemd[1]: sshd@16-172.31.21.163:22-4.153.228.146:59600.service: Deactivated successfully. Jan 23 23:56:13.493781 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:56:13.497475 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:56:13.500231 systemd-logind[1913]: Removed session 17. Jan 23 23:56:18.593728 systemd[1]: Started sshd@17-172.31.21.163:22-4.153.228.146:47862.service - OpenSSH per-connection server daemon (4.153.228.146:47862). Jan 23 23:56:19.140273 sshd[4463]: Accepted publickey for core from 4.153.228.146 port 47862 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:19.142871 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:19.152141 systemd-logind[1913]: New session 18 of user core. Jan 23 23:56:19.157484 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:56:19.633792 sshd[4463]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:19.640380 systemd[1]: sshd@17-172.31.21.163:22-4.153.228.146:47862.service: Deactivated successfully. Jan 23 23:56:19.644870 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:56:19.646703 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:56:19.649230 systemd-logind[1913]: Removed session 18. Jan 23 23:56:24.726711 systemd[1]: Started sshd@18-172.31.21.163:22-4.153.228.146:52472.service - OpenSSH per-connection server daemon (4.153.228.146:52472). 
Jan 23 23:56:25.245778 sshd[4496]: Accepted publickey for core from 4.153.228.146 port 52472 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:25.248848 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:25.258884 systemd-logind[1913]: New session 19 of user core. Jan 23 23:56:25.266533 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:56:25.722587 sshd[4496]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:25.730404 systemd[1]: sshd@18-172.31.21.163:22-4.153.228.146:52472.service: Deactivated successfully. Jan 23 23:56:25.735028 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:56:25.739432 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:56:25.742255 systemd-logind[1913]: Removed session 19. Jan 23 23:56:39.520861 systemd[1]: cri-containerd-277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a.scope: Deactivated successfully. Jan 23 23:56:39.522314 systemd[1]: cri-containerd-277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a.scope: Consumed 4.657s CPU time, 18.5M memory peak, 0B memory swap peak. Jan 23 23:56:39.564770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:39.576740 containerd[1924]: time="2026-01-23T23:56:39.576406623Z" level=info msg="shim disconnected" id=277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a namespace=k8s.io Jan 23 23:56:39.576740 containerd[1924]: time="2026-01-23T23:56:39.576480796Z" level=warning msg="cleaning up after shim disconnected" id=277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a namespace=k8s.io Jan 23 23:56:39.576740 containerd[1924]: time="2026-01-23T23:56:39.576503968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:39.604055 kubelet[3232]: E0123 23:56:39.603924 3232 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": context deadline exceeded" Jan 23 23:56:40.196330 kubelet[3232]: I0123 23:56:40.195829 3232 scope.go:117] "RemoveContainer" containerID="277a3e040b79e40011534ea4f0378e37467f84bcb2b971534b09695ee6431d6a" Jan 23 23:56:40.201079 containerd[1924]: time="2026-01-23T23:56:40.201016040Z" level=info msg="CreateContainer within sandbox \"ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:56:40.234159 containerd[1924]: time="2026-01-23T23:56:40.234081717Z" level=info msg="CreateContainer within sandbox \"ac224f223e01f6bdfc9dca7d6dc84330a92c0d5a598bab0c1707b137d230827a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"09175eaf09161ab3c0e89041edb09844c062e71e9e494b3186355b65d80d4e5b\"" Jan 23 23:56:40.234978 containerd[1924]: time="2026-01-23T23:56:40.234924934Z" level=info msg="StartContainer for \"09175eaf09161ab3c0e89041edb09844c062e71e9e494b3186355b65d80d4e5b\"" Jan 23 23:56:40.294529 systemd[1]: Started cri-containerd-09175eaf09161ab3c0e89041edb09844c062e71e9e494b3186355b65d80d4e5b.scope - libcontainer container 
09175eaf09161ab3c0e89041edb09844c062e71e9e494b3186355b65d80d4e5b. Jan 23 23:56:40.374217 containerd[1924]: time="2026-01-23T23:56:40.374091800Z" level=info msg="StartContainer for \"09175eaf09161ab3c0e89041edb09844c062e71e9e494b3186355b65d80d4e5b\" returns successfully" Jan 23 23:56:45.087128 systemd[1]: cri-containerd-d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d.scope: Deactivated successfully. Jan 23 23:56:45.088224 systemd[1]: cri-containerd-d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d.scope: Consumed 4.382s CPU time, 16.0M memory peak, 0B memory swap peak. Jan 23 23:56:45.135618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d-rootfs.mount: Deactivated successfully. Jan 23 23:56:45.146986 containerd[1924]: time="2026-01-23T23:56:45.146636321Z" level=info msg="shim disconnected" id=d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d namespace=k8s.io Jan 23 23:56:45.146986 containerd[1924]: time="2026-01-23T23:56:45.146712715Z" level=warning msg="cleaning up after shim disconnected" id=d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d namespace=k8s.io Jan 23 23:56:45.146986 containerd[1924]: time="2026-01-23T23:56:45.146732405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:45.218517 kubelet[3232]: I0123 23:56:45.218467 3232 scope.go:117] "RemoveContainer" containerID="d482863789020a0b1117947034f19db913fdcc64eba59a3b85c09f579ace969d" Jan 23 23:56:45.221669 containerd[1924]: time="2026-01-23T23:56:45.221608941Z" level=info msg="CreateContainer within sandbox \"ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 23:56:45.253634 containerd[1924]: time="2026-01-23T23:56:45.253435287Z" level=info msg="CreateContainer within sandbox \"ac10e7a832b7f2ee82ab474fb21b5fcc595c698081704ad8583c3d0beee3f727\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a84cd944b0497fc6720aa1ffce1e44485dbfdba5d58df4384e366058c6782d2a\"" Jan 23 23:56:45.256240 containerd[1924]: time="2026-01-23T23:56:45.254179635Z" level=info msg="StartContainer for \"a84cd944b0497fc6720aa1ffce1e44485dbfdba5d58df4384e366058c6782d2a\"" Jan 23 23:56:45.310609 systemd[1]: Started cri-containerd-a84cd944b0497fc6720aa1ffce1e44485dbfdba5d58df4384e366058c6782d2a.scope - libcontainer container a84cd944b0497fc6720aa1ffce1e44485dbfdba5d58df4384e366058c6782d2a. Jan 23 23:56:45.387130 containerd[1924]: time="2026-01-23T23:56:45.386803734Z" level=info msg="StartContainer for \"a84cd944b0497fc6720aa1ffce1e44485dbfdba5d58df4384e366058c6782d2a\" returns successfully" Jan 23 23:56:49.604979 kubelet[3232]: E0123 23:56:49.604893 3232 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 23:56:59.606097 kubelet[3232]: E0123 23:56:59.605977 3232 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-163?timeout=10s\": context deadline exceeded"