Jan 23 23:53:59.278983 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:53:59.279031 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:53:59.279056 kernel: KASLR disabled due to lack of seed
Jan 23 23:53:59.279073 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:53:59.279089 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:53:59.279105 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:53:59.279123 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:53:59.279138 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:53:59.279155 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:53:59.279170 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:53:59.279191 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:53:59.279208 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:53:59.279224 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:53:59.279241 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:53:59.286328 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:53:59.286369 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:53:59.286388 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:53:59.286405 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:53:59.286422 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:53:59.286439 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:53:59.286455 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:53:59.286473 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:59.286490 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:53:59.286507 kernel: Zone ranges:
Jan 23 23:53:59.286523 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:53:59.286540 kernel:   DMA32  empty
Jan 23 23:53:59.286561 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:53:59.286578 kernel: Movable zone start for each node
Jan 23 23:53:59.286595 kernel: Early memory node ranges
Jan 23 23:53:59.286611 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:53:59.286628 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:53:59.286645 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:53:59.286662 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:53:59.286678 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:53:59.286695 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:53:59.286712 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:53:59.286728 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:53:59.286745 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:59.286800 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:53:59.286820 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:53:59.286845 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:53:59.286863 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:53:59.286882 kernel: psci: Trusted OS migration not required
Jan 23 23:53:59.286904 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:53:59.289596 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:53:59.289640 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:53:59.289659 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:53:59.289678 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:53:59.289697 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:53:59.289715 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:53:59.289733 kernel: CPU features: detected: Spectre-v2
Jan 23 23:53:59.289751 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:53:59.289769 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:53:59.289786 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:53:59.289815 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:53:59.289834 kernel: alternatives: applying boot alternatives
Jan 23 23:53:59.289855 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:59.289874 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:53:59.289892 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:53:59.289910 kernel: Fallback order for Node 0: 0
Jan 23 23:53:59.289928 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 23 23:53:59.289946 kernel: Policy zone: Normal
Jan 23 23:53:59.289964 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:53:59.289982 kernel: software IO TLB: area num 2.
Jan 23 23:53:59.290000 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:53:59.290024 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:53:59.290042 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:53:59.290060 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:53:59.290078 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:53:59.290097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:53:59.290115 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:53:59.290132 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:53:59.290150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:53:59.290168 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:53:59.290186 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:53:59.290203 kernel: GICv3: 96 SPIs implemented
Jan 23 23:53:59.290226 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:53:59.290244 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:53:59.290294 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:53:59.290315 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:53:59.290333 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:53:59.290351 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:53:59.290369 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:53:59.290387 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:53:59.290405 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:53:59.290423 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:53:59.290441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:53:59.290459 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:53:59.290485 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:53:59.290504 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:53:59.290522 kernel: Console: colour dummy device 80x25
Jan 23 23:53:59.290540 kernel: printk: console [tty1] enabled
Jan 23 23:53:59.290558 kernel: ACPI: Core revision 20230628
Jan 23 23:53:59.290576 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:53:59.290594 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:53:59.290612 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:53:59.290630 kernel: landlock: Up and running.
Jan 23 23:53:59.290653 kernel: SELinux: Initializing.
Jan 23 23:53:59.290672 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:59.290691 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:59.290709 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:59.290728 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:59.290746 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:53:59.290766 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:53:59.290784 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:53:59.290802 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:53:59.290825 kernel: Remapping and enabling EFI services.
Jan 23 23:53:59.290844 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:53:59.290862 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:53:59.290879 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:53:59.290898 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:53:59.290916 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:53:59.290933 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:53:59.290952 kernel: SMP: Total of 2 processors activated.
Jan 23 23:53:59.290970 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:53:59.290993 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:53:59.291012 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:53:59.291030 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:53:59.291060 kernel: alternatives: applying system-wide alternatives
Jan 23 23:53:59.291085 kernel: devtmpfs: initialized
Jan 23 23:53:59.291105 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:53:59.291124 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:53:59.291144 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:53:59.291163 kernel: SMBIOS 3.0.0 present.
Jan 23 23:53:59.291189 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:53:59.291208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:53:59.291227 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:53:59.291247 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:53:59.293338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:53:59.293363 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:53:59.293383 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Jan 23 23:53:59.293403 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:53:59.293432 kernel: cpuidle: using governor menu
Jan 23 23:53:59.293451 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:53:59.293470 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:53:59.293489 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:53:59.293508 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:53:59.293527 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:53:59.293547 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:53:59.293566 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:53:59.293585 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:53:59.293609 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:53:59.293628 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:53:59.293647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:53:59.293666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:53:59.293684 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:53:59.293703 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:53:59.293722 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:53:59.293741 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:53:59.293760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:53:59.293783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:53:59.293802 kernel: ACPI: Interpreter enabled
Jan 23 23:53:59.293821 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:53:59.293839 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:53:59.293858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:53:59.294182 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:53:59.294451 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:53:59.294675 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:53:59.294899 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:53:59.295108 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:53:59.295135 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:53:59.295155 kernel: acpiphp: Slot [1] registered
Jan 23 23:53:59.295174 kernel: acpiphp: Slot [2] registered
Jan 23 23:53:59.295193 kernel: acpiphp: Slot [3] registered
Jan 23 23:53:59.295212 kernel: acpiphp: Slot [4] registered
Jan 23 23:53:59.295231 kernel: acpiphp: Slot [5] registered
Jan 23 23:53:59.297301 kernel: acpiphp: Slot [6] registered
Jan 23 23:53:59.297331 kernel: acpiphp: Slot [7] registered
Jan 23 23:53:59.297351 kernel: acpiphp: Slot [8] registered
Jan 23 23:53:59.297370 kernel: acpiphp: Slot [9] registered
Jan 23 23:53:59.297389 kernel: acpiphp: Slot [10] registered
Jan 23 23:53:59.297408 kernel: acpiphp: Slot [11] registered
Jan 23 23:53:59.297426 kernel: acpiphp: Slot [12] registered
Jan 23 23:53:59.297445 kernel: acpiphp: Slot [13] registered
Jan 23 23:53:59.297464 kernel: acpiphp: Slot [14] registered
Jan 23 23:53:59.297482 kernel: acpiphp: Slot [15] registered
Jan 23 23:53:59.297511 kernel: acpiphp: Slot [16] registered
Jan 23 23:53:59.297530 kernel: acpiphp: Slot [17] registered
Jan 23 23:53:59.297549 kernel: acpiphp: Slot [18] registered
Jan 23 23:53:59.297568 kernel: acpiphp: Slot [19] registered
Jan 23 23:53:59.297587 kernel: acpiphp: Slot [20] registered
Jan 23 23:53:59.297605 kernel: acpiphp: Slot [21] registered
Jan 23 23:53:59.297624 kernel: acpiphp: Slot [22] registered
Jan 23 23:53:59.297643 kernel: acpiphp: Slot [23] registered
Jan 23 23:53:59.297662 kernel: acpiphp: Slot [24] registered
Jan 23 23:53:59.297686 kernel: acpiphp: Slot [25] registered
Jan 23 23:53:59.297706 kernel: acpiphp: Slot [26] registered
Jan 23 23:53:59.297725 kernel: acpiphp: Slot [27] registered
Jan 23 23:53:59.297744 kernel: acpiphp: Slot [28] registered
Jan 23 23:53:59.297762 kernel: acpiphp: Slot [29] registered
Jan 23 23:53:59.297780 kernel: acpiphp: Slot [30] registered
Jan 23 23:53:59.297799 kernel: acpiphp: Slot [31] registered
Jan 23 23:53:59.297818 kernel: PCI host bridge to bus 0000:00
Jan 23 23:53:59.298072 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:53:59.298309 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:53:59.298507 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:59.298699 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:53:59.298955 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:53:59.299211 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:53:59.301615 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:53:59.301900 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:53:59.302123 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:53:59.302445 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:59.302682 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:53:59.302900 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:53:59.303117 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:53:59.308483 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:53:59.308751 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:59.308975 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:53:59.309170 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:53:59.309448 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:59.309481 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:53:59.309503 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:53:59.309523 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:53:59.309543 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:53:59.309574 kernel: iommu: Default domain type: Translated
Jan 23 23:53:59.309595 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:53:59.309614 kernel: efivars: Registered efivars operations
Jan 23 23:53:59.309633 kernel: vgaarb: loaded
Jan 23 23:53:59.309653 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:53:59.309674 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:53:59.309694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:53:59.309713 kernel: pnp: PnP ACPI init
Jan 23 23:53:59.309970 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:53:59.310010 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:53:59.310039 kernel: NET: Registered PF_INET protocol family
Jan 23 23:53:59.310063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:53:59.310082 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:53:59.310102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:53:59.310121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:53:59.310140 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:53:59.310159 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:53:59.310183 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:59.310203 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:59.310222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:53:59.310241 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:53:59.313365 kernel: kvm [1]: HYP mode not available
Jan 23 23:53:59.313400 kernel: Initialise system trusted keyrings
Jan 23 23:53:59.313422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:53:59.313441 kernel: Key type asymmetric registered
Jan 23 23:53:59.313460 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:53:59.313491 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:53:59.313512 kernel: io scheduler mq-deadline registered
Jan 23 23:53:59.313530 kernel: io scheduler kyber registered
Jan 23 23:53:59.313549 kernel: io scheduler bfq registered
Jan 23 23:53:59.313829 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:53:59.313861 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:53:59.313881 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:53:59.313901 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:53:59.313920 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:53:59.313946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:53:59.313966 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:53:59.314185 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:53:59.314214 kernel: printk: console [ttyS0] disabled
Jan 23 23:53:59.314234 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:53:59.314293 kernel: printk: console [ttyS0] enabled
Jan 23 23:53:59.314320 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:53:59.314339 kernel: thunder_xcv, ver 1.0
Jan 23 23:53:59.314358 kernel: thunder_bgx, ver 1.0
Jan 23 23:53:59.314384 kernel: nicpf, ver 1.0
Jan 23 23:53:59.314403 kernel: nicvf, ver 1.0
Jan 23 23:53:59.314638 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:53:59.314841 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:53:58 UTC (1769212438)
Jan 23 23:53:59.314869 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:53:59.314889 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:53:59.314908 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:53:59.314926 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:53:59.314952 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:53:59.314971 kernel: Segment Routing with IPv6
Jan 23 23:53:59.314991 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:53:59.315010 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:53:59.315029 kernel: Key type dns_resolver registered
Jan 23 23:53:59.315048 kernel: registered taskstats version 1
Jan 23 23:53:59.315067 kernel: Loading compiled-in X.509 certificates
Jan 23 23:53:59.315086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:53:59.315104 kernel: Key type .fscrypt registered
Jan 23 23:53:59.315129 kernel: Key type fscrypt-provisioning registered
Jan 23 23:53:59.315147 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:53:59.315166 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:53:59.315185 kernel: ima: No architecture policies found
Jan 23 23:53:59.315203 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:53:59.315222 kernel: clk: Disabling unused clocks
Jan 23 23:53:59.315242 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:53:59.316244 kernel: Run /init as init process
Jan 23 23:53:59.316319 kernel:   with arguments:
Jan 23 23:53:59.316355 kernel:     /init
Jan 23 23:53:59.316378 kernel:   with environment:
Jan 23 23:53:59.316398 kernel:     HOME=/
Jan 23 23:53:59.316417 kernel:     TERM=linux
Jan 23 23:53:59.316442 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:53:59.316467 systemd[1]: Detected virtualization amazon.
Jan 23 23:53:59.316488 systemd[1]: Detected architecture arm64.
Jan 23 23:53:59.316508 systemd[1]: Running in initrd.
Jan 23 23:53:59.316536 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:53:59.316556 systemd[1]: Hostname set to .
Jan 23 23:53:59.316577 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:53:59.316597 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:53:59.316618 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:53:59.316639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:53:59.316662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:53:59.316683 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:53:59.316710 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:53:59.316732 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:53:59.316756 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:53:59.316778 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:53:59.316799 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:53:59.316820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:53:59.316846 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:53:59.316867 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:53:59.316888 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:53:59.316908 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:53:59.316929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:53:59.316950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:53:59.316971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:53:59.316991 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:53:59.317012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:53:59.317038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:53:59.317059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:53:59.317079 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:53:59.317100 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:53:59.317120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:53:59.317141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:53:59.317161 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:53:59.317182 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:53:59.317203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:53:59.317229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:59.317249 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:53:59.317307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:53:59.317329 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:53:59.317406 systemd-journald[252]: Collecting audit messages is disabled.
Jan 23 23:53:59.317458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:53:59.317480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:59.317501 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:59.317527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:53:59.317547 systemd-journald[252]: Journal started
Jan 23 23:53:59.317585 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2647e78ab791cad33e7f76fe1b22da) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:53:59.323400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:53:59.268344 systemd-modules-load[253]: Inserted module 'overlay'
Jan 23 23:53:59.330440 kernel: Bridge firewalling registered
Jan 23 23:53:59.330482 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:53:59.324738 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 23 23:53:59.333945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:53:59.352547 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:53:59.365823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:53:59.375981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:53:59.389393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:59.411538 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:53:59.418605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:53:59.435239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:53:59.446528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:53:59.465681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:53:59.489358 dracut-cmdline[285]: dracut-dracut-053
Jan 23 23:53:59.497878 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:59.560066 systemd-resolved[290]: Positive Trust Anchors:
Jan 23 23:53:59.560109 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:53:59.560175 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:53:59.686314 kernel: SCSI subsystem initialized
Jan 23 23:53:59.694383 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:53:59.707379 kernel: iscsi: registered transport (tcp)
Jan 23 23:53:59.730706 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:53:59.730794 kernel: QLogic iSCSI HBA Driver
Jan 23 23:53:59.801381 kernel: random: crng init done
Jan 23 23:53:59.802193 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jan 23 23:53:59.806941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:53:59.809829 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:53:59.846332 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:53:59.857605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:53:59.903237 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:53:59.903371 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:53:59.905292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:53:59.971305 kernel: raid6: neonx8   gen()  6767 MB/s
Jan 23 23:53:59.988292 kernel: raid6: neonx4   gen()  6587 MB/s
Jan 23 23:54:00.005296 kernel: raid6: neonx2   gen()  5459 MB/s
Jan 23 23:54:00.023296 kernel: raid6: neonx1   gen()  3965 MB/s
Jan 23 23:54:00.040294 kernel: raid6: int64x8  gen()  3826 MB/s
Jan 23 23:54:00.057298 kernel: raid6: int64x4  gen()  3732 MB/s
Jan 23 23:54:00.074300 kernel: raid6: int64x2  gen()  3621 MB/s
Jan 23 23:54:00.092415 kernel: raid6: int64x1  gen()  2765 MB/s
Jan 23 23:54:00.092465 kernel: raid6: using algorithm neonx8 gen() 6767 MB/s
Jan 23 23:54:00.111384 kernel: raid6: .... xor() 4816 MB/s, rmw enabled
Jan 23 23:54:00.111440 kernel: raid6: using neon recovery algorithm
Jan 23 23:54:00.120615 kernel: xor: measuring software checksum speed
Jan 23 23:54:00.120676 kernel:    8regs           : 10599 MB/sec
Jan 23 23:54:00.121883 kernel:    32regs          : 11970 MB/sec
Jan 23 23:54:00.123248 kernel:    arm64_neon      :  9586 MB/sec
Jan 23 23:54:00.123296 kernel: xor: using function: 32regs (11970 MB/sec)
Jan 23 23:54:00.209312 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:54:00.229082 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:00.242637 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:00.291861 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 23 23:54:00.300462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:54:00.313926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:54:00.360823 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jan 23 23:54:00.423805 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:54:00.434612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:54:00.560331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:00.575680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:54:00.618329 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:54:00.624587 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:54:00.628421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:00.638125 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:54:00.649071 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:54:00.699716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:54:00.751296 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 23:54:00.751373 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 23:54:00.763776 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 23:54:00.764218 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 23:54:00.786317 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:0d:1a:80:af:af Jan 23 23:54:00.795702 (udev-worker)[543]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:54:00.801686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 23 23:54:00.802371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:54:00.808785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:54:00.813797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:54:00.814105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:00.817508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:54:00.845520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:54:00.851952 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 23:54:00.851996 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 23:54:00.867304 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 23:54:00.878708 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 23:54:00.878810 kernel: GPT:9289727 != 33554431 Jan 23 23:54:00.878852 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 23:54:00.881493 kernel: GPT:9289727 != 33554431 Jan 23 23:54:00.881580 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 23:54:00.881623 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:54:00.882938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:00.893730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:54:00.936697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 23:54:01.012058 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (517) Jan 23 23:54:01.020328 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (521) Jan 23 23:54:01.116052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 23:54:01.137041 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 23:54:01.169218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 23:54:01.172224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 23:54:01.192432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:54:01.203746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:54:01.227305 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:54:01.228550 disk-uuid[661]: Primary Header is updated. Jan 23 23:54:01.228550 disk-uuid[661]: Secondary Entries is updated. Jan 23 23:54:01.228550 disk-uuid[661]: Secondary Header is updated. Jan 23 23:54:01.262310 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:54:01.271292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:54:02.278304 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:54:02.282652 disk-uuid[662]: The operation has completed successfully. Jan 23 23:54:02.487898 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:54:02.493647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:54:02.559568 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 23 23:54:02.571896 sh[1005]: Success Jan 23 23:54:02.589330 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:54:02.683247 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:54:02.708633 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:54:02.720127 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:54:02.747067 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:54:02.747134 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:54:02.747163 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:54:02.750820 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:54:02.750906 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:54:02.861308 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 23:54:02.872580 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:54:02.873677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:54:02.888644 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:54:02.895611 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:54:02.920964 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:54:02.921028 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:54:02.921056 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:54:02.936285 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:54:02.954487 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 23 23:54:02.959322 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:54:02.969556 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:54:02.982668 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:54:03.112136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:54:03.124853 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:54:03.180490 systemd-networkd[1198]: lo: Link UP Jan 23 23:54:03.180513 systemd-networkd[1198]: lo: Gained carrier Jan 23 23:54:03.183711 systemd-networkd[1198]: Enumeration completed Jan 23 23:54:03.184919 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:03.184926 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:54:03.186710 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:54:03.198143 systemd-networkd[1198]: eth0: Link UP Jan 23 23:54:03.198150 systemd-networkd[1198]: eth0: Gained carrier Jan 23 23:54:03.198168 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:03.201398 systemd[1]: Reached target network.target - Network. 
Jan 23 23:54:03.228442 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.31.113/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:54:03.467635 ignition[1108]: Ignition 2.19.0 Jan 23 23:54:03.469529 ignition[1108]: Stage: fetch-offline Jan 23 23:54:03.473172 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:03.473224 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:03.478790 ignition[1108]: Ignition finished successfully Jan 23 23:54:03.483716 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:54:03.495758 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:54:03.529779 ignition[1208]: Ignition 2.19.0 Jan 23 23:54:03.529801 ignition[1208]: Stage: fetch Jan 23 23:54:03.530489 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:03.530516 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:03.530695 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:03.551792 ignition[1208]: PUT result: OK Jan 23 23:54:03.556279 ignition[1208]: parsed url from cmdline: "" Jan 23 23:54:03.556305 ignition[1208]: no config URL provided Jan 23 23:54:03.556323 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:54:03.556354 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:54:03.556396 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:03.563176 ignition[1208]: PUT result: OK Jan 23 23:54:03.565773 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 23:54:03.575310 ignition[1208]: GET result: OK Jan 23 23:54:03.577299 ignition[1208]: parsing config with SHA512: 19ce4baee7fc670dd9c0e1e1c33562c049387d6744d0f3e260e7df6d67ca7611ed6729e0264ed39d39547e4da47f86dd2216d4d14c0269af0ff795d2cf7c9e04 Jan 23 23:54:03.588128 unknown[1208]: fetched base config from "system" Jan 23 
23:54:03.589347 ignition[1208]: fetch: fetch complete Jan 23 23:54:03.588168 unknown[1208]: fetched base config from "system" Jan 23 23:54:03.589362 ignition[1208]: fetch: fetch passed Jan 23 23:54:03.588194 unknown[1208]: fetched user config from "aws" Jan 23 23:54:03.589493 ignition[1208]: Ignition finished successfully Jan 23 23:54:03.601774 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:54:03.616741 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:54:03.657080 ignition[1215]: Ignition 2.19.0 Jan 23 23:54:03.657117 ignition[1215]: Stage: kargs Jan 23 23:54:03.659114 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:03.659148 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:03.659372 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:03.669008 ignition[1215]: PUT result: OK Jan 23 23:54:03.674624 ignition[1215]: kargs: kargs passed Jan 23 23:54:03.674761 ignition[1215]: Ignition finished successfully Jan 23 23:54:03.681624 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:54:03.692172 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:54:03.725471 ignition[1221]: Ignition 2.19.0 Jan 23 23:54:03.725503 ignition[1221]: Stage: disks Jan 23 23:54:03.727695 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:03.727776 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:03.729185 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:03.734869 ignition[1221]: PUT result: OK Jan 23 23:54:03.742718 ignition[1221]: disks: disks passed Jan 23 23:54:03.743085 ignition[1221]: Ignition finished successfully Jan 23 23:54:03.745856 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:54:03.751500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 23 23:54:03.755856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:54:03.761642 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:54:03.764561 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:54:03.769772 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:54:03.785081 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:54:03.838059 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 23 23:54:03.847400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:54:03.864685 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:54:03.957816 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:54:03.959004 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:54:03.962025 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:54:03.983543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:54:03.991762 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:54:03.996970 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 23:54:04.001701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:54:04.002354 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 23:54:04.026319 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1248) Jan 23 23:54:04.032247 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:54:04.032341 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:54:04.034696 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:54:04.038249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 23:54:04.048719 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:54:04.058314 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:54:04.062877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:54:04.399005 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:54:04.421096 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:54:04.431398 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:54:04.443326 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:54:04.798477 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:54:04.810596 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:54:04.816501 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:54:04.818658 systemd-networkd[1198]: eth0: Gained IPv6LL Jan 23 23:54:04.841106 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:54:04.846360 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:54:04.887275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 23:54:04.896230 ignition[1361]: INFO : Ignition 2.19.0 Jan 23 23:54:04.896230 ignition[1361]: INFO : Stage: mount Jan 23 23:54:04.900042 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:04.900042 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:04.900042 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:04.908416 ignition[1361]: INFO : PUT result: OK Jan 23 23:54:04.913574 ignition[1361]: INFO : mount: mount passed Jan 23 23:54:04.915433 ignition[1361]: INFO : Ignition finished successfully Jan 23 23:54:04.919848 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:54:04.930608 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:54:04.969408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:54:04.989306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Jan 23 23:54:04.994546 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:54:04.994595 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:54:04.994623 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:54:05.001297 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:54:05.005377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:54:05.041470 ignition[1391]: INFO : Ignition 2.19.0 Jan 23 23:54:05.041470 ignition[1391]: INFO : Stage: files Jan 23 23:54:05.045563 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:05.045563 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:05.045563 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:05.053868 ignition[1391]: INFO : PUT result: OK Jan 23 23:54:05.057902 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:54:05.062213 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:54:05.062213 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:54:05.119116 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:54:05.122468 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:54:05.127099 unknown[1391]: wrote ssh authorized keys file for user: core Jan 23 23:54:05.129972 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:54:05.134805 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:54:05.134805 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 23:54:05.260207 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:54:05.459335 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:54:05.486438 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 23:54:05.929339 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 23:54:06.332437 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:54:06.332437 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:54:06.341400 ignition[1391]: INFO : files: files passed Jan 23 23:54:06.341400 ignition[1391]: INFO : Ignition finished successfully Jan 23 23:54:06.375034 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:54:06.389618 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 23 23:54:06.401635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:54:06.413524 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:54:06.413990 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:54:06.437720 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:06.437720 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:06.447696 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:06.453414 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:06.457753 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:54:06.476750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:54:06.538477 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:54:06.538975 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:54:06.548641 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:54:06.551486 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:54:06.558804 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:54:06.570712 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:54:06.604127 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:54:06.614600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:54:06.649070 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 23:54:06.656061 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:06.661743 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:54:06.668087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:54:06.668639 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:54:06.677752 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:54:06.680950 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:54:06.688007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:54:06.690818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:54:06.697493 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:54:06.701761 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:54:06.704977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:54:06.710202 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:54:06.713151 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:54:06.716664 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:54:06.720176 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:54:06.720559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:54:06.734389 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:54:06.743555 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:54:06.746576 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:54:06.750931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:54:06.751413 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 23 23:54:06.751765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:54:06.762805 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:54:06.763207 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:06.769401 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:54:06.769819 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:54:06.792223 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:54:06.794554 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:54:06.795193 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:54:06.815226 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:54:06.820473 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:54:06.824317 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:06.831031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:54:06.831317 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:54:06.848890 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:54:06.851597 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 23:54:06.864219 ignition[1443]: INFO : Ignition 2.19.0 Jan 23 23:54:06.864219 ignition[1443]: INFO : Stage: umount Jan 23 23:54:06.864219 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:06.864219 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:06.864219 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:06.864219 ignition[1443]: INFO : PUT result: OK Jan 23 23:54:06.885454 ignition[1443]: INFO : umount: umount passed Jan 23 23:54:06.885454 ignition[1443]: INFO : Ignition finished successfully Jan 23 23:54:06.895742 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:54:06.897143 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:54:06.899350 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:54:06.905593 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:54:06.905829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:54:06.910790 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:54:06.910997 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:54:06.915731 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:54:06.915859 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:54:06.917582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:54:06.917687 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:54:06.924442 systemd[1]: Stopped target network.target - Network. Jan 23 23:54:06.926584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:54:06.926717 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:54:06.929616 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:54:06.932272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 23 23:54:06.939191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:54:06.942331 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:54:06.944658 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:54:06.947160 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:54:06.947869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:54:06.958770 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:54:06.958866 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:54:06.961515 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:54:06.961633 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:54:06.964115 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:54:06.964232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:54:06.967384 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:54:06.967521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:54:06.988526 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:54:06.993424 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:54:06.997362 systemd-networkd[1198]: eth0: DHCPv6 lease lost Jan 23 23:54:07.003118 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:54:07.003464 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:54:07.012670 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:54:07.012760 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:54:07.038128 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:54:07.056155 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 23 23:54:07.056317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:07.064025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:07.082143 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:54:07.083119 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:07.101562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:54:07.102685 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:07.108354 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:54:07.108454 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:07.111199 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:54:07.111296 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:07.133231 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:54:07.133796 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:07.142576 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:54:07.142730 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:07.150734 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:54:07.150831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:07.151700 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:54:07.151810 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:07.164172 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:54:07.164308 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:54:07.167898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:54:07.168016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:07.185565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:54:07.189041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:54:07.189158 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:07.199227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:54:07.199369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:07.202800 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:54:07.202998 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:54:07.217180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:54:07.219316 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:54:07.226467 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:54:07.242244 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:54:07.260295 systemd[1]: Switching root.
Jan 23 23:54:07.317593 systemd-journald[252]: Journal stopped
Jan 23 23:54:09.740868 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Jan 23 23:54:09.740999 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 23:54:09.741055 kernel: SELinux: policy capability open_perms=1
Jan 23 23:54:09.741088 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 23:54:09.741131 kernel: SELinux: policy capability always_check_network=0
Jan 23 23:54:09.741163 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 23:54:09.741194 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 23:54:09.741232 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 23:54:09.742330 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 23:54:09.742394 kernel: audit: type=1403 audit(1769212447.831:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 23:54:09.742442 systemd[1]: Successfully loaded SELinux policy in 64.457ms.
Jan 23 23:54:09.742493 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.520ms.
Jan 23 23:54:09.742530 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:54:09.742563 systemd[1]: Detected virtualization amazon.
Jan 23 23:54:09.742621 systemd[1]: Detected architecture arm64.
Jan 23 23:54:09.742661 systemd[1]: Detected first boot.
Jan 23 23:54:09.742711 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:54:09.742748 zram_generator::config[1485]: No configuration found.
Jan 23 23:54:09.742783 systemd[1]: Populated /etc with preset unit settings.
Jan 23 23:54:09.742818 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 23:54:09.751618 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 23:54:09.751657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:54:09.751694 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 23:54:09.751727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 23:54:09.751770 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:54:09.751803 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 23:54:09.751837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 23:54:09.751884 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 23:54:09.751916 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 23:54:09.751951 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 23:54:09.751984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:09.752018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:09.752049 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 23:54:09.752088 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 23:54:09.752124 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 23:54:09.752158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:54:09.752192 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 23:54:09.752225 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:09.752284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 23:54:09.752324 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 23:54:09.752358 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:09.752397 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 23:54:09.752430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:09.752463 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:54:09.752495 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:54:09.752530 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:54:09.752561 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 23:54:09.752599 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 23:54:09.752629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:09.752661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:09.752701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:09.752732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 23:54:09.752764 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 23:54:09.752799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 23:54:09.752829 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 23:54:09.752861 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 23:54:09.752893 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 23:54:09.752924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 23:54:09.753910 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 23:54:09.753962 systemd[1]: Reached target machines.target - Containers.
Jan 23 23:54:09.753993 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 23:54:09.754026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:09.754059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:54:09.754091 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 23:54:09.754125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:09.754158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:09.754188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:09.754224 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 23:54:09.754300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:09.754336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 23:54:09.754367 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 23:54:09.754398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 23:54:09.754432 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 23:54:09.754462 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 23:54:09.757154 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:54:09.757223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:54:09.757272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 23:54:09.757313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 23:54:09.757344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:54:09.757376 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 23:54:09.757407 systemd[1]: Stopped verity-setup.service.
Jan 23 23:54:09.757436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 23:54:09.757468 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 23:54:09.757498 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 23:54:09.757535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 23:54:09.757565 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 23:54:09.757599 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 23:54:09.757685 systemd-journald[1567]: Collecting audit messages is disabled.
Jan 23 23:54:09.757752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:09.757784 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 23:54:09.757815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 23:54:09.757854 systemd-journald[1567]: Journal started
Jan 23 23:54:09.757905 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec2647e78ab791cad33e7f76fe1b22da) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:54:09.155704 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 23:54:09.224516 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 23:54:09.225363 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 23:54:09.765271 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:54:09.769550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:09.773322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:09.776956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:09.778450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:09.784111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 23:54:09.788829 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 23:54:09.818284 kernel: loop: module loaded
Jan 23 23:54:09.825961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:09.830675 kernel: fuse: init (API version 7.39)
Jan 23 23:54:09.830446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:09.836755 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 23:54:09.839051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 23:54:09.848588 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:09.860816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 23:54:09.873685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 23:54:09.891303 kernel: ACPI: bus type drm_connector registered
Jan 23 23:54:09.899548 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 23:54:09.902482 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 23:54:09.902548 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:09.912212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 23 23:54:09.924548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 23:54:09.937600 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 23:54:09.940242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:09.947651 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 23:54:09.958685 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 23:54:09.961468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:09.965628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 23:54:09.968524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:09.976629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:54:09.984993 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 23:54:09.994009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 23:54:09.998115 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:09.999878 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:10.003051 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 23:54:10.006652 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 23:54:10.010206 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 23:54:10.034787 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 23:54:10.054584 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec2647e78ab791cad33e7f76fe1b22da is 150.207ms for 899 entries.
Jan 23 23:54:10.054584 systemd-journald[1567]: System Journal (/var/log/journal/ec2647e78ab791cad33e7f76fe1b22da) is 8.0M, max 195.6M, 187.6M free.
Jan 23 23:54:10.232586 systemd-journald[1567]: Received client request to flush runtime journal.
Jan 23 23:54:10.232693 kernel: loop0: detected capacity change from 0 to 114432
Jan 23 23:54:10.086313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 23:54:10.089677 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 23:54:10.098823 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 23 23:54:10.281161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 23:54:10.299468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:10.303638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 23:54:10.307448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:10.318036 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 23:54:10.327368 kernel: loop1: detected capacity change from 0 to 52536
Jan 23 23:54:10.322218 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 23:54:10.328931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 23 23:54:10.349571 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:54:10.375991 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 23 23:54:10.409524 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Jan 23 23:54:10.409560 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Jan 23 23:54:10.423793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:10.438957 udevadm[1633]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 23 23:54:10.466324 kernel: loop2: detected capacity change from 0 to 114328
Jan 23 23:54:10.580326 kernel: loop3: detected capacity change from 0 to 211168
Jan 23 23:54:10.784299 kernel: loop4: detected capacity change from 0 to 114432
Jan 23 23:54:10.810942 kernel: loop5: detected capacity change from 0 to 52536
Jan 23 23:54:10.832517 kernel: loop6: detected capacity change from 0 to 114328
Jan 23 23:54:10.850303 kernel: loop7: detected capacity change from 0 to 211168
Jan 23 23:54:10.879948 (sd-merge)[1639]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 23:54:10.881017 (sd-merge)[1639]: Merged extensions into '/usr'.
Jan 23 23:54:10.892742 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 23:54:10.892777 systemd[1]: Reloading...
Jan 23 23:54:11.074304 zram_generator::config[1662]: No configuration found.
Jan 23 23:54:11.427109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:11.551308 systemd[1]: Reloading finished in 657 ms.
Jan 23 23:54:11.599327 ldconfig[1607]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 23:54:11.603395 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 23:54:11.607219 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 23:54:11.612083 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 23:54:11.633607 systemd[1]: Starting ensure-sysext.service...
Jan 23 23:54:11.638621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:54:11.655815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:11.676506 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit ensure-sysext.service)...
Jan 23 23:54:11.676537 systemd[1]: Reloading...
Jan 23 23:54:11.701495 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 23:54:11.702248 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 23:54:11.708666 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 23:54:11.709385 systemd-tmpfiles[1719]: ACLs are not supported, ignoring.
Jan 23 23:54:11.709560 systemd-tmpfiles[1719]: ACLs are not supported, ignoring.
Jan 23 23:54:11.722322 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:11.722350 systemd-tmpfiles[1719]: Skipping /boot
Jan 23 23:54:11.766120 systemd-udevd[1720]: Using default interface naming scheme 'v255'.
Jan 23 23:54:11.771032 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:11.771242 systemd-tmpfiles[1719]: Skipping /boot
Jan 23 23:54:11.948303 zram_generator::config[1771]: No configuration found.
Jan 23 23:54:12.174774 (udev-worker)[1770]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:12.431309 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1750)
Jan 23 23:54:12.461864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:12.635391 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 23:54:12.636562 systemd[1]: Reloading finished in 959 ms.
Jan 23 23:54:12.690428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:12.702364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:12.797171 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 23 23:54:12.803667 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 23:54:12.810704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 23:54:12.823672 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:12.833662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:12.844960 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 23:54:12.858598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:12.863967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:12.872488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:12.881936 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:12.884814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:12.890632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:12.891113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:12.901333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:12.905892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:12.913716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:12.914187 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 23:54:12.928390 systemd[1]: Finished ensure-sysext.service.
Jan 23 23:54:12.973395 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 23:54:12.995575 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 23:54:13.012735 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 23:54:13.021728 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 23:54:13.046523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:13.061980 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:13.062781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:13.082513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:13.083007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:13.088913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:13.132110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:13.132512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:13.135830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:13.150460 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 23:54:13.180173 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:13.182693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:13.227410 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 23:54:13.233113 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 23:54:13.239076 augenrules[1950]: No rules
Jan 23 23:54:13.245212 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 23 23:54:13.268859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:54:13.277364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 23 23:54:13.297611 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 23 23:54:13.303491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 23:54:13.308650 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 23:54:13.378375 lvm[1959]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:13.391670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 23:54:13.439829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:13.449390 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 23 23:54:13.455896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:13.469614 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 23 23:54:13.508430 lvm[1973]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:13.519471 systemd-networkd[1916]: lo: Link UP
Jan 23 23:54:13.519496 systemd-networkd[1916]: lo: Gained carrier
Jan 23 23:54:13.522844 systemd-networkd[1916]: Enumeration completed
Jan 23 23:54:13.523095 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:13.531547 systemd-networkd[1916]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:13.531573 systemd-networkd[1916]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:13.534207 systemd-networkd[1916]: eth0: Link UP
Jan 23 23:54:13.534680 systemd-networkd[1916]: eth0: Gained carrier
Jan 23 23:54:13.534744 systemd-networkd[1916]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:13.541554 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 23:54:13.551419 systemd-networkd[1916]: eth0: DHCPv4 address 172.31.31.113/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:13.567464 systemd-resolved[1917]: Positive Trust Anchors:
Jan 23 23:54:13.568069 systemd-resolved[1917]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:54:13.568299 systemd-resolved[1917]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:54:13.585704 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 23 23:54:13.596046 systemd-resolved[1917]: Defaulting to hostname 'linux'.
Jan 23 23:54:13.600248 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:13.603247 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:13.605788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:13.608676 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:13.611483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 23:54:13.614586 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 23:54:13.618015 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 23:54:13.620844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 23:54:13.623891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 23:54:13.626853 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 23:54:13.627057 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:54:13.629351 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:54:13.633284 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:54:13.639035 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:54:13.655475 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:54:13.659226 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:54:13.662548 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:54:13.665373 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:54:13.667778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:13.667853 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:13.675592 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:54:13.687790 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:54:13.696667 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:54:13.709682 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:54:13.717681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:54:13.720592 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:54:13.725811 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:54:13.734653 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:54:13.742536 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:54:13.748713 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 23 23:54:13.756181 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:54:13.764768 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:54:13.777721 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:54:13.784907 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:54:13.786128 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:54:13.793311 jq[1982]: false Jan 23 23:54:13.794737 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:54:13.803614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:54:13.814359 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:54:13.814816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 23 23:54:13.919612 extend-filesystems[1983]: Found loop4 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found loop5 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found loop6 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found loop7 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found nvme0n1 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found nvme0n1p1 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found nvme0n1p2 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found nvme0n1p3 Jan 23 23:54:13.919612 extend-filesystems[1983]: Found usr Jan 23 23:54:13.945379 extend-filesystems[1983]: Found nvme0n1p4 Jan 23 23:54:13.945379 extend-filesystems[1983]: Found nvme0n1p6 Jan 23 23:54:13.945379 extend-filesystems[1983]: Found nvme0n1p7 Jan 23 23:54:13.945379 extend-filesystems[1983]: Found nvme0n1p9 Jan 23 23:54:13.945379 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Jan 23 23:54:13.967200 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:13.971424 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:13.968320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: ---------------------------------------------------- Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: corporation. 
Support and training for ntp-4 are Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: available at https://www.nwtime.org/support Jan 23 23:54:13.975039 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: ---------------------------------------------------- Jan 23 23:54:13.972513 ntpd[1985]: ---------------------------------------------------- Jan 23 23:54:13.968757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:54:13.972567 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:13.972589 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:13.972609 ntpd[1985]: corporation. Support and training for ntp-4 are Jan 23 23:54:13.972629 ntpd[1985]: available at https://www.nwtime.org/support Jan 23 23:54:13.972650 ntpd[1985]: ---------------------------------------------------- Jan 23 23:54:13.984619 ntpd[1985]: proto: precision = 0.108 usec (-23) Jan 23 23:54:13.986490 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: proto: precision = 0.108 usec (-23) Jan 23 23:54:13.988695 ntpd[1985]: basedate set to 2026-01-11 Jan 23 23:54:13.990440 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: basedate set to 2026-01-11 Jan 23 23:54:13.990440 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:13.988754 ntpd[1985]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:13.992068 jq[1992]: true Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listen normally on 3 eth0 172.31.31.113:123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: 
bind(21) AF_INET6 fe80::40d:1aff:fe80:afaf%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: unable to create socket on eth0 (5) for fe80::40d:1aff:fe80:afaf%2#123 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: failed to init interface for address fe80::40d:1aff:fe80:afaf%2 Jan 23 23:54:14.010871 ntpd[1985]: 23 Jan 23:54:13 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:13.996592 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:13.996697 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:13.997082 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:13.997160 ntpd[1985]: Listen normally on 3 eth0 172.31.31.113:123 Jan 23 23:54:13.997232 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:13.997350 ntpd[1985]: bind(21) AF_INET6 fe80::40d:1aff:fe80:afaf%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:13.997392 ntpd[1985]: unable to create socket on eth0 (5) for fe80::40d:1aff:fe80:afaf%2#123 Jan 23 23:54:13.997421 ntpd[1985]: failed to init interface for address fe80::40d:1aff:fe80:afaf%2 Jan 23 23:54:13.997480 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:14.026244 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:14.045567 ntpd[1985]: 23 Jan 23:54:14 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:14.045567 ntpd[1985]: 23 Jan 23:54:14 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:14.038418 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:14.052565 (ntainerd)[2008]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:54:14.071432 tar[1996]: linux-arm64/LICENSE Jan 23 23:54:14.080664 tar[1996]: linux-arm64/helm Jan 23 23:54:14.076940 
dbus-daemon[1981]: [system] SELinux support is enabled Jan 23 23:54:14.074642 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:54:14.076944 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:54:14.080164 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:54:14.091515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:54:14.091608 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:54:14.094842 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:54:14.094884 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:54:14.098756 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:54:14.110713 jq[2016]: true Jan 23 23:54:14.115967 dbus-daemon[1981]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1916 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:14.119642 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:54:14.164362 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Jan 23 23:54:14.171660 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 23:54:14.192103 update_engine[1991]: I20260123 23:54:14.191654 1991 main.cc:92] Flatcar Update Engine starting Jan 23 23:54:14.200294 extend-filesystems[2034]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:54:14.221329 update_engine[1991]: I20260123 23:54:14.215934 1991 update_check_scheduler.cc:74] Next update check in 11m32s Jan 23 23:54:14.216397 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:54:14.226659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:54:14.232834 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:54:14.292960 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:54:14.293031 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:54:14.295320 systemd-logind[1990]: New seat seat0. Jan 23 23:54:14.303374 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:54:14.440794 bash[2056]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:14.446506 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:54:14.444538 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 23 23:54:14.457520 coreos-metadata[1980]: Jan 23 23:54:14.444 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.464 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.464 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.478 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetch failed with 404: resource not found Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetch successful Jan 23 23:54:14.482037 coreos-metadata[1980]: Jan 23 23:54:14.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 
23:54:14.470148 systemd[1]: Starting sshkeys.service... Jan 23 23:54:14.488215 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:54:14.488215 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:54:14.488215 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:54:14.506149 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:54:14.500484 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:54:14.528073 coreos-metadata[1980]: Jan 23 23:54:14.498 INFO Fetch successful Jan 23 23:54:14.528073 coreos-metadata[1980]: Jan 23 23:54:14.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:54:14.528073 coreos-metadata[1980]: Jan 23 23:54:14.498 INFO Fetch successful Jan 23 23:54:14.528073 coreos-metadata[1980]: Jan 23 23:54:14.498 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:54:14.528073 coreos-metadata[1980]: Jan 23 23:54:14.498 INFO Fetch successful Jan 23 23:54:14.500928 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:54:14.571645 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1770) Jan 23 23:54:14.652083 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:54:14.665344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:54:14.696040 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:54:14.705955 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:54:14.720070 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 23:54:14.737814 systemd-networkd[1916]: eth0: Gained IPv6LL Jan 23 23:54:14.760006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:54:14.770937 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:54:14.788070 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:54:14.804605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:14.815070 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:54:14.933292 coreos-metadata[2102]: Jan 23 23:54:14.929 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:14.933292 coreos-metadata[2102]: Jan 23 23:54:14.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:54:14.933292 coreos-metadata[2102]: Jan 23 23:54:14.931 INFO Fetch successful Jan 23 23:54:14.933292 coreos-metadata[2102]: Jan 23 23:54:14.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:54:14.933292 coreos-metadata[2102]: Jan 23 23:54:14.932 INFO Fetch successful Jan 23 23:54:14.937704 unknown[2102]: wrote ssh authorized keys file for user: core Jan 23 23:54:15.009689 locksmithd[2037]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:54:15.079767 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:54:15.080728 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2035 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:15.099924 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 23 23:54:15.168556 update-ssh-keys[2146]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:15.176204 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:54:15.184397 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:54:15.193357 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:54:15.238411 polkitd[2172]: Started polkitd version 121 Jan 23 23:54:15.241773 systemd[1]: Finished sshkeys.service. Jan 23 23:54:15.332501 polkitd[2172]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:54:15.332664 polkitd[2172]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:54:15.340758 polkitd[2172]: Finished loading, compiling and executing 2 rules Jan 23 23:54:15.351048 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:54:15.353599 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:54:15.359617 amazon-ssm-agent[2113]: Initializing new seelog logger Jan 23 23:54:15.360293 amazon-ssm-agent[2113]: New Seelog Logger Creation Complete Jan 23 23:54:15.360293 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.360293 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.361778 polkitd[2172]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:54:15.374956 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 processing appconfig overrides Jan 23 23:54:15.375733 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.375733 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 23:54:15.375958 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 processing appconfig overrides Jan 23 23:54:15.376324 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO Proxy environment variables: Jan 23 23:54:15.390656 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.390656 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.390840 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 processing appconfig overrides Jan 23 23:54:15.401307 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.401307 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:15.401307 amazon-ssm-agent[2113]: 2026/01/23 23:54:15 processing appconfig overrides Jan 23 23:54:15.444888 systemd-hostnamed[2035]: Hostname set to (transient) Jan 23 23:54:15.456860 systemd-resolved[1917]: System hostname changed to 'ip-172-31-31-113'. Jan 23 23:54:15.479994 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO https_proxy: Jan 23 23:54:15.497984 containerd[2008]: time="2026-01-23T23:54:15.497733876Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:54:15.580469 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO http_proxy: Jan 23 23:54:15.670509 containerd[2008]: time="2026-01-23T23:54:15.669559489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.680040 containerd[2008]: time="2026-01-23T23:54:15.679460797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:15.680040 containerd[2008]: time="2026-01-23T23:54:15.679541857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:54:15.680040 containerd[2008]: time="2026-01-23T23:54:15.679586029Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:54:15.680040 containerd[2008]: time="2026-01-23T23:54:15.679955677Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:54:15.680040 containerd[2008]: time="2026-01-23T23:54:15.680006197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.680414 containerd[2008]: time="2026-01-23T23:54:15.680174893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:15.680414 containerd[2008]: time="2026-01-23T23:54:15.680214589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682314 containerd[2008]: time="2026-01-23T23:54:15.680778853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682314 containerd[2008]: time="2026-01-23T23:54:15.680837953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682314 containerd[2008]: time="2026-01-23T23:54:15.680997229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682314 containerd[2008]: time="2026-01-23T23:54:15.681031789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682314 containerd[2008]: time="2026-01-23T23:54:15.681827077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.682629 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO no_proxy: Jan 23 23:54:15.683658 containerd[2008]: time="2026-01-23T23:54:15.682736161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:15.683658 containerd[2008]: time="2026-01-23T23:54:15.683049793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:15.683658 containerd[2008]: time="2026-01-23T23:54:15.683092897Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:54:15.683658 containerd[2008]: time="2026-01-23T23:54:15.683364973Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:54:15.683658 containerd[2008]: time="2026-01-23T23:54:15.683525161Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:54:15.705418 containerd[2008]: time="2026-01-23T23:54:15.705327313Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 23 23:54:15.705635 containerd[2008]: time="2026-01-23T23:54:15.705465721Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:54:15.705635 containerd[2008]: time="2026-01-23T23:54:15.705525625Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:54:15.705744 containerd[2008]: time="2026-01-23T23:54:15.705580801Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:54:15.705744 containerd[2008]: time="2026-01-23T23:54:15.705692425Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:54:15.707311 containerd[2008]: time="2026-01-23T23:54:15.706021705Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:54:15.707311 containerd[2008]: time="2026-01-23T23:54:15.706648897Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:54:15.712174 containerd[2008]: time="2026-01-23T23:54:15.712111021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.712808881Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.712877857Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.712918021Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.712956733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.712992061Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713027917Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713062549Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713095477Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713128165Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713157673Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:54:15.714101 containerd[2008]: time="2026-01-23T23:54:15.713218093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.713285521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715409521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715468045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715506409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715553473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715586449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715620097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715658485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715700461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715732861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715763305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715795453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.715845445Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.716091985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.717304 containerd[2008]: time="2026-01-23T23:54:15.716137489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.718094 containerd[2008]: time="2026-01-23T23:54:15.716169553Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722416873Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722489881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722518417Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722587945Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722614237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722656153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722686069Z" level=info msg="NRI interface is disabled by configuration." 
Jan 23 23:54:15.726330 containerd[2008]: time="2026-01-23T23:54:15.722715865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 23 23:54:15.726853 containerd[2008]: time="2026-01-23T23:54:15.723462073Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 23 23:54:15.726853 containerd[2008]: time="2026-01-23T23:54:15.723600949Z" level=info msg="Connect containerd service"
Jan 23 23:54:15.726853 containerd[2008]: time="2026-01-23T23:54:15.723673993Z" level=info msg="using legacy CRI server"
Jan 23 23:54:15.726853 containerd[2008]: time="2026-01-23T23:54:15.723698377Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 23:54:15.726853 containerd[2008]: time="2026-01-23T23:54:15.723857473Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 23 23:54:15.735678 containerd[2008]: time="2026-01-23T23:54:15.735611437Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.736649257Z" level=info msg="Start subscribing containerd event"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.736775881Z" level=info msg="Start recovering state"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.736939801Z" level=info msg="Start event monitor"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.736969741Z" level=info msg="Start snapshots syncer"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.736994257Z" level=info msg="Start cni network conf syncer for default"
Jan 23 23:54:15.739292 containerd[2008]: time="2026-01-23T23:54:15.737014105Z" level=info msg="Start streaming server"
Jan 23 23:54:15.740591 containerd[2008]: time="2026-01-23T23:54:15.739964089Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 23:54:15.740591 containerd[2008]: time="2026-01-23T23:54:15.740130085Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 23:54:15.740416 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 23:54:15.742403 containerd[2008]: time="2026-01-23T23:54:15.741314641Z" level=info msg="containerd successfully booted in 0.249323s"
Jan 23 23:54:15.793202 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO Checking if agent identity type OnPrem can be assumed
Jan 23 23:54:15.894471 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO Checking if agent identity type EC2 can be assumed
Jan 23 23:54:15.948388 sshd_keygen[2032]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 23:54:15.995312 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO Agent will take identity from EC2
Jan 23 23:54:16.064668 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 23:54:16.086748 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 23:54:16.092539 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:16.101603 systemd[1]: Started sshd@0-172.31.31.113:22-4.153.228.146:48692.service - OpenSSH per-connection server daemon (4.153.228.146:48692).
Jan 23 23:54:16.147003 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 23:54:16.147505 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 23:54:16.167903 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 23:54:16.190808 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:16.241560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 23:54:16.261465 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 23:54:16.274968 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 23:54:16.281016 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 23:54:16.291320 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 23 23:54:16.391456 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 23 23:54:16.493019 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 23 23:54:16.532554 tar[1996]: linux-arm64/README.md
Jan 23 23:54:16.564136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 23:54:16.594403 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] Starting Core Agent
Jan 23 23:54:16.694783 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 23 23:54:16.749816 sshd[2214]: Accepted publickey for core from 4.153.228.146 port 48692 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:16.756697 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:16.774863 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 23:54:16.793826 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 23:54:16.808372 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [Registrar] Starting registrar module
Jan 23 23:54:16.813518 systemd-logind[1990]: New session 1 of user core.
Jan 23 23:54:16.845230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 23:54:16.865997 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 23:54:16.893662 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 23:54:16.898850 amazon-ssm-agent[2113]: 2026-01-23 23:54:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 23 23:54:16.973628 ntpd[1985]: Listen normally on 6 eth0 [fe80::40d:1aff:fe80:afaf%2]:123
Jan 23 23:54:16.975878 ntpd[1985]: 23 Jan 23:54:16 ntpd[1985]: Listen normally on 6 eth0 [fe80::40d:1aff:fe80:afaf%2]:123
Jan 23 23:54:16.999831 amazon-ssm-agent[2113]: 2026-01-23 23:54:16 INFO [EC2Identity] EC2 registration was successful.
Jan 23 23:54:17.008841 amazon-ssm-agent[2113]: 2026-01-23 23:54:16 INFO [CredentialRefresher] credentialRefresher has started
Jan 23 23:54:17.008957 amazon-ssm-agent[2113]: 2026-01-23 23:54:16 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 23 23:54:17.008957 amazon-ssm-agent[2113]: 2026-01-23 23:54:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 23 23:54:17.100536 amazon-ssm-agent[2113]: 2026-01-23 23:54:17 INFO [CredentialRefresher] Next credential rotation will be in 30.416646000366665 minutes
Jan 23 23:54:17.158122 systemd[2228]: Queued start job for default target default.target.
Jan 23 23:54:17.167074 systemd[2228]: Created slice app.slice - User Application Slice.
Jan 23 23:54:17.167538 systemd[2228]: Reached target paths.target - Paths.
Jan 23 23:54:17.167576 systemd[2228]: Reached target timers.target - Timers.
Jan 23 23:54:17.170153 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 23:54:17.199825 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 23:54:17.200278 systemd[2228]: Reached target sockets.target - Sockets.
Jan 23 23:54:17.200471 systemd[2228]: Reached target basic.target - Basic System.
Jan 23 23:54:17.200563 systemd[2228]: Reached target default.target - Main User Target.
Jan 23 23:54:17.200629 systemd[2228]: Startup finished in 288ms.
Jan 23 23:54:17.200668 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 23:54:17.215592 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 23:54:17.620075 systemd[1]: Started sshd@1-172.31.31.113:22-4.153.228.146:52806.service - OpenSSH per-connection server daemon (4.153.228.146:52806).
Jan 23 23:54:18.038497 amazon-ssm-agent[2113]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 23 23:54:18.139916 amazon-ssm-agent[2113]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2242) started
Jan 23 23:54:18.169884 sshd[2239]: Accepted publickey for core from 4.153.228.146 port 52806 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:18.175928 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:18.188826 systemd-logind[1990]: New session 2 of user core.
Jan 23 23:54:18.199627 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 23:54:18.240306 amazon-ssm-agent[2113]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 23 23:54:18.490506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:18.495050 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 23:54:18.498631 systemd[1]: Startup finished in 1.217s (kernel) + 8.995s (initrd) + 10.731s (userspace) = 20.944s.
Jan 23 23:54:18.519954 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:18.549928 sshd[2239]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:18.557501 systemd[1]: sshd@1-172.31.31.113:22-4.153.228.146:52806.service: Deactivated successfully.
Jan 23 23:54:18.561155 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 23:54:18.565316 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit.
Jan 23 23:54:18.568226 systemd-logind[1990]: Removed session 2.
Jan 23 23:54:18.651843 systemd[1]: Started sshd@2-172.31.31.113:22-4.153.228.146:52820.service - OpenSSH per-connection server daemon (4.153.228.146:52820).
Jan 23 23:54:19.203451 sshd[2267]: Accepted publickey for core from 4.153.228.146 port 52820 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:19.206086 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:19.215643 systemd-logind[1990]: New session 3 of user core.
Jan 23 23:54:19.223783 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 23:54:19.593381 sshd[2267]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:19.599690 systemd[1]: sshd@2-172.31.31.113:22-4.153.228.146:52820.service: Deactivated successfully.
Jan 23 23:54:19.605930 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 23:54:19.610529 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit.
Jan 23 23:54:19.614488 systemd-logind[1990]: Removed session 3.
Jan 23 23:54:19.891083 kubelet[2259]: E0123 23:54:19.890887 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:19.896110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:19.896523 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:19.898388 systemd[1]: kubelet.service: Consumed 1.450s CPU time.
Jan 23 23:54:20.651461 systemd-resolved[1917]: Clock change detected. Flushing caches.
Jan 23 23:54:29.354509 systemd[1]: Started sshd@3-172.31.31.113:22-4.153.228.146:54738.service - OpenSSH per-connection server daemon (4.153.228.146:54738).
Jan 23 23:54:29.771946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:54:29.780358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:29.861619 sshd[2281]: Accepted publickey for core from 4.153.228.146 port 54738 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:29.863695 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:29.873325 systemd-logind[1990]: New session 4 of user core.
Jan 23 23:54:29.881307 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 23:54:30.117009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:30.132632 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:30.207680 kubelet[2292]: E0123 23:54:30.207579 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:30.215620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:30.216360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:30.219158 sshd[2281]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:30.225620 systemd[1]: sshd@3-172.31.31.113:22-4.153.228.146:54738.service: Deactivated successfully.
Jan 23 23:54:30.231100 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 23:54:30.232412 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit.
Jan 23 23:54:30.234472 systemd-logind[1990]: Removed session 4.
Jan 23 23:54:30.313560 systemd[1]: Started sshd@4-172.31.31.113:22-4.153.228.146:54740.service - OpenSSH per-connection server daemon (4.153.228.146:54740).
Jan 23 23:54:30.822490 sshd[2303]: Accepted publickey for core from 4.153.228.146 port 54740 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:30.825198 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:30.832552 systemd-logind[1990]: New session 5 of user core.
Jan 23 23:54:30.844233 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 23:54:31.175434 sshd[2303]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:31.181639 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit.
Jan 23 23:54:31.183518 systemd[1]: sshd@4-172.31.31.113:22-4.153.228.146:54740.service: Deactivated successfully.
Jan 23 23:54:31.187950 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 23:54:31.190417 systemd-logind[1990]: Removed session 5.
Jan 23 23:54:31.268226 systemd[1]: Started sshd@5-172.31.31.113:22-4.153.228.146:54746.service - OpenSSH per-connection server daemon (4.153.228.146:54746).
Jan 23 23:54:31.781701 sshd[2310]: Accepted publickey for core from 4.153.228.146 port 54746 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:31.784348 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:31.791571 systemd-logind[1990]: New session 6 of user core.
Jan 23 23:54:31.802237 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 23:54:32.144239 sshd[2310]: pam_unix(sshd:session): session closed for user core
Jan 23 23:54:32.149824 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit.
Jan 23 23:54:32.150607 systemd[1]: sshd@5-172.31.31.113:22-4.153.228.146:54746.service: Deactivated successfully.
Jan 23 23:54:32.154603 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 23:54:32.158890 systemd-logind[1990]: Removed session 6.
Jan 23 23:54:32.252456 systemd[1]: Started sshd@6-172.31.31.113:22-4.153.228.146:54762.service - OpenSSH per-connection server daemon (4.153.228.146:54762).
Jan 23 23:54:32.779345 sshd[2317]: Accepted publickey for core from 4.153.228.146 port 54762 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:54:32.781939 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:54:32.789292 systemd-logind[1990]: New session 7 of user core.
Jan 23 23:54:32.798229 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 23:54:33.090835 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 23:54:33.092155 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 23:54:33.722888 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 23:54:33.722889 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 23:54:34.228791 dockerd[2335]: time="2026-01-23T23:54:34.228686087Z" level=info msg="Starting up"
Jan 23 23:54:34.447401 systemd[1]: var-lib-docker-metacopy\x2dcheck2220721085-merged.mount: Deactivated successfully.
Jan 23 23:54:34.465669 dockerd[2335]: time="2026-01-23T23:54:34.465609324Z" level=info msg="Loading containers: start."
Jan 23 23:54:34.669041 kernel: Initializing XFRM netlink socket
Jan 23 23:54:34.735133 (udev-worker)[2358]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:34.835300 systemd-networkd[1916]: docker0: Link UP
Jan 23 23:54:34.872528 dockerd[2335]: time="2026-01-23T23:54:34.872379770Z" level=info msg="Loading containers: done."
Jan 23 23:54:34.904658 dockerd[2335]: time="2026-01-23T23:54:34.904516886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 23:54:34.905046 dockerd[2335]: time="2026-01-23T23:54:34.904736402Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 23 23:54:34.905046 dockerd[2335]: time="2026-01-23T23:54:34.904928342Z" level=info msg="Daemon has completed initialization"
Jan 23 23:54:34.979104 dockerd[2335]: time="2026-01-23T23:54:34.978475035Z" level=info msg="API listen on /run/docker.sock"
Jan 23 23:54:34.980109 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 23:54:36.222036 containerd[2008]: time="2026-01-23T23:54:36.221620837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 23:54:36.879961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262194032.mount: Deactivated successfully.
Jan 23 23:54:38.398085 containerd[2008]: time="2026-01-23T23:54:38.397954012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.400171 containerd[2008]: time="2026-01-23T23:54:38.400117444Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281"
Jan 23 23:54:38.401029 containerd[2008]: time="2026-01-23T23:54:38.400528840Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.406516 containerd[2008]: time="2026-01-23T23:54:38.406422376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:38.408901 containerd[2008]: time="2026-01-23T23:54:38.408847552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.187163775s"
Jan 23 23:54:38.410169 containerd[2008]: time="2026-01-23T23:54:38.409079164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Jan 23 23:54:38.412570 containerd[2008]: time="2026-01-23T23:54:38.412521976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 23 23:54:40.110134 containerd[2008]: time="2026-01-23T23:54:40.109959196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.112498 containerd[2008]: time="2026-01-23T23:54:40.111760768Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081"
Jan 23 23:54:40.113568 containerd[2008]: time="2026-01-23T23:54:40.113509192Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.121724 containerd[2008]: time="2026-01-23T23:54:40.121641160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:40.124876 containerd[2008]: time="2026-01-23T23:54:40.123921052Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.711338056s"
Jan 23 23:54:40.124876 containerd[2008]: time="2026-01-23T23:54:40.124001632Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Jan 23 23:54:40.125412 containerd[2008]: time="2026-01-23T23:54:40.125370088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 23 23:54:40.284540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 23:54:40.291368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:40.628732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:40.647606 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:40.732407 kubelet[2546]: E0123 23:54:40.732315 2546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:40.737496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:40.737854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:41.638501 containerd[2008]: time="2026-01-23T23:54:41.638399480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:41.640628 containerd[2008]: time="2026-01-23T23:54:41.640555964Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067"
Jan 23 23:54:41.642572 containerd[2008]: time="2026-01-23T23:54:41.641606552Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:41.647461 containerd[2008]: time="2026-01-23T23:54:41.647406884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:41.649868 containerd[2008]: time="2026-01-23T23:54:41.649806080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.524258452s"
Jan 23 23:54:41.649963 containerd[2008]: time="2026-01-23T23:54:41.649864748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Jan 23 23:54:41.650576 containerd[2008]: time="2026-01-23T23:54:41.650495720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 23:54:42.931555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021055356.mount: Deactivated successfully.
Jan 23 23:54:43.516127 containerd[2008]: time="2026-01-23T23:54:43.516063465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:43.518070 containerd[2008]: time="2026-01-23T23:54:43.518018301Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Jan 23 23:54:43.518433 containerd[2008]: time="2026-01-23T23:54:43.518365929Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:43.522028 containerd[2008]: time="2026-01-23T23:54:43.521952633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:43.523712 containerd[2008]: time="2026-01-23T23:54:43.523516677Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.872964005s"
Jan 23 23:54:43.523712 containerd[2008]: time="2026-01-23T23:54:43.523576161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Jan 23 23:54:43.525034 containerd[2008]: time="2026-01-23T23:54:43.524957457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 23:54:44.038947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736942254.mount: Deactivated successfully.
Jan 23 23:54:45.154844 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 23:54:45.295462 containerd[2008]: time="2026-01-23T23:54:45.295379542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.297816 containerd[2008]: time="2026-01-23T23:54:45.297744898Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jan 23 23:54:45.300009 containerd[2008]: time="2026-01-23T23:54:45.299897038Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.306298 containerd[2008]: time="2026-01-23T23:54:45.306220918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.309116 containerd[2008]: time="2026-01-23T23:54:45.308597818Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.783556001s"
Jan 23 23:54:45.309116 containerd[2008]: time="2026-01-23T23:54:45.308658490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jan 23 23:54:45.309317 containerd[2008]: time="2026-01-23T23:54:45.309288778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 23:54:45.816139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225332923.mount: Deactivated successfully.
Jan 23 23:54:45.828826 containerd[2008]: time="2026-01-23T23:54:45.828747973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.830731 containerd[2008]: time="2026-01-23T23:54:45.830663545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 23 23:54:45.833296 containerd[2008]: time="2026-01-23T23:54:45.833223673Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.838536 containerd[2008]: time="2026-01-23T23:54:45.838440253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:45.840392 containerd[2008]: time="2026-01-23T23:54:45.840210769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.875887ms"
Jan 23 23:54:45.840392 containerd[2008]: time="2026-01-23T23:54:45.840263413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 23 23:54:45.842030 containerd[2008]: time="2026-01-23T23:54:45.841512253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 23:54:46.390346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895520479.mount: Deactivated successfully.
Jan 23 23:54:49.328132 containerd[2008]: time="2026-01-23T23:54:49.328050986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:49.330442 containerd[2008]: time="2026-01-23T23:54:49.330368078Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Jan 23 23:54:49.332484 containerd[2008]: time="2026-01-23T23:54:49.332395430Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:49.339140 containerd[2008]: time="2026-01-23T23:54:49.339061682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:54:49.342013 containerd[2008]: time="2026-01-23T23:54:49.341632298Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.500067593s"
Jan 23 23:54:49.342013 containerd[2008]: time="2026-01-23T23:54:49.341702990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jan 23 23:54:50.784575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 23:54:50.797156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:51.143406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:51.154116 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:54:51.234022 kubelet[2708]: E0123 23:54:51.233168 2708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:54:51.237108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:54:51.237585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:54:58.453916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:58.469532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:58.530962 systemd[1]: Reloading requested from client PID 2722 ('systemctl') (unit session-7.scope)...
Jan 23 23:54:58.531247 systemd[1]: Reloading...
Jan 23 23:54:58.767025 zram_generator::config[2765]: No configuration found.
Jan 23 23:54:59.011132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:59.186678 systemd[1]: Reloading finished in 654 ms. Jan 23 23:54:59.246124 update_engine[1991]: I20260123 23:54:59.246048 1991 update_attempter.cc:509] Updating boot flags... Jan 23 23:54:59.294324 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:54:59.294533 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:54:59.295236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:59.305832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:59.372533 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2834) Jan 23 23:54:59.732035 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2835) Jan 23 23:55:00.045057 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2835) Jan 23 23:55:00.145437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:00.147508 (kubelet)[3044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:00.311167 kubelet[3044]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:00.311167 kubelet[3044]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:00.311167 kubelet[3044]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:55:00.311694 kubelet[3044]: I0123 23:55:00.311517 3044 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:01.639948 kubelet[3044]: I0123 23:55:01.639745 3044 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:55:01.639948 kubelet[3044]: I0123 23:55:01.639864 3044 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:01.641283 kubelet[3044]: I0123 23:55:01.640688 3044 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:55:01.699439 kubelet[3044]: E0123 23:55:01.699290 3044 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:55:01.702787 kubelet[3044]: I0123 23:55:01.702659 3044 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:01.725418 kubelet[3044]: E0123 23:55:01.725205 3044 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:01.725418 kubelet[3044]: I0123 23:55:01.725389 3044 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:01.742682 kubelet[3044]: I0123 23:55:01.742572 3044 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:55:01.744690 kubelet[3044]: I0123 23:55:01.744555 3044 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:01.746195 kubelet[3044]: I0123 23:55:01.744677 3044 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:55:01.746626 kubelet[3044]: I0123 23:55:01.746427 3044 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
23:55:01.746626 kubelet[3044]: I0123 23:55:01.746514 3044 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:55:01.747307 kubelet[3044]: I0123 23:55:01.747229 3044 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:01.760096 kubelet[3044]: I0123 23:55:01.759963 3044 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:55:01.760096 kubelet[3044]: I0123 23:55:01.760083 3044 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:01.763025 kubelet[3044]: I0123 23:55:01.761839 3044 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:55:01.763025 kubelet[3044]: I0123 23:55:01.762050 3044 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:01.769559 kubelet[3044]: E0123 23:55:01.768707 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-113&limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:55:01.770320 kubelet[3044]: I0123 23:55:01.770245 3044 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:01.772321 kubelet[3044]: I0123 23:55:01.772247 3044 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:55:01.774070 kubelet[3044]: W0123 23:55:01.773078 3044 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 23:55:01.787697 kubelet[3044]: E0123 23:55:01.787542 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:55:01.788198 kubelet[3044]: I0123 23:55:01.788146 3044 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:55:01.789131 kubelet[3044]: I0123 23:55:01.789077 3044 server.go:1289] "Started kubelet" Jan 23 23:55:01.795129 kubelet[3044]: I0123 23:55:01.794930 3044 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:01.801375 kubelet[3044]: E0123 23:55:01.797853 3044 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.113:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-113.188d81658d468bb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-113,UID:ip-172-31-31-113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-113,},FirstTimestamp:2026-01-23 23:55:01.788363704 +0000 UTC m=+1.620510069,LastTimestamp:2026-01-23 23:55:01.788363704 +0000 UTC m=+1.620510069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-113,}" Jan 23 23:55:01.810044 kubelet[3044]: I0123 23:55:01.808817 3044 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:55:01.810471 kubelet[3044]: I0123 23:55:01.810179 3044 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:55:01.810780 kubelet[3044]: E0123 23:55:01.810680 3044 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-113\" not found" Jan 23 23:55:01.811584 kubelet[3044]: I0123 23:55:01.811493 3044 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:55:01.811764 kubelet[3044]: I0123 23:55:01.811671 3044 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:55:01.812628 kubelet[3044]: I0123 23:55:01.812559 3044 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:55:01.814160 kubelet[3044]: E0123 23:55:01.814080 3044 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:01.821383 kubelet[3044]: I0123 23:55:01.821284 3044 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:01.822560 kubelet[3044]: I0123 23:55:01.822514 3044 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:55:01.823876 kubelet[3044]: I0123 23:55:01.823825 3044 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:01.828628 kubelet[3044]: E0123 23:55:01.828546 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:55:01.828877 kubelet[3044]: E0123 23:55:01.828795 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": dial tcp 172.31.31.113:6443: connect: connection refused" 
interval="200ms" Jan 23 23:55:01.835497 kubelet[3044]: I0123 23:55:01.835406 3044 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:55:01.835497 kubelet[3044]: I0123 23:55:01.835501 3044 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:55:01.835860 kubelet[3044]: I0123 23:55:01.835706 3044 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:01.852871 kubelet[3044]: I0123 23:55:01.852589 3044 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:01.855154 kubelet[3044]: I0123 23:55:01.855099 3044 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:55:01.855333 kubelet[3044]: I0123 23:55:01.855313 3044 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:55:01.855502 kubelet[3044]: I0123 23:55:01.855478 3044 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:55:01.855621 kubelet[3044]: I0123 23:55:01.855600 3044 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:55:01.855850 kubelet[3044]: E0123 23:55:01.855810 3044 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:01.873701 kubelet[3044]: E0123 23:55:01.873635 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:55:01.889021 kubelet[3044]: I0123 23:55:01.888572 3044 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:01.889021 kubelet[3044]: I0123 23:55:01.888606 3044 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:01.889021 kubelet[3044]: I0123 23:55:01.888643 3044 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:01.894970 kubelet[3044]: I0123 23:55:01.894480 3044 policy_none.go:49] "None policy: Start" Jan 23 23:55:01.894970 kubelet[3044]: I0123 23:55:01.894519 3044 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:55:01.894970 kubelet[3044]: I0123 23:55:01.894542 3044 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:55:01.906515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:55:01.911189 kubelet[3044]: E0123 23:55:01.911139 3044 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-113\" not found" Jan 23 23:55:01.921519 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 23:55:01.929567 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:55:01.939085 kubelet[3044]: E0123 23:55:01.939029 3044 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:55:01.939372 kubelet[3044]: I0123 23:55:01.939337 3044 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:01.939480 kubelet[3044]: I0123 23:55:01.939374 3044 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:01.939966 kubelet[3044]: I0123 23:55:01.939928 3044 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:01.941563 kubelet[3044]: E0123 23:55:01.941527 3044 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:55:01.941869 kubelet[3044]: E0123 23:55:01.941833 3044 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-113\" not found" Jan 23 23:55:01.978327 systemd[1]: Created slice kubepods-burstable-podca243271cf0554ffa71ccff18099a819.slice - libcontainer container kubepods-burstable-podca243271cf0554ffa71ccff18099a819.slice. Jan 23 23:55:01.998833 kubelet[3044]: E0123 23:55:01.998788 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:02.005323 systemd[1]: Created slice kubepods-burstable-pod1a620fcda989f9be99c3cbb99c9cf54f.slice - libcontainer container kubepods-burstable-pod1a620fcda989f9be99c3cbb99c9cf54f.slice. 
Jan 23 23:55:02.013138 kubelet[3044]: I0123 23:55:02.012546 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:02.013138 kubelet[3044]: I0123 23:55:02.012602 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:02.013138 kubelet[3044]: I0123 23:55:02.012645 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:02.013138 kubelet[3044]: I0123 23:55:02.012705 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:02.013138 kubelet[3044]: I0123 23:55:02.012748 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: 
\"ca243271cf0554ffa71ccff18099a819\") " pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:02.013503 kubelet[3044]: I0123 23:55:02.012789 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: \"ca243271cf0554ffa71ccff18099a819\") " pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:02.013503 kubelet[3044]: I0123 23:55:02.012828 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:02.013503 kubelet[3044]: I0123 23:55:02.012882 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5ae28126efcf2233f0b0abc40876f3-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-113\" (UID: \"ec5ae28126efcf2233f0b0abc40876f3\") " pod="kube-system/kube-scheduler-ip-172-31-31-113" Jan 23 23:55:02.013503 kubelet[3044]: I0123 23:55:02.012938 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-ca-certs\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: \"ca243271cf0554ffa71ccff18099a819\") " pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:02.016907 kubelet[3044]: E0123 23:55:02.016570 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:02.021895 systemd[1]: Created slice 
kubepods-burstable-podec5ae28126efcf2233f0b0abc40876f3.slice - libcontainer container kubepods-burstable-podec5ae28126efcf2233f0b0abc40876f3.slice. Jan 23 23:55:02.027168 kubelet[3044]: E0123 23:55:02.027126 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:02.030426 kubelet[3044]: E0123 23:55:02.030365 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": dial tcp 172.31.31.113:6443: connect: connection refused" interval="400ms" Jan 23 23:55:02.041567 kubelet[3044]: I0123 23:55:02.041511 3044 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:02.042421 kubelet[3044]: E0123 23:55:02.042360 3044 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.113:6443/api/v1/nodes\": dial tcp 172.31.31.113:6443: connect: connection refused" node="ip-172-31-31-113" Jan 23 23:55:02.245080 kubelet[3044]: I0123 23:55:02.244905 3044 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:02.245461 kubelet[3044]: E0123 23:55:02.245417 3044 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.113:6443/api/v1/nodes\": dial tcp 172.31.31.113:6443: connect: connection refused" node="ip-172-31-31-113" Jan 23 23:55:02.301836 containerd[2008]: time="2026-01-23T23:55:02.301763954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-113,Uid:ca243271cf0554ffa71ccff18099a819,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:02.319881 containerd[2008]: time="2026-01-23T23:55:02.319805210Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-113,Uid:1a620fcda989f9be99c3cbb99c9cf54f,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:02.335673 containerd[2008]: time="2026-01-23T23:55:02.335298195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-113,Uid:ec5ae28126efcf2233f0b0abc40876f3,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:02.431455 kubelet[3044]: E0123 23:55:02.431375 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": dial tcp 172.31.31.113:6443: connect: connection refused" interval="800ms" Jan 23 23:55:02.648721 kubelet[3044]: I0123 23:55:02.648653 3044 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:02.649529 kubelet[3044]: E0123 23:55:02.649297 3044 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.113:6443/api/v1/nodes\": dial tcp 172.31.31.113:6443: connect: connection refused" node="ip-172-31-31-113" Jan 23 23:55:02.725947 kubelet[3044]: E0123 23:55:02.725870 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:55:02.726764 kubelet[3044]: E0123 23:55:02.726698 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:55:02.821551 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3691637985.mount: Deactivated successfully. Jan 23 23:55:02.836695 containerd[2008]: time="2026-01-23T23:55:02.836614457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:02.838876 containerd[2008]: time="2026-01-23T23:55:02.838803893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:02.841039 containerd[2008]: time="2026-01-23T23:55:02.840623789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:55:02.842774 containerd[2008]: time="2026-01-23T23:55:02.842706569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:55:02.844961 containerd[2008]: time="2026-01-23T23:55:02.844890233Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:02.847929 containerd[2008]: time="2026-01-23T23:55:02.847665629Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:02.849368 containerd[2008]: time="2026-01-23T23:55:02.849257933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:55:02.853725 containerd[2008]: time="2026-01-23T23:55:02.853640789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:02.858046 containerd[2008]: time="2026-01-23T23:55:02.857720045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.797859ms" Jan 23 23:55:02.861998 containerd[2008]: time="2026-01-23T23:55:02.861900845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.019483ms" Jan 23 23:55:02.880059 containerd[2008]: time="2026-01-23T23:55:02.879780797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.373054ms" Jan 23 23:55:02.897721 kubelet[3044]: E0123 23:55:02.897632 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-113&limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:55:02.909733 kubelet[3044]: E0123 23:55:02.909564 3044 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.113:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:55:03.163302 containerd[2008]: time="2026-01-23T23:55:03.163003803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:03.164077 containerd[2008]: time="2026-01-23T23:55:03.163142079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:03.164363 containerd[2008]: time="2026-01-23T23:55:03.164236887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.166973 containerd[2008]: time="2026-01-23T23:55:03.166811655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:03.169409 containerd[2008]: time="2026-01-23T23:55:03.166934619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:03.169409 containerd[2008]: time="2026-01-23T23:55:03.169308435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.170636 containerd[2008]: time="2026-01-23T23:55:03.170562327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.170636 containerd[2008]: time="2026-01-23T23:55:03.170162091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:03.170636 containerd[2008]: time="2026-01-23T23:55:03.170266755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:03.170636 containerd[2008]: time="2026-01-23T23:55:03.170304291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.170636 containerd[2008]: time="2026-01-23T23:55:03.170446287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.171533 containerd[2008]: time="2026-01-23T23:55:03.171402387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.231337 systemd[1]: Started cri-containerd-42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346.scope - libcontainer container 42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346. Jan 23 23:55:03.233838 kubelet[3044]: E0123 23:55:03.232601 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": dial tcp 172.31.31.113:6443: connect: connection refused" interval="1.6s" Jan 23 23:55:03.236587 systemd[1]: Started cri-containerd-fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660.scope - libcontainer container fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660. Jan 23 23:55:03.254348 systemd[1]: Started cri-containerd-bc122f3207726c104787c26a156b198a606c1e8fe747fc790a7f3c59e03bd1f1.scope - libcontainer container bc122f3207726c104787c26a156b198a606c1e8fe747fc790a7f3c59e03bd1f1. 
Jan 23 23:55:03.353966 containerd[2008]: time="2026-01-23T23:55:03.353912440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-113,Uid:ec5ae28126efcf2233f0b0abc40876f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346\"" Jan 23 23:55:03.371190 containerd[2008]: time="2026-01-23T23:55:03.369931048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-113,Uid:1a620fcda989f9be99c3cbb99c9cf54f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660\"" Jan 23 23:55:03.372701 containerd[2008]: time="2026-01-23T23:55:03.372525076Z" level=info msg="CreateContainer within sandbox \"42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:55:03.383651 containerd[2008]: time="2026-01-23T23:55:03.383506936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-113,Uid:ca243271cf0554ffa71ccff18099a819,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc122f3207726c104787c26a156b198a606c1e8fe747fc790a7f3c59e03bd1f1\"" Jan 23 23:55:03.387917 containerd[2008]: time="2026-01-23T23:55:03.387397948Z" level=info msg="CreateContainer within sandbox \"fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:55:03.398362 containerd[2008]: time="2026-01-23T23:55:03.397944940Z" level=info msg="CreateContainer within sandbox \"bc122f3207726c104787c26a156b198a606c1e8fe747fc790a7f3c59e03bd1f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:55:03.416288 containerd[2008]: time="2026-01-23T23:55:03.416117056Z" level=info msg="CreateContainer within sandbox \"42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004\"" Jan 23 23:55:03.418994 containerd[2008]: time="2026-01-23T23:55:03.418786972Z" level=info msg="StartContainer for \"aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004\"" Jan 23 23:55:03.425044 containerd[2008]: time="2026-01-23T23:55:03.424346116Z" level=info msg="CreateContainer within sandbox \"fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0\"" Jan 23 23:55:03.427225 containerd[2008]: time="2026-01-23T23:55:03.426192928Z" level=info msg="StartContainer for \"e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0\"" Jan 23 23:55:03.441346 containerd[2008]: time="2026-01-23T23:55:03.441291268Z" level=info msg="CreateContainer within sandbox \"bc122f3207726c104787c26a156b198a606c1e8fe747fc790a7f3c59e03bd1f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f2bb857d3b43a5fab820fdf062ebee33ea612c68090b50f21b5d081537ffacc\"" Jan 23 23:55:03.442208 containerd[2008]: time="2026-01-23T23:55:03.442113988Z" level=info msg="StartContainer for \"7f2bb857d3b43a5fab820fdf062ebee33ea612c68090b50f21b5d081537ffacc\"" Jan 23 23:55:03.452243 kubelet[3044]: I0123 23:55:03.452197 3044 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:03.453873 kubelet[3044]: E0123 23:55:03.453809 3044 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.113:6443/api/v1/nodes\": dial tcp 172.31.31.113:6443: connect: connection refused" node="ip-172-31-31-113" Jan 23 23:55:03.487363 systemd[1]: Started cri-containerd-aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004.scope - libcontainer container 
aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004. Jan 23 23:55:03.512341 systemd[1]: Started cri-containerd-e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0.scope - libcontainer container e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0. Jan 23 23:55:03.539156 systemd[1]: Started cri-containerd-7f2bb857d3b43a5fab820fdf062ebee33ea612c68090b50f21b5d081537ffacc.scope - libcontainer container 7f2bb857d3b43a5fab820fdf062ebee33ea612c68090b50f21b5d081537ffacc. Jan 23 23:55:03.624102 containerd[2008]: time="2026-01-23T23:55:03.623587217Z" level=info msg="StartContainer for \"e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0\" returns successfully" Jan 23 23:55:03.656391 containerd[2008]: time="2026-01-23T23:55:03.655404389Z" level=info msg="StartContainer for \"aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004\" returns successfully" Jan 23 23:55:03.688150 containerd[2008]: time="2026-01-23T23:55:03.687928061Z" level=info msg="StartContainer for \"7f2bb857d3b43a5fab820fdf062ebee33ea612c68090b50f21b5d081537ffacc\" returns successfully" Jan 23 23:55:03.732599 kubelet[3044]: E0123 23:55:03.731703 3044 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:55:03.901240 kubelet[3044]: E0123 23:55:03.900817 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:03.901809 kubelet[3044]: E0123 23:55:03.901774 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" 
node="ip-172-31-31-113" Jan 23 23:55:03.910180 kubelet[3044]: E0123 23:55:03.910053 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:04.912044 kubelet[3044]: E0123 23:55:04.911756 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:04.920706 kubelet[3044]: E0123 23:55:04.919597 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:04.920706 kubelet[3044]: E0123 23:55:04.920334 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:05.057049 kubelet[3044]: I0123 23:55:05.056868 3044 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:05.913662 kubelet[3044]: E0123 23:55:05.913609 3044 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-113\" not found" node="ip-172-31-31-113" Jan 23 23:55:07.763753 kubelet[3044]: I0123 23:55:07.763462 3044 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-113" Jan 23 23:55:07.763753 kubelet[3044]: E0123 23:55:07.763522 3044 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-113\": node \"ip-172-31-31-113\" not found" Jan 23 23:55:07.784005 kubelet[3044]: I0123 23:55:07.783678 3044 apiserver.go:52] "Watching apiserver" Jan 23 23:55:07.812199 kubelet[3044]: I0123 23:55:07.812063 3044 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:55:07.812366 kubelet[3044]: I0123 
23:55:07.812249 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:07.870402 kubelet[3044]: E0123 23:55:07.870331 3044 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:07.870402 kubelet[3044]: I0123 23:55:07.870382 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:07.876508 kubelet[3044]: E0123 23:55:07.876400 3044 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:07.877626 kubelet[3044]: I0123 23:55:07.876449 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-113" Jan 23 23:55:07.888302 kubelet[3044]: E0123 23:55:07.888242 3044 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-113" Jan 23 23:55:10.165503 systemd[1]: Reloading requested from client PID 3375 ('systemctl') (unit session-7.scope)... Jan 23 23:55:10.165535 systemd[1]: Reloading... Jan 23 23:55:10.363028 zram_generator::config[3424]: No configuration found. Jan 23 23:55:10.583635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:10.791270 systemd[1]: Reloading finished in 624 ms. Jan 23 23:55:10.877478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 23:55:10.894891 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:55:10.896136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:10.896233 systemd[1]: kubelet.service: Consumed 2.311s CPU time, 127.5M memory peak, 0B memory swap peak. Jan 23 23:55:10.906437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:11.231850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:11.252667 (kubelet)[3475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:11.363198 kubelet[3475]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:11.363198 kubelet[3475]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:11.363198 kubelet[3475]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:55:11.363198 kubelet[3475]: I0123 23:55:11.363006 3475 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:11.379036 kubelet[3475]: I0123 23:55:11.377846 3475 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:55:11.379036 kubelet[3475]: I0123 23:55:11.377893 3475 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:11.379036 kubelet[3475]: I0123 23:55:11.378300 3475 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:55:11.381135 kubelet[3475]: I0123 23:55:11.381009 3475 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:55:11.386068 kubelet[3475]: I0123 23:55:11.386022 3475 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:11.394567 kubelet[3475]: E0123 23:55:11.394510 3475 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:11.394567 kubelet[3475]: I0123 23:55:11.394567 3475 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:11.403882 kubelet[3475]: I0123 23:55:11.403192 3475 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:55:11.403882 kubelet[3475]: I0123 23:55:11.403661 3475 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:11.406575 kubelet[3475]: I0123 23:55:11.403714 3475 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:55:11.406575 kubelet[3475]: I0123 23:55:11.406526 3475 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
23:55:11.406575 kubelet[3475]: I0123 23:55:11.406551 3475 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:55:11.406920 kubelet[3475]: I0123 23:55:11.406645 3475 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:11.406920 kubelet[3475]: I0123 23:55:11.406915 3475 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:55:11.407072 kubelet[3475]: I0123 23:55:11.406946 3475 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:11.412319 kubelet[3475]: I0123 23:55:11.412125 3475 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:55:11.412319 kubelet[3475]: I0123 23:55:11.412195 3475 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:11.418327 kubelet[3475]: I0123 23:55:11.418274 3475 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:11.425477 kubelet[3475]: I0123 23:55:11.425419 3475 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:55:11.438536 kubelet[3475]: I0123 23:55:11.438434 3475 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:55:11.438536 kubelet[3475]: I0123 23:55:11.438507 3475 server.go:1289] "Started kubelet" Jan 23 23:55:11.451543 kubelet[3475]: I0123 23:55:11.451293 3475 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:11.453641 kubelet[3475]: I0123 23:55:11.453493 3475 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:55:11.459234 kubelet[3475]: I0123 23:55:11.454941 3475 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:11.459806 kubelet[3475]: I0123 23:55:11.459497 3475 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 
23:55:11.468031 kubelet[3475]: I0123 23:55:11.467534 3475 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:11.477448 kubelet[3475]: I0123 23:55:11.475695 3475 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:55:11.477448 kubelet[3475]: E0123 23:55:11.475911 3475 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-113\" not found" Jan 23 23:55:11.480894 kubelet[3475]: I0123 23:55:11.478541 3475 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:55:11.480894 kubelet[3475]: I0123 23:55:11.480403 3475 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:55:11.513168 kubelet[3475]: I0123 23:55:11.511564 3475 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:55:11.529605 kubelet[3475]: I0123 23:55:11.529568 3475 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:55:11.529966 kubelet[3475]: I0123 23:55:11.529931 3475 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:11.533786 kubelet[3475]: I0123 23:55:11.533752 3475 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:55:11.538857 kubelet[3475]: E0123 23:55:11.538647 3475 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:11.574095 kubelet[3475]: I0123 23:55:11.573959 3475 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:11.584768 kubelet[3475]: I0123 23:55:11.584725 3475 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 23:55:11.584768 kubelet[3475]: I0123 23:55:11.584815 3475 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:55:11.584768 kubelet[3475]: I0123 23:55:11.584853 3475 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:55:11.585347 kubelet[3475]: I0123 23:55:11.585187 3475 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:55:11.585698 kubelet[3475]: E0123 23:55:11.585479 3475 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:11.667070 kubelet[3475]: I0123 23:55:11.666157 3475 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:11.667070 kubelet[3475]: I0123 23:55:11.666185 3475 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:11.667070 kubelet[3475]: I0123 23:55:11.666221 3475 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667479 3475 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667508 3475 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667539 3475 policy_none.go:49] "None policy: Start" Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667557 3475 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667578 3475 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:55:11.668029 kubelet[3475]: I0123 23:55:11.667750 3475 state_mem.go:75] "Updated machine memory state" Jan 23 23:55:11.678210 kubelet[3475]: E0123 23:55:11.678118 3475 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:55:11.679694 kubelet[3475]: I0123 
23:55:11.678487 3475 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:11.679694 kubelet[3475]: I0123 23:55:11.678526 3475 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:11.679694 kubelet[3475]: I0123 23:55:11.679046 3475 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:11.689041 kubelet[3475]: E0123 23:55:11.688876 3475 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:55:11.691390 kubelet[3475]: I0123 23:55:11.691336 3475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-113" Jan 23 23:55:11.692150 kubelet[3475]: I0123 23:55:11.692105 3475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:11.694620 kubelet[3475]: I0123 23:55:11.692670 3475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.783442 kubelet[3475]: I0123 23:55:11.782644 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.785712 kubelet[3475]: I0123 23:55:11.784343 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.785712 kubelet[3475]: I0123 
23:55:11.784404 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.785712 kubelet[3475]: I0123 23:55:11.784445 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: \"ca243271cf0554ffa71ccff18099a819\") " pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:11.785712 kubelet[3475]: I0123 23:55:11.784481 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.785712 kubelet[3475]: I0123 23:55:11.784518 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5ae28126efcf2233f0b0abc40876f3-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-113\" (UID: \"ec5ae28126efcf2233f0b0abc40876f3\") " pod="kube-system/kube-scheduler-ip-172-31-31-113" Jan 23 23:55:11.786089 kubelet[3475]: I0123 23:55:11.784553 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-ca-certs\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: \"ca243271cf0554ffa71ccff18099a819\") " 
pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:11.786089 kubelet[3475]: I0123 23:55:11.784588 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca243271cf0554ffa71ccff18099a819-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-113\" (UID: \"ca243271cf0554ffa71ccff18099a819\") " pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:11.786089 kubelet[3475]: I0123 23:55:11.784635 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a620fcda989f9be99c3cbb99c9cf54f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-113\" (UID: \"1a620fcda989f9be99c3cbb99c9cf54f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-113" Jan 23 23:55:11.809134 kubelet[3475]: I0123 23:55:11.807747 3475 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-113" Jan 23 23:55:11.830908 kubelet[3475]: I0123 23:55:11.830866 3475 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-113" Jan 23 23:55:11.831208 kubelet[3475]: I0123 23:55:11.831188 3475 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-113" Jan 23 23:55:12.420039 kubelet[3475]: I0123 23:55:12.417968 3475 apiserver.go:52] "Watching apiserver" Jan 23 23:55:12.480461 kubelet[3475]: I0123 23:55:12.480380 3475 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:55:12.634162 kubelet[3475]: I0123 23:55:12.633952 3475 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 23:55:12.651714 kubelet[3475]: E0123 23:55:12.651579 3475 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-113\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-113" Jan 23 
23:55:12.702944 kubelet[3475]: I0123 23:55:12.702777 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-113" podStartSLOduration=1.702755018 podStartE2EDuration="1.702755018s" podCreationTimestamp="2026-01-23 23:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:12.673323986 +0000 UTC m=+1.406970452" watchObservedRunningTime="2026-01-23 23:55:12.702755018 +0000 UTC m=+1.436401496" Jan 23 23:55:12.743448 kubelet[3475]: I0123 23:55:12.742198 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-113" podStartSLOduration=1.742174394 podStartE2EDuration="1.742174394s" podCreationTimestamp="2026-01-23 23:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:12.703515746 +0000 UTC m=+1.437162212" watchObservedRunningTime="2026-01-23 23:55:12.742174394 +0000 UTC m=+1.475820872" Jan 23 23:55:12.766514 kubelet[3475]: I0123 23:55:12.766401 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-113" podStartSLOduration=1.7663818500000001 podStartE2EDuration="1.76638185s" podCreationTimestamp="2026-01-23 23:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:12.742753094 +0000 UTC m=+1.476399608" watchObservedRunningTime="2026-01-23 23:55:12.76638185 +0000 UTC m=+1.500028340" Jan 23 23:55:13.210251 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:13.293747 sshd[2317]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:13.301862 systemd[1]: sshd@6-172.31.31.113:22-4.153.228.146:54762.service: Deactivated successfully. 
Jan 23 23:55:13.305886 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:55:13.306748 systemd[1]: session-7.scope: Consumed 11.144s CPU time, 154.1M memory peak, 0B memory swap peak. Jan 23 23:55:13.310097 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:55:13.312564 systemd-logind[1990]: Removed session 7. Jan 23 23:55:16.475162 kubelet[3475]: I0123 23:55:16.474738 3475 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:55:16.476711 containerd[2008]: time="2026-01-23T23:55:16.476168753Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:55:16.480461 kubelet[3475]: I0123 23:55:16.476583 3475 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:55:17.610329 systemd[1]: Created slice kubepods-besteffort-podbbc59438_76ef_4472_a058_6e3385c00df3.slice - libcontainer container kubepods-besteffort-podbbc59438_76ef_4472_a058_6e3385c00df3.slice. 
Jan 23 23:55:17.625958 kubelet[3475]: I0123 23:55:17.625890 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbc59438-76ef-4472-a058-6e3385c00df3-lib-modules\") pod \"kube-proxy-dtxjm\" (UID: \"bbc59438-76ef-4472-a058-6e3385c00df3\") " pod="kube-system/kube-proxy-dtxjm" Jan 23 23:55:17.626535 kubelet[3475]: I0123 23:55:17.625968 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngk6d\" (UniqueName: \"kubernetes.io/projected/bbc59438-76ef-4472-a058-6e3385c00df3-kube-api-access-ngk6d\") pod \"kube-proxy-dtxjm\" (UID: \"bbc59438-76ef-4472-a058-6e3385c00df3\") " pod="kube-system/kube-proxy-dtxjm" Jan 23 23:55:17.626535 kubelet[3475]: I0123 23:55:17.626072 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbc59438-76ef-4472-a058-6e3385c00df3-kube-proxy\") pod \"kube-proxy-dtxjm\" (UID: \"bbc59438-76ef-4472-a058-6e3385c00df3\") " pod="kube-system/kube-proxy-dtxjm" Jan 23 23:55:17.626535 kubelet[3475]: I0123 23:55:17.626110 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbc59438-76ef-4472-a058-6e3385c00df3-xtables-lock\") pod \"kube-proxy-dtxjm\" (UID: \"bbc59438-76ef-4472-a058-6e3385c00df3\") " pod="kube-system/kube-proxy-dtxjm" Jan 23 23:55:17.650798 systemd[1]: Created slice kubepods-burstable-pod32897ab4_0e8d_42d3_b058_fa2f3ae4762b.slice - libcontainer container kubepods-burstable-pod32897ab4_0e8d_42d3_b058_fa2f3ae4762b.slice. 
Jan 23 23:55:17.727150 kubelet[3475]: I0123 23:55:17.727087 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-xtables-lock\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.727337 kubelet[3475]: I0123 23:55:17.727198 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-run\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.727337 kubelet[3475]: I0123 23:55:17.727263 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-cni-plugin\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.727455 kubelet[3475]: I0123 23:55:17.727326 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-flannel-cfg\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.727455 kubelet[3475]: I0123 23:55:17.727378 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6jk\" (UniqueName: \"kubernetes.io/projected/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-kube-api-access-hn6jk\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.727455 kubelet[3475]: I0123 
23:55:17.727440 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/32897ab4-0e8d-42d3-b058-fa2f3ae4762b-cni\") pod \"kube-flannel-ds-qvhbb\" (UID: \"32897ab4-0e8d-42d3-b058-fa2f3ae4762b\") " pod="kube-flannel/kube-flannel-ds-qvhbb" Jan 23 23:55:17.936063 containerd[2008]: time="2026-01-23T23:55:17.935203184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtxjm,Uid:bbc59438-76ef-4472-a058-6e3385c00df3,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:17.961151 containerd[2008]: time="2026-01-23T23:55:17.956397320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qvhbb,Uid:32897ab4-0e8d-42d3-b058-fa2f3ae4762b,Namespace:kube-flannel,Attempt:0,}" Jan 23 23:55:18.003177 containerd[2008]: time="2026-01-23T23:55:18.002870500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:18.003177 containerd[2008]: time="2026-01-23T23:55:18.003022552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:18.003177 containerd[2008]: time="2026-01-23T23:55:18.003084316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:18.006971 containerd[2008]: time="2026-01-23T23:55:18.006524920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:18.039596 containerd[2008]: time="2026-01-23T23:55:18.039414041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:18.042112 containerd[2008]: time="2026-01-23T23:55:18.039872765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:18.042335 containerd[2008]: time="2026-01-23T23:55:18.042046397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:18.042335 containerd[2008]: time="2026-01-23T23:55:18.042260885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:18.054343 systemd[1]: Started cri-containerd-524f44f786262ab84e524cae6894aba13fefac2fa3e22fa14e8e7d136c9e19fd.scope - libcontainer container 524f44f786262ab84e524cae6894aba13fefac2fa3e22fa14e8e7d136c9e19fd. Jan 23 23:55:18.109283 systemd[1]: Started cri-containerd-49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b.scope - libcontainer container 49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b. Jan 23 23:55:18.143478 containerd[2008]: time="2026-01-23T23:55:18.143399597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtxjm,Uid:bbc59438-76ef-4472-a058-6e3385c00df3,Namespace:kube-system,Attempt:0,} returns sandbox id \"524f44f786262ab84e524cae6894aba13fefac2fa3e22fa14e8e7d136c9e19fd\"" Jan 23 23:55:18.160141 containerd[2008]: time="2026-01-23T23:55:18.159443597Z" level=info msg="CreateContainer within sandbox \"524f44f786262ab84e524cae6894aba13fefac2fa3e22fa14e8e7d136c9e19fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:55:18.213947 containerd[2008]: time="2026-01-23T23:55:18.212748893Z" level=info msg="CreateContainer within sandbox \"524f44f786262ab84e524cae6894aba13fefac2fa3e22fa14e8e7d136c9e19fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c78ffe2b8671d1a09021f68186b932bb3863afac6821bfe470ffc5676840b41\"" Jan 23 23:55:18.213947 containerd[2008]: time="2026-01-23T23:55:18.213393965Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-qvhbb,Uid:32897ab4-0e8d-42d3-b058-fa2f3ae4762b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\"" Jan 23 23:55:18.213947 containerd[2008]: time="2026-01-23T23:55:18.213912929Z" level=info msg="StartContainer for \"3c78ffe2b8671d1a09021f68186b932bb3863afac6821bfe470ffc5676840b41\"" Jan 23 23:55:18.229590 containerd[2008]: time="2026-01-23T23:55:18.229174949Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 23:55:18.280917 systemd[1]: Started cri-containerd-3c78ffe2b8671d1a09021f68186b932bb3863afac6821bfe470ffc5676840b41.scope - libcontainer container 3c78ffe2b8671d1a09021f68186b932bb3863afac6821bfe470ffc5676840b41. Jan 23 23:55:18.348844 containerd[2008]: time="2026-01-23T23:55:18.348631854Z" level=info msg="StartContainer for \"3c78ffe2b8671d1a09021f68186b932bb3863afac6821bfe470ffc5676840b41\" returns successfully" Jan 23 23:55:19.076432 kubelet[3475]: I0123 23:55:19.075790 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dtxjm" podStartSLOduration=2.075769974 podStartE2EDuration="2.075769974s" podCreationTimestamp="2026-01-23 23:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:18.67116548 +0000 UTC m=+7.404811970" watchObservedRunningTime="2026-01-23 23:55:19.075769974 +0000 UTC m=+7.809416464" Jan 23 23:55:19.608199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833356342.mount: Deactivated successfully. 
Jan 23 23:55:19.711089 containerd[2008]: time="2026-01-23T23:55:19.710302965Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.712789 containerd[2008]: time="2026-01-23T23:55:19.712715205Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564" Jan 23 23:55:19.714762 containerd[2008]: time="2026-01-23T23:55:19.714668565Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.721390 containerd[2008]: time="2026-01-23T23:55:19.721308009Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:19.723889 containerd[2008]: time="2026-01-23T23:55:19.723830193Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.494572672s" Jan 23 23:55:19.724505 containerd[2008]: time="2026-01-23T23:55:19.724040649Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jan 23 23:55:19.733364 containerd[2008]: time="2026-01-23T23:55:19.732949581Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 23:55:19.760172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525748661.mount: Deactivated successfully. Jan 23 23:55:19.761423 containerd[2008]: time="2026-01-23T23:55:19.761340417Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b\"" Jan 23 23:55:19.765002 containerd[2008]: time="2026-01-23T23:55:19.763611561Z" level=info msg="StartContainer for \"3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b\"" Jan 23 23:55:19.829331 systemd[1]: Started cri-containerd-3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b.scope - libcontainer container 3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b. Jan 23 23:55:19.879677 containerd[2008]: time="2026-01-23T23:55:19.878708806Z" level=info msg="StartContainer for \"3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b\" returns successfully" Jan 23 23:55:19.884933 systemd[1]: cri-containerd-3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b.scope: Deactivated successfully. Jan 23 23:55:19.932418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b-rootfs.mount: Deactivated successfully. 
Jan 23 23:55:19.964449 containerd[2008]: time="2026-01-23T23:55:19.964358950Z" level=info msg="shim disconnected" id=3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b namespace=k8s.io Jan 23 23:55:19.964449 containerd[2008]: time="2026-01-23T23:55:19.964441930Z" level=warning msg="cleaning up after shim disconnected" id=3c0cf3dd24cebf8fcb799fae78b79f608b5f5c7a7b1297a467b8a41c45ccec2b namespace=k8s.io Jan 23 23:55:19.965183 containerd[2008]: time="2026-01-23T23:55:19.964463806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:20.672878 containerd[2008]: time="2026-01-23T23:55:20.672058930Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 23:55:23.159049 containerd[2008]: time="2026-01-23T23:55:23.157701526Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.160524 containerd[2008]: time="2026-01-23T23:55:23.160452070Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854" Jan 23 23:55:23.162566 containerd[2008]: time="2026-01-23T23:55:23.162474622Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.172561 containerd[2008]: time="2026-01-23T23:55:23.172477666Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.176095 containerd[2008]: time="2026-01-23T23:55:23.175438258Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest 
\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 2.502551748s" Jan 23 23:55:23.176095 containerd[2008]: time="2026-01-23T23:55:23.175506502Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jan 23 23:55:23.187646 containerd[2008]: time="2026-01-23T23:55:23.187287298Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:55:23.215261 containerd[2008]: time="2026-01-23T23:55:23.215178838Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204\"" Jan 23 23:55:23.218000 containerd[2008]: time="2026-01-23T23:55:23.216290854Z" level=info msg="StartContainer for \"f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204\"" Jan 23 23:55:23.271300 systemd[1]: Started cri-containerd-f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204.scope - libcontainer container f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204. Jan 23 23:55:23.319779 systemd[1]: cri-containerd-f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204.scope: Deactivated successfully. 
Jan 23 23:55:23.322875 containerd[2008]: time="2026-01-23T23:55:23.322617371Z" level=info msg="StartContainer for \"f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204\" returns successfully" Jan 23 23:55:23.352040 kubelet[3475]: I0123 23:55:23.350844 3475 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:55:23.390229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204-rootfs.mount: Deactivated successfully. Jan 23 23:55:23.507718 systemd[1]: Created slice kubepods-burstable-pod0d61d400_4f1e_4be5_bdd5_dbcdee64514d.slice - libcontainer container kubepods-burstable-pod0d61d400_4f1e_4be5_bdd5_dbcdee64514d.slice. Jan 23 23:55:23.536056 systemd[1]: Created slice kubepods-burstable-pod81da4f83_8e59_44f6_8f34_742aea468f5c.slice - libcontainer container kubepods-burstable-pod81da4f83_8e59_44f6_8f34_742aea468f5c.slice. Jan 23 23:55:23.553794 containerd[2008]: time="2026-01-23T23:55:23.553448856Z" level=info msg="shim disconnected" id=f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204 namespace=k8s.io Jan 23 23:55:23.553794 containerd[2008]: time="2026-01-23T23:55:23.553521984Z" level=warning msg="cleaning up after shim disconnected" id=f91ff6f06f9efc79fdda8a20f2186221c8a9a5da1ec628f9735b08cf94eb0204 namespace=k8s.io Jan 23 23:55:23.553794 containerd[2008]: time="2026-01-23T23:55:23.553541832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:23.570775 kubelet[3475]: I0123 23:55:23.570669 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnst\" (UniqueName: \"kubernetes.io/projected/0d61d400-4f1e-4be5-bdd5-dbcdee64514d-kube-api-access-4mnst\") pod \"coredns-674b8bbfcf-qd4ts\" (UID: \"0d61d400-4f1e-4be5-bdd5-dbcdee64514d\") " pod="kube-system/coredns-674b8bbfcf-qd4ts" Jan 23 23:55:23.570775 kubelet[3475]: I0123 23:55:23.570749 3475 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81da4f83-8e59-44f6-8f34-742aea468f5c-config-volume\") pod \"coredns-674b8bbfcf-dmhz5\" (UID: \"81da4f83-8e59-44f6-8f34-742aea468f5c\") " pod="kube-system/coredns-674b8bbfcf-dmhz5" Jan 23 23:55:23.571169 kubelet[3475]: I0123 23:55:23.570797 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj225\" (UniqueName: \"kubernetes.io/projected/81da4f83-8e59-44f6-8f34-742aea468f5c-kube-api-access-mj225\") pod \"coredns-674b8bbfcf-dmhz5\" (UID: \"81da4f83-8e59-44f6-8f34-742aea468f5c\") " pod="kube-system/coredns-674b8bbfcf-dmhz5" Jan 23 23:55:23.571169 kubelet[3475]: I0123 23:55:23.570833 3475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d61d400-4f1e-4be5-bdd5-dbcdee64514d-config-volume\") pod \"coredns-674b8bbfcf-qd4ts\" (UID: \"0d61d400-4f1e-4be5-bdd5-dbcdee64514d\") " pod="kube-system/coredns-674b8bbfcf-qd4ts" Jan 23 23:55:23.695334 containerd[2008]: time="2026-01-23T23:55:23.694590205Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 23:55:23.723364 containerd[2008]: time="2026-01-23T23:55:23.723179509Z" level=info msg="CreateContainer within sandbox \"49f96b740edfec0ec6680d1e67178f29a39aca533828106800f43032c5ee8b6b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"051bf521db516807b2c7fe2496e359c2a0315fe157329e1789d07bcedd9d6fad\"" Jan 23 23:55:23.727394 containerd[2008]: time="2026-01-23T23:55:23.727297489Z" level=info msg="StartContainer for \"051bf521db516807b2c7fe2496e359c2a0315fe157329e1789d07bcedd9d6fad\"" Jan 23 23:55:23.779329 systemd[1]: Started 
cri-containerd-051bf521db516807b2c7fe2496e359c2a0315fe157329e1789d07bcedd9d6fad.scope - libcontainer container 051bf521db516807b2c7fe2496e359c2a0315fe157329e1789d07bcedd9d6fad. Jan 23 23:55:23.827796 containerd[2008]: time="2026-01-23T23:55:23.827308441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qd4ts,Uid:0d61d400-4f1e-4be5-bdd5-dbcdee64514d,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:23.835121 containerd[2008]: time="2026-01-23T23:55:23.835039801Z" level=info msg="StartContainer for \"051bf521db516807b2c7fe2496e359c2a0315fe157329e1789d07bcedd9d6fad\" returns successfully" Jan 23 23:55:23.848704 containerd[2008]: time="2026-01-23T23:55:23.848609569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmhz5,Uid:81da4f83-8e59-44f6-8f34-742aea468f5c,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:23.935159 containerd[2008]: time="2026-01-23T23:55:23.934851182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qd4ts,Uid:0d61d400-4f1e-4be5-bdd5-dbcdee64514d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1325729ab8c83293ddc67c28093fc32e259971d80d04109ebe2c6cf9ab7bd83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:23.936081 kubelet[3475]: E0123 23:55:23.935745 3475 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1325729ab8c83293ddc67c28093fc32e259971d80d04109ebe2c6cf9ab7bd83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:23.936253 kubelet[3475]: E0123 23:55:23.936180 3475 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e1325729ab8c83293ddc67c28093fc32e259971d80d04109ebe2c6cf9ab7bd83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-qd4ts" Jan 23 23:55:23.936320 kubelet[3475]: E0123 23:55:23.936250 3475 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1325729ab8c83293ddc67c28093fc32e259971d80d04109ebe2c6cf9ab7bd83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-qd4ts" Jan 23 23:55:23.936806 kubelet[3475]: E0123 23:55:23.936734 3475 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qd4ts_kube-system(0d61d400-4f1e-4be5-bdd5-dbcdee64514d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qd4ts_kube-system(0d61d400-4f1e-4be5-bdd5-dbcdee64514d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1325729ab8c83293ddc67c28093fc32e259971d80d04109ebe2c6cf9ab7bd83\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-qd4ts" podUID="0d61d400-4f1e-4be5-bdd5-dbcdee64514d" Jan 23 23:55:23.965816 containerd[2008]: time="2026-01-23T23:55:23.965671526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmhz5,Uid:81da4f83-8e59-44f6-8f34-742aea468f5c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21cc7bb4d280dcdead60a3e30e7162e6b5a39f362864a2f3b34bfaa9287d4061\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:23.966226 kubelet[3475]: E0123 23:55:23.966166 3475 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cc7bb4d280dcdead60a3e30e7162e6b5a39f362864a2f3b34bfaa9287d4061\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 23:55:23.966315 kubelet[3475]: E0123 23:55:23.966257 3475 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cc7bb4d280dcdead60a3e30e7162e6b5a39f362864a2f3b34bfaa9287d4061\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-dmhz5" Jan 23 23:55:23.966315 kubelet[3475]: E0123 23:55:23.966293 3475 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cc7bb4d280dcdead60a3e30e7162e6b5a39f362864a2f3b34bfaa9287d4061\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-dmhz5" Jan 23 23:55:23.966440 kubelet[3475]: E0123 23:55:23.966377 3475 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dmhz5_kube-system(81da4f83-8e59-44f6-8f34-742aea468f5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dmhz5_kube-system(81da4f83-8e59-44f6-8f34-742aea468f5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21cc7bb4d280dcdead60a3e30e7162e6b5a39f362864a2f3b34bfaa9287d4061\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-dmhz5" podUID="81da4f83-8e59-44f6-8f34-742aea468f5c" Jan 23 23:55:24.989396 (udev-worker)[4040]: Network interface 
NamePolicy= disabled on kernel command line. Jan 23 23:55:25.008907 systemd-networkd[1916]: flannel.1: Link UP Jan 23 23:55:25.008922 systemd-networkd[1916]: flannel.1: Gained carrier Jan 23 23:55:26.351217 systemd-networkd[1916]: flannel.1: Gained IPv6LL Jan 23 23:55:28.651305 ntpd[1985]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 23 23:55:28.652090 ntpd[1985]: 23 Jan 23:55:28 ntpd[1985]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 23 23:55:28.652090 ntpd[1985]: 23 Jan 23:55:28 ntpd[1985]: Listen normally on 8 flannel.1 [fe80::490:63ff:fecb:157e%4]:123 Jan 23 23:55:28.651433 ntpd[1985]: Listen normally on 8 flannel.1 [fe80::490:63ff:fecb:157e%4]:123 Jan 23 23:55:34.586731 containerd[2008]: time="2026-01-23T23:55:34.586642871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qd4ts,Uid:0d61d400-4f1e-4be5-bdd5-dbcdee64514d,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:34.645272 systemd-networkd[1916]: cni0: Link UP Jan 23 23:55:34.645288 systemd-networkd[1916]: cni0: Gained carrier Jan 23 23:55:34.650295 (udev-worker)[4131]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:34.651484 systemd-networkd[1916]: cni0: Lost carrier Jan 23 23:55:34.665754 systemd-networkd[1916]: vethd00a06f8: Link UP Jan 23 23:55:34.675576 kernel: cni0: port 1(vethd00a06f8) entered blocking state Jan 23 23:55:34.676232 kernel: cni0: port 1(vethd00a06f8) entered disabled state Jan 23 23:55:34.676292 kernel: vethd00a06f8: entered allmulticast mode Jan 23 23:55:34.682506 kernel: vethd00a06f8: entered promiscuous mode Jan 23 23:55:34.682585 kernel: cni0: port 1(vethd00a06f8) entered blocking state Jan 23 23:55:34.682663 kernel: cni0: port 1(vethd00a06f8) entered forwarding state Jan 23 23:55:34.688124 kernel: cni0: port 1(vethd00a06f8) entered disabled state Jan 23 23:55:34.690270 (udev-worker)[4137]: Network interface NamePolicy= disabled on kernel command line. 
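The repeated loadFlannelSubnetEnv failures above occur because the flannel CNI plugin reads /run/flannel/subnet.env before the flannel daemon has written it; the CoreDNS sandboxes succeed once flannel.1 comes up and the file exists. The file is simple KEY=VALUE lines. A minimal parsing sketch; the sample values are assumptions inferred from the bridge config later in this log (cluster route 192.168.0.0/17, node subnet 192.168.0.0/24, MTU 8951), and `parse_subnet_env` is our illustrative helper:

```python
# Hypothetical contents of /run/flannel/subnet.env after flanneld starts;
# values are inferred from this log, not copied from the node.
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text):
    """Parse the KEY=VALUE format the flannel CNI plugin expects."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

print(parse_subnet_env(SAMPLE)["FLANNEL_SUBNET"])  # → 192.168.0.1/24
```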
Jan 23 23:55:34.708273 kernel: cni0: port 1(vethd00a06f8) entered blocking state Jan 23 23:55:34.708382 kernel: cni0: port 1(vethd00a06f8) entered forwarding state Jan 23 23:55:34.708866 systemd-networkd[1916]: vethd00a06f8: Gained carrier Jan 23 23:55:34.710252 systemd-networkd[1916]: cni0: Gained carrier Jan 23 23:55:34.719136 containerd[2008]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001027f0), "name":"cbr0", "type":"bridge"} Jan 23 23:55:34.719136 containerd[2008]: delegateAdd: netconf sent to delegate plugin: Jan 23 23:55:34.759263 containerd[2008]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T23:55:34.757916796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:34.759442 containerd[2008]: time="2026-01-23T23:55:34.758842212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:34.759442 containerd[2008]: time="2026-01-23T23:55:34.758884812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:34.759442 containerd[2008]: time="2026-01-23T23:55:34.759072660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:34.801330 systemd[1]: Started cri-containerd-fd25e7b8185365d25ec747e8bbe7b442bfbcf2b4a809f7cf56e7ac0ceaaa10e1.scope - libcontainer container fd25e7b8185365d25ec747e8bbe7b442bfbcf2b4a809f7cf56e7ac0ceaaa10e1. Jan 23 23:55:34.866141 containerd[2008]: time="2026-01-23T23:55:34.865851660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qd4ts,Uid:0d61d400-4f1e-4be5-bdd5-dbcdee64514d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd25e7b8185365d25ec747e8bbe7b442bfbcf2b4a809f7cf56e7ac0ceaaa10e1\"" Jan 23 23:55:34.878909 containerd[2008]: time="2026-01-23T23:55:34.878843424Z" level=info msg="CreateContainer within sandbox \"fd25e7b8185365d25ec747e8bbe7b442bfbcf2b4a809f7cf56e7ac0ceaaa10e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:34.904945 containerd[2008]: time="2026-01-23T23:55:34.904783488Z" level=info msg="CreateContainer within sandbox \"fd25e7b8185365d25ec747e8bbe7b442bfbcf2b4a809f7cf56e7ac0ceaaa10e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7f5dbb6ff71508780284f9c83cafa758ac0379b4561414b3a03a6189228b9a3\"" Jan 23 23:55:34.906861 containerd[2008]: time="2026-01-23T23:55:34.906766704Z" level=info msg="StartContainer for \"d7f5dbb6ff71508780284f9c83cafa758ac0379b4561414b3a03a6189228b9a3\"" Jan 23 23:55:34.964326 systemd[1]: Started cri-containerd-d7f5dbb6ff71508780284f9c83cafa758ac0379b4561414b3a03a6189228b9a3.scope - libcontainer container d7f5dbb6ff71508780284f9c83cafa758ac0379b4561414b3a03a6189228b9a3. 
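The delegateAdd lines above show the netconf that flannel hands to the bridge plugin: host-local IPAM over the node's /24 subnet, a route to the wider cluster network, and the flannel-derived MTU. A hedged sketch of that mapping; `bridge_delegate` is our helper name, and the field values mirror the config printed in this log:

```python
import json

def bridge_delegate(subnet, network, mtu):
    """Build a bridge-plugin netconf shaped like the delegateAdd
    payload in the log: host-local IPAM on the node subnet, with a
    route to the cluster-wide network."""
    return {
        "cniVersion": "0.3.1",
        "name": "cbr0",
        "type": "bridge",
        "mtu": mtu,
        "hairpinMode": True,
        "ipMasq": False,
        "isGateway": True,
        "isDefaultGateway": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": subnet}]],
            "routes": [{"dst": network}],
        },
    }

conf = bridge_delegate("192.168.0.0/24", "192.168.0.0/17", 8951)
print(json.dumps(conf["ipam"], sort_keys=True))
```

The MTU of 8951 fits a 9001-byte AWS jumbo-frame interface minus the 50-byte VXLAN overhead of the flannel.1 device seen earlier in the log.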
Jan 23 23:55:35.012143 containerd[2008]: time="2026-01-23T23:55:35.011910657Z" level=info msg="StartContainer for \"d7f5dbb6ff71508780284f9c83cafa758ac0379b4561414b3a03a6189228b9a3\" returns successfully" Jan 23 23:55:35.752196 kubelet[3475]: I0123 23:55:35.751597 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-qvhbb" podStartSLOduration=13.796143668 podStartE2EDuration="18.751576933s" podCreationTimestamp="2026-01-23 23:55:17 +0000 UTC" firstStartedPulling="2026-01-23 23:55:18.223305377 +0000 UTC m=+6.956951843" lastFinishedPulling="2026-01-23 23:55:23.17873863 +0000 UTC m=+11.912385108" observedRunningTime="2026-01-23 23:55:24.714810266 +0000 UTC m=+13.448456768" watchObservedRunningTime="2026-01-23 23:55:35.751576933 +0000 UTC m=+24.485223423" Jan 23 23:55:35.752196 kubelet[3475]: I0123 23:55:35.751837 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qd4ts" podStartSLOduration=18.751826053 podStartE2EDuration="18.751826053s" podCreationTimestamp="2026-01-23 23:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:35.747156325 +0000 UTC m=+24.480802815" watchObservedRunningTime="2026-01-23 23:55:35.751826053 +0000 UTC m=+24.485472555" Jan 23 23:55:36.271155 systemd-networkd[1916]: cni0: Gained IPv6LL Jan 23 23:55:36.335271 systemd-networkd[1916]: vethd00a06f8: Gained IPv6LL Jan 23 23:55:38.587381 containerd[2008]: time="2026-01-23T23:55:38.587239587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmhz5,Uid:81da4f83-8e59-44f6-8f34-742aea468f5c,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:38.633188 systemd-networkd[1916]: veth6623d0fa: Link UP Jan 23 23:55:38.637652 kernel: cni0: port 2(veth6623d0fa) entered blocking state Jan 23 23:55:38.637781 kernel: cni0: port 2(veth6623d0fa) entered disabled state Jan 23 
23:55:38.637824 kernel: veth6623d0fa: entered allmulticast mode Jan 23 23:55:38.639974 kernel: veth6623d0fa: entered promiscuous mode Jan 23 23:55:38.641271 (udev-worker)[4274]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:38.651042 kernel: cni0: port 2(veth6623d0fa) entered blocking state Jan 23 23:55:38.651121 kernel: cni0: port 2(veth6623d0fa) entered forwarding state Jan 23 23:55:38.651411 systemd-networkd[1916]: veth6623d0fa: Gained carrier Jan 23 23:55:38.665377 containerd[2008]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000082950), "name":"cbr0", "type":"bridge"} Jan 23 23:55:38.665377 containerd[2008]: delegateAdd: netconf sent to delegate plugin: Jan 23 23:55:38.705009 containerd[2008]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-23T23:55:38.704824251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:38.705316 containerd[2008]: time="2026-01-23T23:55:38.704958543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:38.705316 containerd[2008]: time="2026-01-23T23:55:38.705029907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:38.705316 containerd[2008]: time="2026-01-23T23:55:38.705215151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:38.752330 systemd[1]: Started cri-containerd-0a8a97bf8da1a6f4ece8b2f433bf56dbbb4ba4a989e6f326ed01089f937634b9.scope - libcontainer container 0a8a97bf8da1a6f4ece8b2f433bf56dbbb4ba4a989e6f326ed01089f937634b9. Jan 23 23:55:38.812287 containerd[2008]: time="2026-01-23T23:55:38.812213560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmhz5,Uid:81da4f83-8e59-44f6-8f34-742aea468f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a8a97bf8da1a6f4ece8b2f433bf56dbbb4ba4a989e6f326ed01089f937634b9\"" Jan 23 23:55:38.822897 containerd[2008]: time="2026-01-23T23:55:38.822709480Z" level=info msg="CreateContainer within sandbox \"0a8a97bf8da1a6f4ece8b2f433bf56dbbb4ba4a989e6f326ed01089f937634b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:38.851567 containerd[2008]: time="2026-01-23T23:55:38.851299276Z" level=info msg="CreateContainer within sandbox \"0a8a97bf8da1a6f4ece8b2f433bf56dbbb4ba4a989e6f326ed01089f937634b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7effafd19bd8ce85ca63f5ca9c67b0857564ca1526edfaa77c6983a7bbcf3aa\"" Jan 23 23:55:38.855030 containerd[2008]: time="2026-01-23T23:55:38.854556436Z" level=info msg="StartContainer for \"d7effafd19bd8ce85ca63f5ca9c67b0857564ca1526edfaa77c6983a7bbcf3aa\"" Jan 23 23:55:38.898321 systemd[1]: Started cri-containerd-d7effafd19bd8ce85ca63f5ca9c67b0857564ca1526edfaa77c6983a7bbcf3aa.scope - libcontainer container d7effafd19bd8ce85ca63f5ca9c67b0857564ca1526edfaa77c6983a7bbcf3aa. 
Jan 23 23:55:38.948089 containerd[2008]: time="2026-01-23T23:55:38.947861884Z" level=info msg="StartContainer for \"d7effafd19bd8ce85ca63f5ca9c67b0857564ca1526edfaa77c6983a7bbcf3aa\" returns successfully" Jan 23 23:55:39.764621 kubelet[3475]: I0123 23:55:39.763827 3475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dmhz5" podStartSLOduration=22.763806532 podStartE2EDuration="22.763806532s" podCreationTimestamp="2026-01-23 23:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:39.763243264 +0000 UTC m=+28.496889742" watchObservedRunningTime="2026-01-23 23:55:39.763806532 +0000 UTC m=+28.497453010" Jan 23 23:55:40.303301 systemd-networkd[1916]: veth6623d0fa: Gained IPv6LL Jan 23 23:55:42.651359 ntpd[1985]: Listen normally on 9 cni0 192.168.0.1:123 Jan 23 23:55:42.652147 ntpd[1985]: 23 Jan 23:55:42 ntpd[1985]: Listen normally on 9 cni0 192.168.0.1:123 Jan 23 23:55:42.652147 ntpd[1985]: 23 Jan 23:55:42 ntpd[1985]: Listen normally on 10 cni0 [fe80::5cab:3aff:fe7a:b794%5]:123 Jan 23 23:55:42.652147 ntpd[1985]: 23 Jan 23:55:42 ntpd[1985]: Listen normally on 11 vethd00a06f8 [fe80::c855:a6ff:fe53:6f72%6]:123 Jan 23 23:55:42.652147 ntpd[1985]: 23 Jan 23:55:42 ntpd[1985]: Listen normally on 12 veth6623d0fa [fe80::d0c4:3eff:fedc:a47e%7]:123 Jan 23 23:55:42.651499 ntpd[1985]: Listen normally on 10 cni0 [fe80::5cab:3aff:fe7a:b794%5]:123 Jan 23 23:55:42.651582 ntpd[1985]: Listen normally on 11 vethd00a06f8 [fe80::c855:a6ff:fe53:6f72%6]:123 Jan 23 23:55:42.651667 ntpd[1985]: Listen normally on 12 veth6623d0fa [fe80::d0c4:3eff:fedc:a47e%7]:123 Jan 23 23:55:49.138644 systemd[1]: Started sshd@7-172.31.31.113:22-4.153.228.146:34772.service - OpenSSH per-connection server daemon (4.153.228.146:34772). 
Jan 23 23:55:49.680391 sshd[4422]: Accepted publickey for core from 4.153.228.146 port 34772 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:49.683216 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:49.691772 systemd-logind[1990]: New session 8 of user core. Jan 23 23:55:49.698281 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:55:50.198704 sshd[4422]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:50.207754 systemd[1]: sshd@7-172.31.31.113:22-4.153.228.146:34772.service: Deactivated successfully. Jan 23 23:55:50.213417 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:55:50.215797 systemd-logind[1990]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:55:50.217936 systemd-logind[1990]: Removed session 8. Jan 23 23:55:55.301562 systemd[1]: Started sshd@8-172.31.31.113:22-4.153.228.146:38582.service - OpenSSH per-connection server daemon (4.153.228.146:38582). Jan 23 23:55:55.845958 sshd[4462]: Accepted publickey for core from 4.153.228.146 port 38582 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:55.848831 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:55.856810 systemd-logind[1990]: New session 9 of user core. Jan 23 23:55:55.865290 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:55:56.344412 sshd[4462]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:56.351199 systemd[1]: sshd@8-172.31.31.113:22-4.153.228.146:38582.service: Deactivated successfully. Jan 23 23:55:56.355371 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:55:56.357441 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:55:56.359432 systemd-logind[1990]: Removed session 9. 
Jan 23 23:56:01.437046 systemd[1]: Started sshd@9-172.31.31.113:22-4.153.228.146:38594.service - OpenSSH per-connection server daemon (4.153.228.146:38594). Jan 23 23:56:01.950564 sshd[4510]: Accepted publickey for core from 4.153.228.146 port 38594 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:01.954929 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:01.971652 systemd-logind[1990]: New session 10 of user core. Jan 23 23:56:01.979819 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:56:02.426790 sshd[4510]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:02.434554 systemd[1]: sshd@9-172.31.31.113:22-4.153.228.146:38594.service: Deactivated successfully. Jan 23 23:56:02.438401 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:56:02.442086 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:56:02.444669 systemd-logind[1990]: Removed session 10. Jan 23 23:56:02.538552 systemd[1]: Started sshd@10-172.31.31.113:22-4.153.228.146:38608.service - OpenSSH per-connection server daemon (4.153.228.146:38608). Jan 23 23:56:03.080590 sshd[4524]: Accepted publickey for core from 4.153.228.146 port 38608 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:03.083349 sshd[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:03.093083 systemd-logind[1990]: New session 11 of user core. Jan 23 23:56:03.103335 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:03.654325 sshd[4524]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:03.664970 systemd[1]: sshd@10-172.31.31.113:22-4.153.228.146:38608.service: Deactivated successfully. Jan 23 23:56:03.670178 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:56:03.675364 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit. 
Jan 23 23:56:03.678788 systemd-logind[1990]: Removed session 11. Jan 23 23:56:03.741593 systemd[1]: Started sshd@11-172.31.31.113:22-4.153.228.146:38612.service - OpenSSH per-connection server daemon (4.153.228.146:38612). Jan 23 23:56:04.249188 sshd[4535]: Accepted publickey for core from 4.153.228.146 port 38612 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:04.252560 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:04.263089 systemd-logind[1990]: New session 12 of user core. Jan 23 23:56:04.271292 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:56:04.717373 sshd[4535]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:04.724273 systemd[1]: sshd@11-172.31.31.113:22-4.153.228.146:38612.service: Deactivated successfully. Jan 23 23:56:04.728592 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:56:04.730535 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:56:04.732468 systemd-logind[1990]: Removed session 12. Jan 23 23:56:09.814568 systemd[1]: Started sshd@12-172.31.31.113:22-4.153.228.146:43934.service - OpenSSH per-connection server daemon (4.153.228.146:43934). Jan 23 23:56:10.316312 sshd[4569]: Accepted publickey for core from 4.153.228.146 port 43934 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:10.318956 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:10.327952 systemd-logind[1990]: New session 13 of user core. Jan 23 23:56:10.338285 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:56:10.792656 sshd[4569]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:10.799444 systemd[1]: sshd@12-172.31.31.113:22-4.153.228.146:43934.service: Deactivated successfully. Jan 23 23:56:10.802569 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 23 23:56:10.805306 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:56:10.808357 systemd-logind[1990]: Removed session 13. Jan 23 23:56:10.890646 systemd[1]: Started sshd@13-172.31.31.113:22-4.153.228.146:43944.service - OpenSSH per-connection server daemon (4.153.228.146:43944). Jan 23 23:56:11.385444 sshd[4602]: Accepted publickey for core from 4.153.228.146 port 43944 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:11.388087 sshd[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:11.397313 systemd-logind[1990]: New session 14 of user core. Jan 23 23:56:11.409273 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:56:11.946397 sshd[4602]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:11.953864 systemd[1]: sshd@13-172.31.31.113:22-4.153.228.146:43944.service: Deactivated successfully. Jan 23 23:56:11.958628 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:56:11.960439 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:56:11.962275 systemd-logind[1990]: Removed session 14. Jan 23 23:56:12.041554 systemd[1]: Started sshd@14-172.31.31.113:22-4.153.228.146:43950.service - OpenSSH per-connection server daemon (4.153.228.146:43950). Jan 23 23:56:12.535135 sshd[4615]: Accepted publickey for core from 4.153.228.146 port 43950 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:12.537963 sshd[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:12.546495 systemd-logind[1990]: New session 15 of user core. Jan 23 23:56:12.556283 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:56:13.718386 sshd[4615]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:13.723362 systemd[1]: sshd@14-172.31.31.113:22-4.153.228.146:43950.service: Deactivated successfully. 
Jan 23 23:56:13.727659 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:56:13.731635 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:56:13.734156 systemd-logind[1990]: Removed session 15. Jan 23 23:56:13.826601 systemd[1]: Started sshd@15-172.31.31.113:22-4.153.228.146:43954.service - OpenSSH per-connection server daemon (4.153.228.146:43954). Jan 23 23:56:14.360313 sshd[4633]: Accepted publickey for core from 4.153.228.146 port 43954 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:14.363037 sshd[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:14.371260 systemd-logind[1990]: New session 16 of user core. Jan 23 23:56:14.381237 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:56:15.095544 sshd[4633]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:15.105539 systemd-logind[1990]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:56:15.106463 systemd[1]: sshd@15-172.31.31.113:22-4.153.228.146:43954.service: Deactivated successfully. Jan 23 23:56:15.110610 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:56:15.112765 systemd-logind[1990]: Removed session 16. Jan 23 23:56:15.182504 systemd[1]: Started sshd@16-172.31.31.113:22-4.153.228.146:57348.service - OpenSSH per-connection server daemon (4.153.228.146:57348). Jan 23 23:56:15.686792 sshd[4643]: Accepted publickey for core from 4.153.228.146 port 57348 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:15.689509 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:15.697643 systemd-logind[1990]: New session 17 of user core. Jan 23 23:56:15.706268 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 23:56:16.157174 sshd[4643]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:16.164161 systemd[1]: sshd@16-172.31.31.113:22-4.153.228.146:57348.service: Deactivated successfully. Jan 23 23:56:16.169285 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:56:16.170671 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:56:16.173836 systemd-logind[1990]: Removed session 17. Jan 23 23:56:21.266554 systemd[1]: Started sshd@17-172.31.31.113:22-4.153.228.146:57358.service - OpenSSH per-connection server daemon (4.153.228.146:57358). Jan 23 23:56:21.807549 sshd[4701]: Accepted publickey for core from 4.153.228.146 port 57358 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:21.810281 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:21.818243 systemd-logind[1990]: New session 18 of user core. Jan 23 23:56:21.827298 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:56:22.299715 sshd[4701]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:22.306716 systemd[1]: sshd@17-172.31.31.113:22-4.153.228.146:57358.service: Deactivated successfully. Jan 23 23:56:22.307578 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:56:22.312184 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:56:22.317203 systemd-logind[1990]: Removed session 18. Jan 23 23:56:27.386474 systemd[1]: Started sshd@18-172.31.31.113:22-4.153.228.146:34434.service - OpenSSH per-connection server daemon (4.153.228.146:34434). Jan 23 23:56:27.891559 sshd[4734]: Accepted publickey for core from 4.153.228.146 port 34434 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:27.894381 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:27.903113 systemd-logind[1990]: New session 19 of user core. 
Jan 23 23:56:27.912272 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:56:28.357249 sshd[4734]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:28.365215 systemd[1]: sshd@18-172.31.31.113:22-4.153.228.146:34434.service: Deactivated successfully. Jan 23 23:56:28.368899 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:56:28.371084 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:56:28.372871 systemd-logind[1990]: Removed session 19. Jan 23 23:56:42.053755 systemd[1]: cri-containerd-e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0.scope: Deactivated successfully. Jan 23 23:56:42.054353 systemd[1]: cri-containerd-e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0.scope: Consumed 4.184s CPU time, 17.0M memory peak, 0B memory swap peak. Jan 23 23:56:42.117115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:42.122222 containerd[2008]: time="2026-01-23T23:56:42.121782002Z" level=info msg="shim disconnected" id=e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0 namespace=k8s.io Jan 23 23:56:42.123183 containerd[2008]: time="2026-01-23T23:56:42.122181194Z" level=warning msg="cleaning up after shim disconnected" id=e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0 namespace=k8s.io Jan 23 23:56:42.123183 containerd[2008]: time="2026-01-23T23:56:42.122853206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:42.146490 containerd[2008]: time="2026-01-23T23:56:42.146387750Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:56:42.909327 kubelet[3475]: I0123 23:56:42.908688 3475 scope.go:117] "RemoveContainer" containerID="e49c003884580ca413bd41a5c8a3f1dded0755ee4b7d1dd9a3fc854f099615a0" Jan 23 23:56:42.913026 containerd[2008]: time="2026-01-23T23:56:42.912943206Z" level=info msg="CreateContainer within sandbox \"fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:56:42.947342 containerd[2008]: time="2026-01-23T23:56:42.947141934Z" level=info msg="CreateContainer within sandbox \"fc8ae080bb82719c5be7c576f7d9cb4f025e7dc68c82cf5f2fdc02f8831f4660\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ef92cd4948f4afa19c919bbd3566d8594462bc746b963c3ca40b32144e4f62b8\"" Jan 23 23:56:42.948247 containerd[2008]: time="2026-01-23T23:56:42.948143214Z" level=info msg="StartContainer for \"ef92cd4948f4afa19c919bbd3566d8594462bc746b963c3ca40b32144e4f62b8\"" Jan 23 23:56:43.013383 systemd[1]: Started cri-containerd-ef92cd4948f4afa19c919bbd3566d8594462bc746b963c3ca40b32144e4f62b8.scope - libcontainer 
container ef92cd4948f4afa19c919bbd3566d8594462bc746b963c3ca40b32144e4f62b8. Jan 23 23:56:43.091261 containerd[2008]: time="2026-01-23T23:56:43.091087119Z" level=info msg="StartContainer for \"ef92cd4948f4afa19c919bbd3566d8594462bc746b963c3ca40b32144e4f62b8\" returns successfully" Jan 23 23:56:43.349550 kubelet[3475]: E0123 23:56:43.349436 3475 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": context deadline exceeded" Jan 23 23:56:47.354882 systemd[1]: cri-containerd-aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004.scope: Deactivated successfully. Jan 23 23:56:47.357663 systemd[1]: cri-containerd-aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004.scope: Consumed 3.312s CPU time, 15.6M memory peak, 0B memory swap peak. Jan 23 23:56:47.398764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:47.411039 containerd[2008]: time="2026-01-23T23:56:47.410898548Z" level=info msg="shim disconnected" id=aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004 namespace=k8s.io Jan 23 23:56:47.411039 containerd[2008]: time="2026-01-23T23:56:47.411037244Z" level=warning msg="cleaning up after shim disconnected" id=aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004 namespace=k8s.io Jan 23 23:56:47.411768 containerd[2008]: time="2026-01-23T23:56:47.411061520Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:47.436375 containerd[2008]: time="2026-01-23T23:56:47.436269069Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:56:47.930209 kubelet[3475]: I0123 23:56:47.930125 3475 scope.go:117] "RemoveContainer" containerID="aa0969693baef5429a3d69c5b69ba3a559ee32df487c13b0554f599a9483d004" Jan 23 23:56:47.934289 containerd[2008]: time="2026-01-23T23:56:47.933771227Z" level=info msg="CreateContainer within sandbox \"42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 23:56:47.965643 containerd[2008]: time="2026-01-23T23:56:47.965345903Z" level=info msg="CreateContainer within sandbox \"42135b1e3f903b5fbcdabdff4e5476f93ddcae7077d7e4640216f417c2859346\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"19e8a7284b1c54f48b50cb91926fe933db86a9933683e0f33b916f38a0330bd0\"" Jan 23 23:56:47.966650 containerd[2008]: time="2026-01-23T23:56:47.966574643Z" level=info msg="StartContainer for \"19e8a7284b1c54f48b50cb91926fe933db86a9933683e0f33b916f38a0330bd0\"" Jan 23 23:56:48.035349 systemd[1]: Started cri-containerd-19e8a7284b1c54f48b50cb91926fe933db86a9933683e0f33b916f38a0330bd0.scope - libcontainer container 
19e8a7284b1c54f48b50cb91926fe933db86a9933683e0f33b916f38a0330bd0. Jan 23 23:56:48.126094 containerd[2008]: time="2026-01-23T23:56:48.125694320Z" level=info msg="StartContainer for \"19e8a7284b1c54f48b50cb91926fe933db86a9933683e0f33b916f38a0330bd0\" returns successfully" Jan 23 23:56:53.350907 kubelet[3475]: E0123 23:56:53.350346 3475 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": context deadline exceeded" Jan 23 23:57:03.351415 kubelet[3475]: E0123 23:57:03.350708 3475 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-113?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"