Jan 17 00:20:48.986515 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:20:48.986560 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:48.986582 kernel: BIOS-provided physical RAM map:
Jan 17 00:20:48.986595 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:20:48.986606 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:20:48.986619 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:20:48.986633 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:20:48.986646 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:20:48.986658 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:20:48.986675 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:20:48.986686 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:20:48.986699 kernel: NX (Execute Disable) protection: active
Jan 17 00:20:48.986711 kernel: APIC: Static calls initialized
Jan 17 00:20:48.986725 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:20:48.986740 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:20:48.986756 kernel: SMBIOS 2.7 present.
Jan 17 00:20:48.986768 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:20:48.986780 kernel: Hypervisor detected: KVM
Jan 17 00:20:48.986794 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:20:48.986807 kernel: kvm-clock: using sched offset of 4502093312 cycles
Jan 17 00:20:48.986819 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:20:48.986834 kernel: tsc: Detected 2499.996 MHz processor
Jan 17 00:20:48.986846 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:20:48.986859 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:20:48.986872 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:20:48.986889 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:20:48.986903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:20:48.986916 kernel: Using GB pages for direct mapping
Jan 17 00:20:48.986929 kernel: Secure boot disabled
Jan 17 00:20:48.986943 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:20:48.986956 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:20:48.986969 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:20:48.986983 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:20:48.986999 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:20:48.987018 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:20:48.987033 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:20:48.987048 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:20:48.987063 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:20:48.987078 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:20:48.987094 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:20:48.987116 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:20:48.987135 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:20:48.987152 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:20:48.987168 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:20:48.987184 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:20:48.987199 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:20:48.987216 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:20:48.987250 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:20:48.987263 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:20:48.987278 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:20:48.987292 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:20:48.987307 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:20:48.987322 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:20:48.987337 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:20:48.987352 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:20:48.987367 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:20:48.987382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:20:48.987400 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:20:48.987415 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:20:48.987430 kernel: Zone ranges:
Jan 17 00:20:48.987445 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:20:48.987460 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:20:48.987475 kernel: Normal empty
Jan 17 00:20:48.987490 kernel: Movable zone start for each node
Jan 17 00:20:48.987504 kernel: Early memory node ranges
Jan 17 00:20:48.987519 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:20:48.987537 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:20:48.987552 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:20:48.987567 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:20:48.987582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:20:48.987596 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:20:48.987612 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:20:48.987627 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:20:48.987642 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:20:48.987657 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:20:48.987672 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:20:48.987689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:20:48.987704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:20:48.987719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:20:48.987734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:20:48.987749 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:20:48.987764 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:20:48.987779 kernel: TSC deadline timer available
Jan 17 00:20:48.987794 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:20:48.987809 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:20:48.987827 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:20:48.987842 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:20:48.987857 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:20:48.987872 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:20:48.987887 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:20:48.987902 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:20:48.987917 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:20:48.987931 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:20:48.987946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:20:48.987967 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:48.987982 kernel: random: crng init done
Jan 17 00:20:48.987997 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:20:48.988012 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:20:48.988027 kernel: Fallback order for Node 0: 0
Jan 17 00:20:48.988042 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:20:48.988057 kernel: Policy zone: DMA32
Jan 17 00:20:48.988072 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:20:48.988090 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 17 00:20:48.988106 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:20:48.988121 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:20:48.988136 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:20:48.988151 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:20:48.988166 kernel: Dynamic Preempt: voluntary
Jan 17 00:20:48.988180 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:20:48.988197 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:20:48.988212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:20:48.988230 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:20:48.990361 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:20:48.990379 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:20:48.990394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:20:48.990407 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:20:48.990424 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:20:48.990440 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:20:48.990477 kernel: Console: colour dummy device 80x25
Jan 17 00:20:48.990495 kernel: printk: console [tty0] enabled
Jan 17 00:20:48.990512 kernel: printk: console [ttyS0] enabled
Jan 17 00:20:48.990528 kernel: ACPI: Core revision 20230628
Jan 17 00:20:48.990543 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:20:48.990562 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:20:48.990576 kernel: x2apic enabled
Jan 17 00:20:48.990590 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:20:48.990605 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:20:48.990623 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 17 00:20:48.990638 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:20:48.990653 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:20:48.990667 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:20:48.990681 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:20:48.990695 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:20:48.990710 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:20:48.990724 kernel: RETBleed: Vulnerable
Jan 17 00:20:48.990738 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:20:48.990753 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:20:48.990767 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:20:48.990786 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:20:48.990800 kernel: active return thunk: its_return_thunk
Jan 17 00:20:48.990814 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:20:48.990829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:20:48.990844 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:20:48.990859 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:20:48.990874 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:20:48.990888 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:20:48.990903 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:20:48.990917 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:20:48.990935 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:20:48.990950 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:20:48.990965 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:20:48.990980 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:20:48.990996 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:20:48.991010 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:20:48.991025 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:20:48.991040 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:20:48.991055 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:20:48.991070 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:20:48.991085 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:20:48.991100 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:20:48.991119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:20:48.991134 kernel: landlock: Up and running.
Jan 17 00:20:48.991152 kernel: SELinux: Initializing.
Jan 17 00:20:48.991167 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:20:48.991182 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:20:48.991198 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:20:48.991213 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:48.991229 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:48.991267 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:48.991283 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:20:48.991302 kernel: signal: max sigframe size: 3632
Jan 17 00:20:48.991317 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:20:48.991333 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:20:48.991349 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:20:48.991364 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:20:48.991380 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:20:48.991395 kernel: .... node #0, CPUs: #1
Jan 17 00:20:48.991412 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:20:48.991429 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:20:48.991449 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:20:48.991464 kernel: smpboot: Max logical packages: 1
Jan 17 00:20:48.991479 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 17 00:20:48.991495 kernel: devtmpfs: initialized
Jan 17 00:20:48.991510 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:20:48.991526 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:20:48.991543 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:20:48.991559 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:20:48.991575 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:20:48.991593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:20:48.991607 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:20:48.991623 kernel: audit: type=2000 audit(1768609249.439:1): state=initialized audit_enabled=0 res=1
Jan 17 00:20:48.991637 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:20:48.991652 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:20:48.991668 kernel: cpuidle: using governor menu
Jan 17 00:20:48.991684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:20:48.991698 kernel: dca service started, version 1.12.1
Jan 17 00:20:48.991713 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:20:48.991733 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:20:48.991746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:20:48.991761 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:20:48.991777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:20:48.991792 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:20:48.991807 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:20:48.991823 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:20:48.991838 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:20:48.991853 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:20:48.991872 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:20:48.991886 kernel: ACPI: Interpreter enabled
Jan 17 00:20:48.991901 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:20:48.991915 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:20:48.991931 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:20:48.991945 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:20:48.991959 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:20:48.991973 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:20:48.992217 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:20:48.993115 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:20:48.993282 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:20:48.993304 kernel: acpiphp: Slot [3] registered
Jan 17 00:20:48.993321 kernel: acpiphp: Slot [4] registered
Jan 17 00:20:48.993336 kernel: acpiphp: Slot [5] registered
Jan 17 00:20:48.993352 kernel: acpiphp: Slot [6] registered
Jan 17 00:20:48.993367 kernel: acpiphp: Slot [7] registered
Jan 17 00:20:48.993389 kernel: acpiphp: Slot [8] registered
Jan 17 00:20:48.993404 kernel: acpiphp: Slot [9] registered
Jan 17 00:20:48.993420 kernel: acpiphp: Slot [10] registered
Jan 17 00:20:48.993436 kernel: acpiphp: Slot [11] registered
Jan 17 00:20:48.993451 kernel: acpiphp: Slot [12] registered
Jan 17 00:20:48.993467 kernel: acpiphp: Slot [13] registered
Jan 17 00:20:48.993482 kernel: acpiphp: Slot [14] registered
Jan 17 00:20:48.993498 kernel: acpiphp: Slot [15] registered
Jan 17 00:20:48.993514 kernel: acpiphp: Slot [16] registered
Jan 17 00:20:48.993533 kernel: acpiphp: Slot [17] registered
Jan 17 00:20:48.993548 kernel: acpiphp: Slot [18] registered
Jan 17 00:20:48.993563 kernel: acpiphp: Slot [19] registered
Jan 17 00:20:48.993576 kernel: acpiphp: Slot [20] registered
Jan 17 00:20:48.993592 kernel: acpiphp: Slot [21] registered
Jan 17 00:20:48.993607 kernel: acpiphp: Slot [22] registered
Jan 17 00:20:48.993623 kernel: acpiphp: Slot [23] registered
Jan 17 00:20:48.993638 kernel: acpiphp: Slot [24] registered
Jan 17 00:20:48.993654 kernel: acpiphp: Slot [25] registered
Jan 17 00:20:48.993669 kernel: acpiphp: Slot [26] registered
Jan 17 00:20:48.993688 kernel: acpiphp: Slot [27] registered
Jan 17 00:20:48.993703 kernel: acpiphp: Slot [28] registered
Jan 17 00:20:48.993719 kernel: acpiphp: Slot [29] registered
Jan 17 00:20:48.993734 kernel: acpiphp: Slot [30] registered
Jan 17 00:20:48.993749 kernel: acpiphp: Slot [31] registered
Jan 17 00:20:48.993765 kernel: PCI host bridge to bus 0000:00
Jan 17 00:20:48.993905 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:20:48.994026 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:20:48.994162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:20:48.995344 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:20:48.995494 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:20:48.995630 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:20:48.995804 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:20:48.995971 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:20:48.996129 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:20:48.996316 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:20:48.996468 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:20:48.996632 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:20:48.996776 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:20:48.996928 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:20:48.997066 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:20:48.997206 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:20:48.997383 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:20:48.997517 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:20:48.997650 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:20:48.997780 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:20:48.998397 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:20:48.998578 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:20:48.998741 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:20:48.998905 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:20:48.999059 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:20:48.999081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:20:48.999099 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:20:48.999116 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:20:48.999132 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:20:48.999149 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:20:48.999171 kernel: iommu: Default domain type: Translated
Jan 17 00:20:48.999189 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:20:48.999207 kernel: efivars: Registered efivars operations
Jan 17 00:20:48.999224 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:20:48.999255 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:20:48.999268 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:20:48.999283 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:20:48.999441 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:20:48.999590 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:20:48.999730 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:20:48.999751 kernel: vgaarb: loaded
Jan 17 00:20:48.999768 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:20:48.999785 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:20:48.999802 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:20:48.999818 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:20:48.999835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:20:48.999851 kernel: pnp: PnP ACPI init
Jan 17 00:20:48.999872 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:20:48.999889 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:20:48.999905 kernel: NET: Registered PF_INET protocol family
Jan 17 00:20:48.999921 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:20:48.999938 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:20:48.999955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:20:48.999972 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:20:48.999989 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:20:49.000006 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:20:49.000026 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:20:49.000043 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:20:49.000060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:20:49.000077 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:20:49.000209 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:20:49.001484 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:20:49.001630 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:20:49.001768 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:20:49.001897 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:20:49.002050 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:20:49.002072 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:20:49.002087 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:20:49.002103 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:20:49.002119 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:20:49.002134 kernel: Initialise system trusted keyrings
Jan 17 00:20:49.002150 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:20:49.002165 kernel: Key type asymmetric registered
Jan 17 00:20:49.002185 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:20:49.002201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:20:49.002217 kernel: io scheduler mq-deadline registered
Jan 17 00:20:49.003321 kernel: io scheduler kyber registered
Jan 17 00:20:49.003342 kernel: io scheduler bfq registered
Jan 17 00:20:49.003360 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:20:49.003376 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:20:49.003394 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:20:49.003411 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:20:49.003433 kernel: i8042: Warning: Keylock active
Jan 17 00:20:49.003449 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:20:49.003466 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:20:49.003637 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:20:49.003772 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:20:49.003900 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:20:48 UTC (1768609248)
Jan 17 00:20:49.004028 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:20:49.004049 kernel: intel_pstate: CPU model not supported
Jan 17 00:20:49.004070 kernel: efifb: probing for efifb
Jan 17 00:20:49.004088 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:20:49.004104 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:20:49.004121 kernel: efifb: scrolling: redraw
Jan 17 00:20:49.004137 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:20:49.004154 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:20:49.004170 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:20:49.004187 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:20:49.004203 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:20:49.004224 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:20:49.004277 kernel: Segment Routing with IPv6
Jan 17 00:20:49.004294 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:20:49.004313 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:20:49.004330 kernel: Key type dns_resolver registered
Jan 17 00:20:49.004348 kernel: IPI shorthand broadcast: enabled
Jan 17 00:20:49.004397 kernel: sched_clock: Marking stable (555004382, 146682277)->(783002934, -81316275)
Jan 17 00:20:49.004418 kernel: registered taskstats version 1
Jan 17 00:20:49.004436 kernel: Loading compiled-in X.509 certificates
Jan 17 00:20:49.004458 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:20:49.004476 kernel: Key type .fscrypt registered
Jan 17 00:20:49.004493 kernel: Key type fscrypt-provisioning registered
Jan 17 00:20:49.004515 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:20:49.004534 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:20:49.004552 kernel: ima: No architecture policies found
Jan 17 00:20:49.004570 kernel: clk: Disabling unused clocks
Jan 17 00:20:49.004589 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:20:49.004607 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:20:49.004628 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:20:49.004644 kernel: Run /init as init process
Jan 17 00:20:49.004658 kernel: with arguments:
Jan 17 00:20:49.004674 kernel: /init
Jan 17 00:20:49.004691 kernel: with environment:
Jan 17 00:20:49.004706 kernel: HOME=/
Jan 17 00:20:49.004722 kernel: TERM=linux
Jan 17 00:20:49.004742 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:20:49.004764 systemd[1]: Detected virtualization amazon.
Jan 17 00:20:49.004780 systemd[1]: Detected architecture x86-64.
Jan 17 00:20:49.004797 systemd[1]: Running in initrd.
Jan 17 00:20:49.004814 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:20:49.004830 systemd[1]: Hostname set to .
Jan 17 00:20:49.004971 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:20:49.004991 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:20:49.005011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:20:49.005033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:20:49.005053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:20:49.005070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:20:49.005086 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:20:49.005105 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:20:49.005126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:20:49.005143 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:20:49.005159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:20:49.005175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:20:49.005191 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:20:49.005207 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:20:49.005223 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:20:49.005256 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:20:49.005270 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:20:49.005284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:20:49.005299 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:20:49.005315 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:20:49.006293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:20:49.006316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:20:49.006334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:20:49.006352 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:20:49.006374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:20:49.006392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:20:49.006409 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:20:49.006426 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:20:49.006444 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:20:49.006461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:20:49.006480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:49.006497 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:20:49.006554 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:20:49.006594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:20:49.006611 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:20:49.006635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:20:49.006654 systemd-journald[179]: Journal started
Jan 17 00:20:49.006689 systemd-journald[179]: Runtime Journal (/run/log/journal/ec235b4cdfa1afac6afbd05c786384a0) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:20:48.992760 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:20:49.016210 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:20:49.028290 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:20:49.029571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:49.040521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:49.043443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:20:49.055253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:20:49.055302 kernel: Bridge firewalling registered
Jan 17 00:20:49.051072 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:20:49.057140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:20:49.058081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:20:49.065690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:20:49.076464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:20:49.078825 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:49.082167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:20:49.090582 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:20:49.093385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:20:49.095763 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:20:49.104444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:20:49.116729 dracut-cmdline[210]: dracut-dracut-053
Jan 17 00:20:49.120551 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:49.159177 systemd-resolved[214]: Positive Trust Anchors:
Jan 17 00:20:49.159196 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:20:49.159270 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:20:49.166105 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 17 00:20:49.169767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:20:49.172114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:20:49.217279 kernel: SCSI subsystem initialized
Jan 17 00:20:49.229264 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:20:49.241285 kernel: iscsi: registered transport (tcp)
Jan 17 00:20:49.264366 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:20:49.264451 kernel: QLogic iSCSI HBA Driver
Jan 17 00:20:49.312955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:20:49.320454 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:20:49.347562 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:20:49.347643 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:20:49.347666 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:20:49.392266 kernel: raid6: avx512x4 gen() 17978 MB/s
Jan 17 00:20:49.410262 kernel: raid6: avx512x2 gen() 17954 MB/s
Jan 17 00:20:49.428262 kernel: raid6: avx512x1 gen() 17951 MB/s
Jan 17 00:20:49.446266 kernel: raid6: avx2x4 gen() 16252 MB/s
Jan 17 00:20:49.464277 kernel: raid6: avx2x2 gen() 15585 MB/s
Jan 17 00:20:49.482337 kernel: raid6: avx2x1 gen() 11405 MB/s
Jan 17 00:20:49.482416 kernel: raid6: using algorithm avx512x4 gen() 17978 MB/s
Jan 17 00:20:49.502468 kernel: raid6: .... xor() 6524 MB/s, rmw enabled
Jan 17 00:20:49.502548 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:20:49.526272 kernel: xor: automatically using best checksumming function avx
Jan 17 00:20:49.727291 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:20:49.742515 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:20:49.747484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:20:49.786911 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 00:20:49.799712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:20:49.816455 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:20:49.858683 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 17 00:20:49.899830 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:20:49.905599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:20:49.963930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:20:49.974103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:20:49.999646 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:20:50.002105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:20:50.004090 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:20:50.004696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:20:50.013538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:20:50.037694 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:20:50.069346 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:20:50.069624 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:20:50.069823 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:20:50.085258 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:20:50.099916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:20:50.100967 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:50.103573 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:50.106039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:20:50.106260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:50.110366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:50.119260 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:20:50.117999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:50.123578 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:20:50.131262 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ce:11:dc:3f:93
Jan 17 00:20:50.134047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:50.144367 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:20:50.144409 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:20:50.142174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:50.154182 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:20:50.154678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:20:50.154842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:50.156047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:20:50.183357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:20:50.183392 kernel: GPT:9289727 != 33554431
Jan 17 00:20:50.183414 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:20:50.183435 kernel: GPT:9289727 != 33554431
Jan 17 00:20:50.183462 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:20:50.183483 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:50.156213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:50.158070 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:50.167789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:50.169124 (udev-worker)[460]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:20:50.211471 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:50.224534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:50.257828 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (452)
Jan 17 00:20:50.262137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:50.276287 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (461)
Jan 17 00:20:50.298774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:20:50.344470 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:20:50.356675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:20:50.362549 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:20:50.363094 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:20:50.369462 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:20:50.377843 disk-uuid[637]: Primary Header is updated.
Jan 17 00:20:50.377843 disk-uuid[637]: Secondary Entries is updated.
Jan 17 00:20:50.377843 disk-uuid[637]: Secondary Header is updated.
Jan 17 00:20:50.382315 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:50.388979 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:50.393344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:51.402991 disk-uuid[638]: The operation has completed successfully.
Jan 17 00:20:51.403682 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:51.530806 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:20:51.530924 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:20:51.537469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:20:51.542460 sh[981]: Success
Jan 17 00:20:51.571665 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:20:51.682646 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:20:51.692416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:20:51.695713 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:20:51.731790 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:20:51.731874 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:51.735656 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:20:51.735739 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:20:51.737373 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:20:51.771301 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:20:51.787207 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:20:51.790769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:20:51.802784 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:20:51.811041 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:20:51.843482 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:51.843657 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:51.843686 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:20:51.860418 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:20:51.876285 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:51.876688 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:20:51.884749 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:20:51.892627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:20:51.941034 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:20:51.947491 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:20:51.972200 systemd-networkd[1173]: lo: Link UP
Jan 17 00:20:51.973381 systemd-networkd[1173]: lo: Gained carrier
Jan 17 00:20:51.975153 systemd-networkd[1173]: Enumeration completed
Jan 17 00:20:51.975312 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:20:51.976057 systemd-networkd[1173]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:20:51.976062 systemd-networkd[1173]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:20:51.976393 systemd[1]: Reached target network.target - Network.
Jan 17 00:20:51.979143 systemd-networkd[1173]: eth0: Link UP
Jan 17 00:20:51.979149 systemd-networkd[1173]: eth0: Gained carrier
Jan 17 00:20:51.979163 systemd-networkd[1173]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:20:51.996383 systemd-networkd[1173]: eth0: DHCPv4 address 172.31.16.10/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:20:52.208152 ignition[1110]: Ignition 2.19.0
Jan 17 00:20:52.208167 ignition[1110]: Stage: fetch-offline
Jan 17 00:20:52.208454 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:52.208467 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:52.209305 ignition[1110]: Ignition finished successfully
Jan 17 00:20:52.211305 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:20:52.216502 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:20:52.232800 ignition[1183]: Ignition 2.19.0
Jan 17 00:20:52.233292 ignition[1183]: Stage: fetch
Jan 17 00:20:52.233791 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:52.233833 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:52.233956 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:52.287003 ignition[1183]: PUT result: OK
Jan 17 00:20:52.290565 ignition[1183]: parsed url from cmdline: ""
Jan 17 00:20:52.290574 ignition[1183]: no config URL provided
Jan 17 00:20:52.290583 ignition[1183]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:20:52.290595 ignition[1183]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:20:52.290623 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:52.292117 ignition[1183]: PUT result: OK
Jan 17 00:20:52.292181 ignition[1183]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:20:52.295832 ignition[1183]: GET result: OK
Jan 17 00:20:52.296027 ignition[1183]: parsing config with SHA512: 12f9d166ae2b972f3e347bda4975db6d06692b3e0a849d37bf37c9c5bf394ad836dd17e4f60ee52331ac160e494273756a03ea879640dd0024ecba8c2d86bd3f
Jan 17 00:20:52.301091 unknown[1183]: fetched base config from "system"
Jan 17 00:20:52.301112 unknown[1183]: fetched base config from "system"
Jan 17 00:20:52.301745 ignition[1183]: fetch: fetch complete
Jan 17 00:20:52.301119 unknown[1183]: fetched user config from "aws"
Jan 17 00:20:52.301753 ignition[1183]: fetch: fetch passed
Jan 17 00:20:52.301814 ignition[1183]: Ignition finished successfully
Jan 17 00:20:52.304056 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:20:52.312520 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:20:52.329896 ignition[1190]: Ignition 2.19.0
Jan 17 00:20:52.329911 ignition[1190]: Stage: kargs
Jan 17 00:20:52.330415 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:52.330429 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:52.330550 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:52.331658 ignition[1190]: PUT result: OK
Jan 17 00:20:52.334785 ignition[1190]: kargs: kargs passed
Jan 17 00:20:52.334868 ignition[1190]: Ignition finished successfully
Jan 17 00:20:52.337196 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:20:52.344760 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:20:52.360296 ignition[1196]: Ignition 2.19.0
Jan 17 00:20:52.360312 ignition[1196]: Stage: disks
Jan 17 00:20:52.360798 ignition[1196]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:52.360813 ignition[1196]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:52.361030 ignition[1196]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:52.361866 ignition[1196]: PUT result: OK
Jan 17 00:20:52.364451 ignition[1196]: disks: disks passed
Jan 17 00:20:52.364540 ignition[1196]: Ignition finished successfully
Jan 17 00:20:52.366034 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:20:52.367030 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:20:52.367488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:20:52.368028 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:20:52.368606 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:20:52.369392 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:20:52.381537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:20:52.420364 systemd-fsck[1204]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:20:52.424424 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:20:52.430426 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:20:52.532338 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:20:52.533044 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:20:52.534368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:20:52.553429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:20:52.556698 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:20:52.559028 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:20:52.560453 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:20:52.560492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:20:52.572378 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:20:52.580262 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1223)
Jan 17 00:20:52.582139 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:52.582186 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:52.582206 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:20:52.580484 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:20:52.599280 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:20:52.602056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:20:52.880013 initrd-setup-root[1247]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:20:52.887020 initrd-setup-root[1254]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:20:52.893326 initrd-setup-root[1261]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:20:52.898840 initrd-setup-root[1268]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:20:53.097110 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:20:53.104359 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:20:53.109533 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:20:53.114776 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:20:53.118526 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:20:53.160033 ignition[1336]: INFO : Ignition 2.19.0 Jan 17 00:20:53.160033 ignition[1336]: INFO : Stage: mount Jan 17 00:20:53.162574 ignition[1336]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:20:53.162574 ignition[1336]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:20:53.162574 ignition[1336]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:20:53.162574 ignition[1336]: INFO : PUT result: OK Jan 17 00:20:53.166369 ignition[1336]: INFO : mount: mount passed Jan 17 00:20:53.166369 ignition[1336]: INFO : Ignition finished successfully Jan 17 00:20:53.167133 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:20:53.168838 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:20:53.174386 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:20:53.188482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:20:53.206293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1347) Jan 17 00:20:53.210526 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:20:53.210589 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:20:53.210604 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 00:20:53.219276 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 00:20:53.219869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:20:53.247258 ignition[1363]: INFO : Ignition 2.19.0 Jan 17 00:20:53.247258 ignition[1363]: INFO : Stage: files Jan 17 00:20:53.248767 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:20:53.248767 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:20:53.248767 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:20:53.250314 ignition[1363]: INFO : PUT result: OK Jan 17 00:20:53.252262 ignition[1363]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:20:53.253341 ignition[1363]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:20:53.253341 ignition[1363]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:20:53.277623 ignition[1363]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:20:53.278767 ignition[1363]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:20:53.278767 ignition[1363]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:20:53.278194 unknown[1363]: wrote ssh authorized keys file for user: core Jan 17 00:20:53.281897 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:20:53.281897 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:20:53.378679 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:20:53.553957 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 
00:20:53.553957 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:20:53.556522 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:20:53.610444 systemd-networkd[1173]: eth0: Gained IPv6LL Jan 17 00:20:54.017228 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:20:55.060141 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:20:55.060141 ignition[1363]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:20:55.064206 
ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:20:55.064206 ignition[1363]: INFO : files: files passed Jan 17 00:20:55.064206 ignition[1363]: INFO : Ignition finished successfully Jan 17 00:20:55.067318 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:20:55.076576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:20:55.080447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:20:55.084955 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:20:55.085107 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:20:55.100471 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:20:55.100471 initrd-setup-root-after-ignition[1392]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:20:55.104367 initrd-setup-root-after-ignition[1396]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:20:55.104558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:20:55.107222 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:20:55.112453 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:20:55.138972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:20:55.139115 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:20:55.140411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:20:55.141660 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:20:55.142571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:20:55.147500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:20:55.162398 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:20:55.167454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:20:55.181128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:20:55.181916 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:20:55.182949 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:20:55.183868 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:20:55.184053 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:20:55.185422 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:20:55.186345 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:20:55.187176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:20:55.188002 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:20:55.188882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:20:55.189727 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
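The files stage above fetched the helm tarball and the kubernetes sysext image over HTTPS, logging each request as "attempt #N". A small illustration of that retry-with-backoff pattern; the attempt count and backoff values are made-up parameters for the sketch, not Ignition's actual policy:

    import time
    import urllib.error
    import urllib.request

    def fetch_with_retries(url: str, attempts: int = 5, backoff: float = 1.0) -> bytes:
        """GET a URL, retrying with a growing delay, logging each attempt."""
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                return urllib.request.urlopen(url, timeout=30).read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)

    # e.g. fetch_with_retries("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz")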
Jan 17 00:20:55.190523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:20:55.191337 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:20:55.192527 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:20:55.193418 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:20:55.194145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:20:55.194352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:20:55.195450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:20:55.196268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:20:55.197090 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:20:55.197869 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:20:55.198370 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:20:55.198564 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:20:55.200076 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:20:55.200279 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:20:55.201135 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:20:55.201318 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:20:55.209708 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:20:55.210435 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:20:55.210745 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:20:55.214917 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:20:55.218037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:20:55.219511 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:20:55.222020 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:20:55.225373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:20:55.230508 ignition[1416]: INFO : Ignition 2.19.0 Jan 17 00:20:55.234055 ignition[1416]: INFO : Stage: umount Jan 17 00:20:55.234055 ignition[1416]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:20:55.234055 ignition[1416]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:20:55.234055 ignition[1416]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:20:55.234055 ignition[1416]: INFO : PUT result: OK Jan 17 00:20:55.241415 ignition[1416]: INFO : umount: umount passed Jan 17 00:20:55.241415 ignition[1416]: INFO : Ignition finished successfully Jan 17 00:20:55.234519 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:20:55.234668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:20:55.242925 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:20:55.243077 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:20:55.244267 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:20:55.244338 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:20:55.245993 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 17 00:20:55.246064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:20:55.247467 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:20:55.247546 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:20:55.250948 systemd[1]: Stopped target network.target - Network. Jan 17 00:20:55.251422 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:20:55.251505 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:20:55.252006 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:20:55.252463 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:20:55.253144 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:20:55.254143 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:20:55.255159 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:20:55.256208 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:20:55.256285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:20:55.257357 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:20:55.257411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:20:55.258347 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:20:55.258414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:20:55.259425 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:20:55.259486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:20:55.260155 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:20:55.260790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:20:55.262985 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:20:55.264316 systemd-networkd[1173]: eth0: DHCPv6 lease lost Jan 17 00:20:55.266673 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:20:55.266810 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:20:55.268266 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:20:55.268370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:20:55.276368 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:20:55.277326 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:20:55.277406 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:20:55.279916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:20:55.281213 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:20:55.281374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:20:55.292476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:20:55.292595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:20:55.294752 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:20:55.295317 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:20:55.296649 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:20:55.296721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 00:20:55.298029 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:20:55.298227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:20:55.300548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:20:55.300637 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:20:55.301422 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:20:55.301471 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:20:55.302620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:20:55.302682 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:20:55.303768 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:20:55.303828 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:20:55.305064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:20:55.305125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:20:55.313588 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:20:55.314265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:20:55.314366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:20:55.315099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:20:55.315163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:20:55.318161 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:20:55.318311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:20:55.325554 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:20:55.325690 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:20:55.417583 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:20:55.417713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:20:55.419036 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:20:55.419576 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:20:55.419667 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:20:55.433526 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:20:55.442321 systemd[1]: Switching root. Jan 17 00:20:55.478195 systemd-journald[179]: Journal stopped Jan 17 00:20:56.955179 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:20:56.957346 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:20:56.957392 kernel: SELinux: policy capability open_perms=1 Jan 17 00:20:56.957422 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:20:56.957441 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:20:56.957469 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:20:56.957492 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:20:56.957519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:20:56.957539 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:20:56.957567 kernel: audit: type=1403 audit(1768609255.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:20:56.957596 systemd[1]: Successfully loaded SELinux policy in 41.416ms. Jan 17 00:20:56.957633 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.139ms. Jan 17 00:20:56.957658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:20:56.957679 systemd[1]: Detected virtualization amazon. Jan 17 00:20:56.957701 systemd[1]: Detected architecture x86-64. Jan 17 00:20:56.957721 systemd[1]: Detected first boot. Jan 17 00:20:56.957741 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:20:56.957760 zram_generator::config[1461]: No configuration found. Jan 17 00:20:56.957780 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:20:56.957799 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:20:56.957822 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:20:56.957843 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:20:56.957864 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:20:56.957892 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:20:56.957914 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:20:56.957934 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:20:56.957956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:20:56.957978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:20:56.958000 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:20:56.958023 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:20:56.958044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:20:56.958066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:20:56.958088 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:20:56.958111 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:20:56.958133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
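"Initializing machine ID from VM UUID" means the first-boot machine ID is derived from the hypervisor-provided DMI product UUID rather than generated randomly. A sketch of where that value typically lives on an EC2/KVM guest; the sysfs path is an assumption and varies by platform:

    import pathlib
    import uuid

    # Hypervisor-provided product UUID exposed through DMI/SMBIOS.
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    # A machine ID is the same 128-bit value rendered as 32 lowercase hex digits.
    print(uuid.UUID(raw).hex)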
Jan 17 00:20:56.958155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:20:56.958175 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:20:56.958196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:20:56.958220 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:20:56.958258 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:20:56.958279 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:20:56.958300 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:20:56.958321 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:20:56.958342 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:20:56.958363 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:20:56.958383 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:20:56.958407 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:20:56.958428 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:20:56.958449 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:20:56.958470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:20:56.958491 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:20:56.958513 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:20:56.958535 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:20:56.958556 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:20:56.958577 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:20:56.958601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:56.958622 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:20:56.958643 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:20:56.958664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:20:56.958686 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:20:56.958707 systemd[1]: Reached target machines.target - Containers. Jan 17 00:20:56.958729 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:20:56.958750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:20:56.958775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:20:56.958796 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:20:56.958817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:20:56.958838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:20:56.958860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:20:56.958881 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:20:56.958902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:20:56.958924 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:20:56.958948 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:20:56.958967 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:20:56.958983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:20:56.959003 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:20:56.959027 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:20:56.959046 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:20:56.959065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:20:56.959085 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:20:56.959105 kernel: loop: module loaded Jan 17 00:20:56.959127 kernel: fuse: init (API version 7.39) Jan 17 00:20:56.959146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:20:56.959166 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:20:56.959186 systemd[1]: Stopped verity-setup.service. Jan 17 00:20:56.959207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:56.959228 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:20:56.961312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:20:56.961340 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:20:56.961362 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:20:56.961390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:20:56.961410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:20:56.961428 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:20:56.961447 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:20:56.961472 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:20:56.961491 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:20:56.961510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:20:56.961529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:20:56.961549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:20:56.961569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:20:56.961592 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:20:56.961612 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:20:56.961631 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:20:56.961651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:20:56.961669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:20:56.961689 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
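The modprobe@dm_mod/drm/efi_pstore/fuse/loop units above load optional kernel modules on demand; the "fuse: init" and "loop: module loaded" kernel lines confirm two of them arrived. One quick way to verify this after boot is to read /proc/modules (modules built into the kernel never appear there):

    def loaded_modules() -> set[str]:
        """Return the names of currently loaded kernel modules from /proc/modules."""
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    mods = loaded_modules()
    for name in ("fuse", "loop", "dm_mod", "configfs"):
        print(name, "loaded" if name in mods else "not loaded (possibly built in)")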
Jan 17 00:20:56.961710 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:20:56.961729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:20:56.961749 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:20:56.961771 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:20:56.961835 systemd-journald[1560]: Collecting audit messages is disabled. Jan 17 00:20:56.961877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:20:56.961898 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:20:56.961917 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:20:56.961938 systemd-journald[1560]: Journal started Jan 17 00:20:56.961980 systemd-journald[1560]: Runtime Journal (/run/log/journal/ec235b4cdfa1afac6afbd05c786384a0) is 4.7M, max 38.2M, 33.4M free. Jan 17 00:20:56.566988 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:20:56.590754 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:20:56.968267 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:20:56.591185 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:20:56.980268 kernel: ACPI: bus type drm_connector registered Jan 17 00:20:56.980355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:20:56.984437 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:20:56.984525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:20:57.001538 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:20:57.001626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:20:57.007268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:20:57.014279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:20:57.022272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:20:57.035278 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:20:57.053275 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:20:57.057279 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:20:57.068266 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:20:57.074652 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:20:57.075326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:20:57.080052 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:20:57.081832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:20:57.086301 kernel: loop0: detected capacity change from 0 to 61336 Jan 17 00:20:57.087639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 17 00:20:57.089916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:20:57.146543 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:20:57.156080 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:20:57.169090 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:20:57.176610 systemd-journald[1560]: Time spent on flushing to /var/log/journal/ec235b4cdfa1afac6afbd05c786384a0 is 107.851ms for 993 entries. Jan 17 00:20:57.176610 systemd-journald[1560]: System Journal (/var/log/journal/ec235b4cdfa1afac6afbd05c786384a0) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:20:57.305209 systemd-journald[1560]: Received client request to flush runtime journal. Jan 17 00:20:57.306650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:20:57.306692 kernel: loop1: detected capacity change from 0 to 229808 Jan 17 00:20:57.188831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:20:57.207839 udevadm[1574]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:20:57.276298 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:20:57.288435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:20:57.311718 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:20:57.318813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:20:57.321132 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:20:57.353092 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jan 17 00:20:57.353123 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jan 17 00:20:57.365910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:20:57.607271 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:20:57.771026 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:20:57.913281 kernel: loop4: detected capacity change from 0 to 61336 Jan 17 00:20:57.959964 kernel: loop5: detected capacity change from 0 to 229808 Jan 17 00:20:57.993269 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 00:20:57.996417 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:20:58.004498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:20:58.022468 ldconfig[1568]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:20:58.029270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:20:58.034599 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:20:58.039532 systemd-udevd[1618]: Using default interface naming scheme 'v255'. Jan 17 00:20:58.067227 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:20:58.067983 (sd-merge)[1616]: Merged extensions into '/usr'. Jan 17 00:20:58.074425 systemd[1]: Reloading requested from client PID 1572 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:20:58.074446 systemd[1]: Reloading... Jan 17 00:20:58.184328 zram_generator::config[1659]: No configuration found. 
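The "(sd-merge)" lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images onto /usr; the kubernetes image is picked up because Ignition wrote the /etc/extensions/kubernetes.raw symlink earlier. A small sketch that lists candidate extension images the way sysext discovers them; the directory list is an assumption and may differ by systemd version:

    import pathlib

    # Directories systemd-sysext scans for extension images (illustrative subset).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(pathlib.Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.iterdir()):
            # Both raw disk images (*.raw) and plain directory trees are accepted.
            print(f"{d}: {image.name}")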
Jan 17 00:20:58.181435 (udev-worker)[1646]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:20:58.299259 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 00:20:58.309259 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:20:58.321383 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:20:58.330260 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 17 00:20:58.358321 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:20:58.360260 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 17 00:20:58.491331 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:20:58.499261 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1639) Jan 17 00:20:58.564564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:20:58.701631 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:20:58.702369 systemd[1]: Reloading finished in 627 ms. Jan 17 00:20:58.727839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:20:58.728741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:20:58.743016 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:20:58.764093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:20:58.769471 systemd[1]: Starting ensure-sysext.service... Jan 17 00:20:58.773287 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:20:58.783492 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:20:58.789495 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:20:58.802523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:20:58.810462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:20:58.815515 lvm[1805]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:20:58.822483 systemd[1]: Reloading requested from client PID 1804 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:20:58.822508 systemd[1]: Reloading... Jan 17 00:20:58.869652 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:20:58.870206 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:20:58.872151 systemd-tmpfiles[1808]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:20:58.872747 systemd-tmpfiles[1808]: ACLs are not supported, ignoring. Jan 17 00:20:58.873070 systemd-tmpfiles[1808]: ACLs are not supported, ignoring. Jan 17 00:20:58.878392 systemd-tmpfiles[1808]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:20:58.878572 systemd-tmpfiles[1808]: Skipping /boot Jan 17 00:20:58.899781 systemd-tmpfiles[1808]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 17 00:20:58.899950 systemd-tmpfiles[1808]: Skipping /boot Jan 17 00:20:58.933261 zram_generator::config[1840]: No configuration found. Jan 17 00:20:59.095311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:20:59.172198 systemd[1]: Reloading finished in 349 ms. Jan 17 00:20:59.200111 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:20:59.201654 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:20:59.203179 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:20:59.204662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:20:59.213752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:20:59.222664 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:20:59.226622 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:20:59.231698 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:20:59.237304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:20:59.243551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:20:59.246910 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:20:59.255259 lvm[1902]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:20:59.259724 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:20:59.266903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:59.267203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:20:59.278707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:20:59.282524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:20:59.288435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:20:59.289455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:20:59.289633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:59.294550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:59.294865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:20:59.295383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:20:59.295523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:59.311962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:20:59.313754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:20:59.327547 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:20:59.329492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:20:59.329950 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:20:59.331869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:20:59.335898 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:20:59.336117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:20:59.342998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:20:59.351081 systemd[1]: Finished ensure-sysext.service. Jan 17 00:20:59.352419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:20:59.359517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:20:59.361517 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:20:59.363536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:20:59.363748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:20:59.378518 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:20:59.379292 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:20:59.382159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:20:59.382326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:20:59.402600 augenrules[1931]: No rules Jan 17 00:20:59.405902 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:20:59.407260 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:20:59.428652 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:20:59.430305 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:20:59.444594 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:20:59.447005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:20:59.479295 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:20:59.514543 systemd-resolved[1906]: Positive Trust Anchors: Jan 17 00:20:59.514937 systemd-resolved[1906]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:20:59.514987 systemd-resolved[1906]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:20:59.523548 systemd-resolved[1906]: Defaulting to hostname 'linux'. Jan 17 00:20:59.525744 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:20:59.526573 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:20:59.527554 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:20:59.528341 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:20:59.528959 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:20:59.529889 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:20:59.530739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:20:59.531414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:20:59.531494 systemd-networkd[1807]: lo: Link UP Jan 17 00:20:59.531500 systemd-networkd[1807]: lo: Gained carrier Jan 17 00:20:59.532350 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:20:59.532395 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:20:59.533127 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:20:59.533679 systemd-networkd[1807]: Enumeration completed Jan 17 00:20:59.534558 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:20:59.534648 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:20:59.535489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:20:59.538151 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:20:59.547663 systemd-networkd[1807]: eth0: Link UP Jan 17 00:20:59.548111 systemd-networkd[1807]: eth0: Gained carrier Jan 17 00:20:59.548988 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:20:59.568900 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:20:59.571907 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:20:59.572619 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:20:59.573216 systemd[1]: Reached target network.target - Network. Jan 17 00:20:59.573368 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.16.10/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:20:59.573722 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:20:59.574184 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 00:20:59.574742 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:20:59.574776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:20:59.593426 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:20:59.596512 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:20:59.619526 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:20:59.622396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:20:59.637816 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:20:59.639598 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:20:59.653498 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:20:59.681048 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:20:59.711407 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:20:59.723417 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:20:59.730451 jq[1953]: false Jan 17 00:20:59.730596 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:20:59.736322 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:20:59.748675 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:20:59.760515 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:20:59.763007 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:20:59.765785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:20:59.775672 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:20:59.781690 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:20:59.796150 ntpd[1956]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: ---------------------------------------------------- Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: corporation. 
Support and training for ntp-4 are Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: available at https://www.nwtime.org/support Jan 17 00:20:59.799104 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: ---------------------------------------------------- Jan 17 00:20:59.796190 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:20:59.796200 ntpd[1956]: ---------------------------------------------------- Jan 17 00:20:59.796211 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:20:59.796221 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:20:59.796248 ntpd[1956]: corporation. Support and training for ntp-4 are Jan 17 00:20:59.796258 ntpd[1956]: available at https://www.nwtime.org/support Jan 17 00:20:59.796269 ntpd[1956]: ---------------------------------------------------- Jan 17 00:20:59.807304 ntpd[1956]: proto: precision = 0.064 usec (-24) Jan 17 00:20:59.808423 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: proto: precision = 0.064 usec (-24) Jan 17 00:20:59.810984 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:20:59.811265 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:20:59.815608 extend-filesystems[1954]: Found loop4 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found loop5 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found loop6 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found loop7 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p1 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p2 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p3 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found usr Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p4 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p6 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p7 Jan 17 00:20:59.815608 extend-filesystems[1954]: Found nvme0n1p9 Jan 17 00:20:59.815608 extend-filesystems[1954]: Checking size of /dev/nvme0n1p9 Jan 17 00:20:59.813705 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:20:59.969 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:20:59.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.004 INFO Fetch successful Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.005 INFO Fetch successful Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.007 INFO Fetch successful Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:21:00.053062 coreos-metadata[1951]: Jan 17 00:21:00.020 INFO Fetch successful Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: basedate set to 2026-01-04 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listen normally on 3 eth0 172.31.16.10:123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: bind(21) AF_INET6 fe80::4ce:11ff:fedc:3f93%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: unable to create socket on eth0 (5) for fe80::4ce:11ff:fedc:3f93%2#123 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: failed to init interface for address fe80::4ce:11ff:fedc:3f93%2 Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: Listening on routing socket on fd #21 for interface updates Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:00.054281 ntpd[1956]: 17 Jan 00:20:59 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:00.054813 extend-filesystems[1954]: Resized partition /dev/nvme0n1p9 Jan 17 00:20:59.818507 ntpd[1956]: basedate set to 2026-01-04 Jan 17 00:20:59.813934 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 00:21:00.160477 extend-filesystems[1993]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:21:00.199524 coreos-metadata[1951]: Jan 17 00:21:00.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:21:00.199524 coreos-metadata[1951]: Jan 17 00:21:00.085 INFO Fetch failed with 404: resource not found Jan 17 00:21:00.199524 coreos-metadata[1951]: Jan 17 00:21:00.085 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:21:00.199524 coreos-metadata[1951]: Jan 17 00:21:00.122 INFO Fetch successful Jan 17 00:21:00.199524 coreos-metadata[1951]: Jan 17 00:21:00.122 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:20:59.818531 ntpd[1956]: gps base set to 2026-01-04 (week 2400) Jan 17 00:20:59.865364 (ntainerd)[1988]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:21:00.200286 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:21:00.200338 jq[1969]: true Jan 17 00:20:59.837996 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:00.046722 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:20:59.838158 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:00.148463 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:21:00.211590 tar[1976]: linux-amd64/LICENSE Jan 17 00:21:00.211590 tar[1976]: linux-amd64/helm Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.205 INFO Fetch successful Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.226 INFO Fetch successful Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.232 INFO Fetch successful Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.232 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:21:00.248559 coreos-metadata[1951]: Jan 17 00:21:00.240 INFO Fetch successful Jan 17 00:20:59.838655 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:00.148716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:21:00.249031 update_engine[1968]: I20260117 00:21:00.200729 1968 main.cc:92] Flatcar Update Engine starting Jan 17 00:20:59.838701 ntpd[1956]: Listen normally on 3 eth0 172.31.16.10:123 Jan 17 00:21:00.167378 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:20:59.838745 ntpd[1956]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:00.167437 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:20:59.838797 ntpd[1956]: bind(21) AF_INET6 fe80::4ce:11ff:fedc:3f93%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:00.190469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 17 00:20:59.838821 ntpd[1956]: unable to create socket on eth0 (5) for fe80::4ce:11ff:fedc:3f93%2#123 Jan 17 00:21:00.190505 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:20:59.838838 ntpd[1956]: failed to init interface for address fe80::4ce:11ff:fedc:3f93%2 Jan 17 00:20:59.838871 ntpd[1956]: Listening on routing socket on fd #21 for interface updates Jan 17 00:20:59.953787 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:20:59.953828 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:00.039656 dbus-daemon[1952]: [system] SELinux support is enabled Jan 17 00:21:00.149767 dbus-daemon[1952]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:00.267630 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:21:00.276853 update_engine[1968]: I20260117 00:21:00.272013 1968 update_check_scheduler.cc:74] Next update check in 10m36s Jan 17 00:21:00.276902 jq[1992]: true Jan 17 00:21:00.191095 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:21:00.283030 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:21:00.300783 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:21:00.380279 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1622) Jan 17 00:21:00.437147 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:21:00.489271 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:21:00.547782 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:21:00.579068 extend-filesystems[1993]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:21:00.579068 extend-filesystems[1993]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:21:00.579068 extend-filesystems[1993]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:21:00.548035 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:21:00.608653 extend-filesystems[1954]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:21:00.588222 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:21:00.589798 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:21:00.783706 bash[2075]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:00.787295 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
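The resize2fs entries above record an on-line grow of the root filesystem on /dev/nvme0n1p9 from 553472 to 3587067 4 KiB blocks, i.e. roughly 2.1 GiB to 13.7 GiB. resize2fs can do this while the filesystem is mounted read-write; a hedged sketch of driving the same check-and-grow from Python is below, assuming the resize2fs and blockdev binaries from e2fsprogs/util-linux are present.

    import subprocess

    def grow_mounted_ext4(device="/dev/nvme0n1p9"):
        """Grow a mounted ext4 filesystem to fill its backing partition."""
        # blockdev --getsize64 prints the partition size in bytes (logging only).
        part_bytes = int(subprocess.check_output(
            ["blockdev", "--getsize64", device]).strip())
        print(f"{device}: partition is {part_bytes} bytes, resizing filesystem")
        # Without an explicit size argument resize2fs grows the filesystem to
        # the size of the device; on a mounted ext4 volume this happens on-line.
        subprocess.run(["resize2fs", device], check=True)

    if __name__ == "__main__":
        grow_mounted_ext4()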
Jan 17 00:21:00.796179 systemd-logind[1964]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:21:00.796220 systemd-logind[1964]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:21:00.798190 ntpd[1956]: 17 Jan 00:21:00 ntpd[1956]: bind(24) AF_INET6 fe80::4ce:11ff:fedc:3f93%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:00.798190 ntpd[1956]: 17 Jan 00:21:00 ntpd[1956]: unable to create socket on eth0 (6) for fe80::4ce:11ff:fedc:3f93%2#123 Jan 17 00:21:00.798190 ntpd[1956]: 17 Jan 00:21:00 ntpd[1956]: failed to init interface for address fe80::4ce:11ff:fedc:3f93%2 Jan 17 00:21:00.797731 ntpd[1956]: bind(24) AF_INET6 fe80::4ce:11ff:fedc:3f93%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:00.796878 systemd[1]: Starting sshkeys.service... Jan 17 00:21:00.797767 ntpd[1956]: unable to create socket on eth0 (6) for fe80::4ce:11ff:fedc:3f93%2#123 Jan 17 00:21:00.797121 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:21:00.797783 ntpd[1956]: failed to init interface for address fe80::4ce:11ff:fedc:3f93%2 Jan 17 00:21:00.804110 systemd-logind[1964]: New seat seat0. Jan 17 00:21:00.805873 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:21:00.885627 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:21:00.892722 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:21:01.020523 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:21:01.117700 locksmithd[2015]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:21:01.149640 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:21:01.149857 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:21:01.157298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:21:01.162602 systemd-networkd[1807]: eth0: Gained IPv6LL Jan 17 00:21:01.166645 dbus-daemon[1952]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2007 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:01.172898 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:21:01.181660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:21:01.185053 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:21:01.185846 coreos-metadata[2123]: Jan 17 00:21:01.185 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:01.190207 coreos-metadata[2123]: Jan 17 00:21:01.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:21:01.192215 coreos-metadata[2123]: Jan 17 00:21:01.191 INFO Fetch successful Jan 17 00:21:01.192215 coreos-metadata[2123]: Jan 17 00:21:01.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:21:01.193004 coreos-metadata[2123]: Jan 17 00:21:01.192 INFO Fetch successful Jan 17 00:21:01.197014 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
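The coreos-metadata-sshkeys entries fetch the instance's registered public keys from the metadata service (the public-keys index, then public-keys/<n>/openssh-key) and install them for the core user, which is what produces the Updated "/home/core/.ssh/authorized_keys" lines. A self-contained sketch of that step is below; the function names are made up and the real agent is Flatcar's coreos-metadata binary, so this only illustrates the shape of the work.

    import os
    import urllib.request

    IMDS = "http://169.254.169.254"

    def _imds(path, method="GET", headers=None):
        req = urllib.request.Request(f"{IMDS}/{path}", method=method,
                                     headers=headers or {})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def write_authorized_keys(user_home="/home/core"):
        """Fetch the instance's EC2 public keys and install them for 'core'."""
        token = _imds("latest/api/token", method="PUT",
                      headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
        auth = {"X-aws-ec2-metadata-token": token}
        # The public-keys index lists entries like "0=my-keypair"; the key
        # body lives under public-keys/<index>/openssh-key.
        index = _imds("2021-01-03/meta-data/public-keys", headers=auth)
        keys = [
            _imds(f"2021-01-03/meta-data/public-keys/"
                  f"{line.split('=', 1)[0]}/openssh-key", headers=auth)
            for line in index.splitlines() if line
        ]
        ssh_dir = os.path.join(user_home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        with open(path, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(path, 0o600)
        print(f'Updated "{path}"')

    if __name__ == "__main__":
        write_authorized_keys()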
Jan 17 00:21:01.199824 unknown[2123]: wrote ssh authorized keys file for user: core Jan 17 00:21:01.208752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:01.222687 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:21:01.239050 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:21:01.247725 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:21:01.248009 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:21:01.276875 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:21:01.287229 polkitd[2155]: Started polkitd version 121 Jan 17 00:21:01.306299 containerd[1988]: time="2026-01-17T00:21:01.301466093Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:21:01.359723 polkitd[2155]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:21:01.359825 polkitd[2155]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:21:01.375003 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:01.374860 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:21:01.371887 polkitd[2155]: Finished loading, compiling and executing 2 rules Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.378754695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.380980536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381027249Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381052863Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381266571Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381290245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381365100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381384347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381615459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381638787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.384915 containerd[1988]: time="2026-01-17T00:21:01.381660559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:01.383762 systemd[1]: Finished sshkeys.service. Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.381676734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.381771947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.382145198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.382419151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.382475878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.382666563Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:21:01.385683 containerd[1988]: time="2026-01-17T00:21:01.382767679Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:21:01.396044 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.401839930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402087110Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402128391Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402156910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402175707Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402402638Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402806481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402954431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.402984107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.403005041Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.403026098Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.403045753Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.403065343Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.405039 containerd[1988]: time="2026-01-17T00:21:01.403087479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.395250 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403110464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403134051Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403156599Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403230503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403278275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403299843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403432258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403452872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403470926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403490623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403509355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403529448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403549000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.411847 containerd[1988]: time="2026-01-17T00:21:01.403571502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.401002 polkitd[2155]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:21:01.415277 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403592542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403610559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403629867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403652926Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403686088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403705065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403721734Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403774394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403803423Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403820290Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403839209Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403854839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403874427Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:21:01.417061 containerd[1988]: time="2026-01-17T00:21:01.403889245Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:21:01.417868 containerd[1988]: time="2026-01-17T00:21:01.403913378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:01.417545 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:21:01.423992 containerd[1988]: time="2026-01-17T00:21:01.407393188Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:21:01.427460 containerd[1988]: time="2026-01-17T00:21:01.424400258Z" level=info msg="Connect containerd service" Jan 17 00:21:01.427893 containerd[1988]: time="2026-01-17T00:21:01.427840023Z" level=info msg="using legacy CRI server" Jan 17 00:21:01.429014 containerd[1988]: time="2026-01-17T00:21:01.428967372Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:21:01.430691 systemd[1]: Started getty@tty1.service - Getty on tty1. 
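The CRI plugin dump above shows the effective runtime configuration: overlayfs snapshotter, runc as the default runtime with SystemdCgroup=true, CNI binaries under /opt/cni/bin with configs in /etc/cni/net.d, and registry.k8s.io/pause:3.8 as the sandbox image. A hedged sketch that writes an equivalent minimal containerd v2 config follows; the values mirror the log, but treat the file location as an assumption for your image rather than how Flatcar ships its defaults.

    # Assumption: containerd on this host reads /etc/containerd/config.toml;
    # adjust the path for your distribution. Values mirror the CRI plugin dump
    # in the log (overlayfs, runc with SystemdCgroup, pause:3.8, /opt/cni/bin).
    CONFIG_TOML = """\
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
    """

    with open("/etc/containerd/config.toml", "w") as f:
        f.write(CONFIG_TOML)

The "failed to load cni during init" error logged shortly after this is expected at this stage, since /etc/cni/net.d is still empty until a network plugin drops its config there.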
Jan 17 00:21:01.433398 containerd[1988]: time="2026-01-17T00:21:01.432467314Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:21:01.433829 containerd[1988]: time="2026-01-17T00:21:01.433786808Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:21:01.434493 containerd[1988]: time="2026-01-17T00:21:01.434439396Z" level=info msg="Start subscribing containerd event" Jan 17 00:21:01.436825 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:21:01.437821 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:21:01.438613 containerd[1988]: time="2026-01-17T00:21:01.438570739Z" level=info msg="Start recovering state" Jan 17 00:21:01.439089 containerd[1988]: time="2026-01-17T00:21:01.438800093Z" level=info msg="Start event monitor" Jan 17 00:21:01.441467 containerd[1988]: time="2026-01-17T00:21:01.441423632Z" level=info msg="Start snapshots syncer" Jan 17 00:21:01.442722 containerd[1988]: time="2026-01-17T00:21:01.442687590Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:21:01.442934 containerd[1988]: time="2026-01-17T00:21:01.442918005Z" level=info msg="Start streaming server" Jan 17 00:21:01.450994 containerd[1988]: time="2026-01-17T00:21:01.448271076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:21:01.450994 containerd[1988]: time="2026-01-17T00:21:01.448429386Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:21:01.448620 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:21:01.451461 containerd[1988]: time="2026-01-17T00:21:01.451421969Z" level=info msg="containerd successfully booted in 0.153301s" Jan 17 00:21:01.470633 systemd-hostnamed[2007]: Hostname set to (transient) Jan 17 00:21:01.470877 systemd-resolved[1906]: System hostname changed to 'ip-172-31-16-10'. Jan 17 00:21:01.491477 amazon-ssm-agent[2150]: Initializing new seelog logger Jan 17 00:21:01.491843 amazon-ssm-agent[2150]: New Seelog Logger Creation Complete Jan 17 00:21:01.491843 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.491843 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.492229 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 processing appconfig overrides Jan 17 00:21:01.495091 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.495091 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.495380 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 processing appconfig overrides Jan 17 00:21:01.497153 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.497153 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.497153 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 processing appconfig overrides Jan 17 00:21:01.498840 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO Proxy environment variables: Jan 17 00:21:01.503806 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 00:21:01.503806 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:01.503806 amazon-ssm-agent[2150]: 2026/01/17 00:21:01 processing appconfig overrides Jan 17 00:21:01.598762 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO https_proxy: Jan 17 00:21:01.698348 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO http_proxy: Jan 17 00:21:01.795376 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO no_proxy: Jan 17 00:21:01.894229 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:21:01.897644 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:21:01.897939 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO Agent will take identity from EC2 Jan 17 00:21:01.898041 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:01.898160 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:01.898252 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:01.898338 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:21:01.898415 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:21:01.898532 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:21:01.898713 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 00:21:01.898713 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [Registrar] Starting registrar module Jan 17 00:21:01.898713 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:21:01.898713 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:21:01.898713 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:21:01.899220 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:21:01.899220 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:21:01.995253 amazon-ssm-agent[2150]: 2026-01-17 00:21:01 INFO [CredentialRefresher] Next credential rotation will be in 30.641640063066667 minutes Jan 17 00:21:02.019929 tar[1976]: linux-amd64/README.md Jan 17 00:21:02.042333 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:21:02.927525 amazon-ssm-agent[2150]: 2026-01-17 00:21:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:21:02.971386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:21:02.982399 systemd[1]: Started sshd@0-172.31.16.10:22-4.153.228.146:58890.service - OpenSSH per-connection server daemon (4.153.228.146:58890). 
Jan 17 00:21:03.026613 amazon-ssm-agent[2150]: 2026-01-17 00:21:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2196) started Jan 17 00:21:03.126926 amazon-ssm-agent[2150]: 2026-01-17 00:21:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:21:03.478676 sshd[2204]: Accepted publickey for core from 4.153.228.146 port 58890 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:03.481041 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:03.491961 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:21:03.503945 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:21:03.508758 systemd-logind[1964]: New session 1 of user core. Jan 17 00:21:03.521194 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:21:03.532133 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:21:03.536294 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:21:03.664100 systemd[2213]: Queued start job for default target default.target. Jan 17 00:21:03.674604 systemd[2213]: Created slice app.slice - User Application Slice. Jan 17 00:21:03.674650 systemd[2213]: Reached target paths.target - Paths. Jan 17 00:21:03.674671 systemd[2213]: Reached target timers.target - Timers. Jan 17 00:21:03.676127 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:21:03.697287 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:21:03.697420 systemd[2213]: Reached target sockets.target - Sockets. Jan 17 00:21:03.697436 systemd[2213]: Reached target basic.target - Basic System. Jan 17 00:21:03.697483 systemd[2213]: Reached target default.target - Main User Target. Jan 17 00:21:03.697514 systemd[2213]: Startup finished in 153ms. Jan 17 00:21:03.697644 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:21:03.705623 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:21:03.796701 ntpd[1956]: Listen normally on 7 eth0 [fe80::4ce:11ff:fedc:3f93%2]:123 Jan 17 00:21:03.797325 ntpd[1956]: 17 Jan 00:21:03 ntpd[1956]: Listen normally on 7 eth0 [fe80::4ce:11ff:fedc:3f93%2]:123 Jan 17 00:21:04.066085 systemd[1]: Started sshd@1-172.31.16.10:22-4.153.228.146:58906.service - OpenSSH per-connection server daemon (4.153.228.146:58906). Jan 17 00:21:04.587606 sshd[2224]: Accepted publickey for core from 4.153.228.146 port 58906 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:04.589178 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:04.593874 systemd-logind[1964]: New session 2 of user core. Jan 17 00:21:04.596414 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:21:04.941350 sshd[2224]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:04.944423 systemd[1]: sshd@1-172.31.16.10:22-4.153.228.146:58906.service: Deactivated successfully. Jan 17 00:21:04.947086 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:21:04.948626 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:21:04.950349 systemd-logind[1964]: Removed session 2. 
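The earlier ntpd bind failures ("Cannot assign requested address" for fe80::4ce:11ff:fedc:3f93%2) occur because ntpd starts before the link-local address on eth0 is usable, most likely while it is still being configured or is tentative under duplicate address detection; once systemd-networkd reports "eth0: Gained IPv6LL", ntpd's routing-socket watcher picks the address up and the "Listen normally on 7 eth0 [fe80::...]" line above appears. A small sketch for checking whether an interface already has a link-local address, by reading the kernel's /proc/net/if_inet6 table (Linux-specific; the function name is made up):

    def has_ipv6_link_local(ifname="eth0", table="/proc/net/if_inet6"):
        """Return True if ifname already has an fe80:: link-local address."""
        with open(table) as f:
            for line in f:
                fields = line.split()
                # Format: 32-hex-digit address, ifindex, prefix length,
                # scope, flags, device name.
                if (len(fields) == 6 and fields[5] == ifname
                        and fields[0].startswith("fe80")):
                    return True
        return False

    if __name__ == "__main__":
        print("eth0 link-local ready:", has_ipv6_link_local("eth0"))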
Jan 17 00:21:05.041391 systemd[1]: Started sshd@2-172.31.16.10:22-4.153.228.146:58920.service - OpenSSH per-connection server daemon (4.153.228.146:58920). Jan 17 00:21:05.577369 sshd[2231]: Accepted publickey for core from 4.153.228.146 port 58920 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:05.578793 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:05.583954 systemd-logind[1964]: New session 3 of user core. Jan 17 00:21:05.593500 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:21:05.959036 sshd[2231]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:05.962599 systemd[1]: sshd@2-172.31.16.10:22-4.153.228.146:58920.service: Deactivated successfully. Jan 17 00:21:05.964712 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:21:05.966221 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:21:05.967474 systemd-logind[1964]: Removed session 3. Jan 17 00:21:06.116728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:06.118804 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:21:06.120734 systemd[1]: Startup finished in 693ms (kernel) + 7.059s (initrd) + 10.385s (userspace) = 18.138s. Jan 17 00:21:06.122907 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:08.429609 systemd-resolved[1906]: Clock change detected. Flushing caches. Jan 17 00:21:09.770848 kubelet[2242]: E0117 00:21:09.770794 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:09.773778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:09.773984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:09.774370 systemd[1]: kubelet.service: Consumed 1.121s CPU time. Jan 17 00:21:17.671417 systemd[1]: Started sshd@3-172.31.16.10:22-4.153.228.146:53802.service - OpenSSH per-connection server daemon (4.153.228.146:53802). Jan 17 00:21:18.169779 sshd[2254]: Accepted publickey for core from 4.153.228.146 port 53802 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:18.171603 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:18.177264 systemd-logind[1964]: New session 4 of user core. Jan 17 00:21:18.186466 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:21:18.525475 sshd[2254]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:18.530010 systemd[1]: sshd@3-172.31.16.10:22-4.153.228.146:53802.service: Deactivated successfully. Jan 17 00:21:18.532092 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:21:18.533143 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:21:18.534244 systemd-logind[1964]: Removed session 4. Jan 17 00:21:18.625369 systemd[1]: Started sshd@4-172.31.16.10:22-4.153.228.146:53808.service - OpenSSH per-connection server daemon (4.153.228.146:53808). 
Jan 17 00:21:19.152135 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 53808 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:19.153756 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:19.158384 systemd-logind[1964]: New session 5 of user core. Jan 17 00:21:19.164452 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:21:19.528416 sshd[2261]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:19.532491 systemd[1]: sshd@4-172.31.16.10:22-4.153.228.146:53808.service: Deactivated successfully. Jan 17 00:21:19.534276 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:21:19.535058 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:21:19.536126 systemd-logind[1964]: Removed session 5. Jan 17 00:21:19.609102 systemd[1]: Started sshd@5-172.31.16.10:22-4.153.228.146:53812.service - OpenSSH per-connection server daemon (4.153.228.146:53812). Jan 17 00:21:20.009282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:21:20.016485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:20.097030 sshd[2268]: Accepted publickey for core from 4.153.228.146 port 53812 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:20.098383 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:20.105201 systemd-logind[1964]: New session 6 of user core. Jan 17 00:21:20.109450 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:21:20.234779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:20.240867 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:20.296288 kubelet[2279]: E0117 00:21:20.296115 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:20.301054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:20.301287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:20.447268 sshd[2268]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:20.451003 systemd[1]: sshd@5-172.31.16.10:22-4.153.228.146:53812.service: Deactivated successfully. Jan 17 00:21:20.452715 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:21:20.453875 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:21:20.455013 systemd-logind[1964]: Removed session 6. Jan 17 00:21:20.554616 systemd[1]: Started sshd@6-172.31.16.10:22-4.153.228.146:53818.service - OpenSSH per-connection server daemon (4.153.228.146:53818). Jan 17 00:21:21.079344 sshd[2290]: Accepted publickey for core from 4.153.228.146 port 53818 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:21.081018 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:21.086692 systemd-logind[1964]: New session 7 of user core. Jan 17 00:21:21.096452 systemd[1]: Started session-7.scope - Session 7 of User core. 
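The kubelet restart loop above keeps failing with "open /var/lib/kubelet/config.yaml: no such file or directory": the unit is enabled, but the node has not been bootstrapped yet, so the KubeletConfiguration that kubeadm normally writes to that path during init/join does not exist. For reference, a sketch that writes a minimal KubeletConfiguration by hand is below; the field names come from the kubelet.config.k8s.io/v1beta1 API, but the values are illustrative rather than what kubeadm would generate for this node.

    import os

    # kubeadm normally generates this file during 'kubeadm init'/'kubeadm join';
    # writing it by hand like this is only a sketch with illustrative values.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    """

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(KUBELET_CONFIG)

Moving settings into this file is also what the deprecation warnings later in the log ask for, where flags such as --container-runtime-endpoint are reported as "should be set via the config file specified by the Kubelet's --config flag".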
Jan 17 00:21:21.396099 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:21:21.396481 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:21.766640 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:21:21.766791 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:21:22.134105 dockerd[2308]: time="2026-01-17T00:21:22.133965240Z" level=info msg="Starting up" Jan 17 00:21:22.277148 dockerd[2308]: time="2026-01-17T00:21:22.277094405Z" level=info msg="Loading containers: start." Jan 17 00:21:22.418612 kernel: Initializing XFRM netlink socket Jan 17 00:21:22.450208 (udev-worker)[2329]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:21:22.511047 systemd-networkd[1807]: docker0: Link UP Jan 17 00:21:22.536851 dockerd[2308]: time="2026-01-17T00:21:22.535924083Z" level=info msg="Loading containers: done." Jan 17 00:21:22.561648 dockerd[2308]: time="2026-01-17T00:21:22.561510591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:21:22.561863 dockerd[2308]: time="2026-01-17T00:21:22.561828633Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:21:22.561964 dockerd[2308]: time="2026-01-17T00:21:22.561945099Z" level=info msg="Daemon has completed initialization" Jan 17 00:21:22.632301 dockerd[2308]: time="2026-01-17T00:21:22.632230041Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:21:22.632826 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:21:23.962208 containerd[1988]: time="2026-01-17T00:21:23.962138336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:21:24.582717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535137411.mount: Deactivated successfully. 
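Once dockerd logs "Daemon has completed initialization" and "API listen on /run/docker.sock", the values it reported during startup (storage-driver=overlay2, version=26.1.0) can be queried back from the running daemon. A small sketch using the Docker CLI's Go-template output via subprocess:

    import subprocess

    def docker_info(template):
        """Query the running Docker daemon through the CLI's Go-template output."""
        return subprocess.check_output(
            ["docker", "info", "--format", template], text=True).strip()

    if __name__ == "__main__":
        print("storage driver :", docker_info("{{.Driver}}"))
        print("server version :", docker_info("{{.ServerVersion}}"))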
Jan 17 00:21:26.806989 containerd[1988]: time="2026-01-17T00:21:26.806933591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:26.808040 containerd[1988]: time="2026-01-17T00:21:26.808003689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 17 00:21:26.810757 containerd[1988]: time="2026-01-17T00:21:26.809106597Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:26.812140 containerd[1988]: time="2026-01-17T00:21:26.811660870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:26.813020 containerd[1988]: time="2026-01-17T00:21:26.812987577Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.850802969s" Jan 17 00:21:26.813127 containerd[1988]: time="2026-01-17T00:21:26.813114221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:21:26.813850 containerd[1988]: time="2026-01-17T00:21:26.813809496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:21:29.247550 containerd[1988]: time="2026-01-17T00:21:29.247491176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:29.249023 containerd[1988]: time="2026-01-17T00:21:29.248852753Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 17 00:21:29.250416 containerd[1988]: time="2026-01-17T00:21:29.250088608Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:29.253357 containerd[1988]: time="2026-01-17T00:21:29.253309017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:29.254911 containerd[1988]: time="2026-01-17T00:21:29.254859819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.4410122s" Jan 17 00:21:29.255082 containerd[1988]: time="2026-01-17T00:21:29.255059117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:21:29.256126 
containerd[1988]: time="2026-01-17T00:21:29.256096039Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:21:30.534495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:21:30.541476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:30.796407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:30.800531 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:30.876211 kubelet[2520]: E0117 00:21:30.876119 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:30.880746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:30.880961 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:31.278390 containerd[1988]: time="2026-01-17T00:21:31.278176462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:31.279755 containerd[1988]: time="2026-01-17T00:21:31.279528607Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 17 00:21:31.281998 containerd[1988]: time="2026-01-17T00:21:31.280848926Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:31.283464 containerd[1988]: time="2026-01-17T00:21:31.283423290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:31.284795 containerd[1988]: time="2026-01-17T00:21:31.284759024Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.028629027s" Jan 17 00:21:31.284864 containerd[1988]: time="2026-01-17T00:21:31.284799813Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:21:31.285270 containerd[1988]: time="2026-01-17T00:21:31.285248016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:21:32.377643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928389694.mount: Deactivated successfully. 
Jan 17 00:21:33.009293 containerd[1988]: time="2026-01-17T00:21:33.009106703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:33.010848 containerd[1988]: time="2026-01-17T00:21:33.010373817Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:21:33.011768 containerd[1988]: time="2026-01-17T00:21:33.011731163Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:33.015250 containerd[1988]: time="2026-01-17T00:21:33.014420285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:33.015250 containerd[1988]: time="2026-01-17T00:21:33.014923049Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.729643868s" Jan 17 00:21:33.015250 containerd[1988]: time="2026-01-17T00:21:33.014955904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:21:33.015663 containerd[1988]: time="2026-01-17T00:21:33.015644961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:21:33.110538 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:21:33.542584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548294769.mount: Deactivated successfully. 
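Each "Pulled image" entry above reports the image size and the wall-clock time the pull took, so the effective download rate is easy to work out; a tiny sketch using the figures for the kube-apiserver and kube-proxy pulls (copied from the log) gives roughly 10 MiB/s and 17-18 MiB/s respectively.

    def pull_rate_mib_per_s(size_bytes, seconds):
        """Effective image pull throughput in MiB/s."""
        return size_bytes / seconds / (1024 ** 2)

    # Figures copied from the containerd "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.33.7": (30111311, 2.850802969),
        "kube-proxy:v1.33.7": (31929115, 1.729643868),
    }

    for image, (size_bytes, secs) in pulls.items():
        print(f"{image}: {pull_rate_mib_per_s(size_bytes, secs):.1f} MiB/s")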
Jan 17 00:21:34.988854 containerd[1988]: time="2026-01-17T00:21:34.988790925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:34.991839 containerd[1988]: time="2026-01-17T00:21:34.991779523Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 17 00:21:34.995195 containerd[1988]: time="2026-01-17T00:21:34.995071178Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:35.002511 containerd[1988]: time="2026-01-17T00:21:35.001712674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:35.003423 containerd[1988]: time="2026-01-17T00:21:35.003371677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.987640168s" Jan 17 00:21:35.003547 containerd[1988]: time="2026-01-17T00:21:35.003431447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:21:35.004242 containerd[1988]: time="2026-01-17T00:21:35.004198588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:21:35.835500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344329949.mount: Deactivated successfully. 
Jan 17 00:21:35.841928 containerd[1988]: time="2026-01-17T00:21:35.841865954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:35.842948 containerd[1988]: time="2026-01-17T00:21:35.842886828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:21:35.845202 containerd[1988]: time="2026-01-17T00:21:35.843888871Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:35.846612 containerd[1988]: time="2026-01-17T00:21:35.846564668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:35.847552 containerd[1988]: time="2026-01-17T00:21:35.847515784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 843.272855ms" Jan 17 00:21:35.847706 containerd[1988]: time="2026-01-17T00:21:35.847683620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:21:35.848512 containerd[1988]: time="2026-01-17T00:21:35.848459996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:21:36.323473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783000423.mount: Deactivated successfully. Jan 17 00:21:39.241891 containerd[1988]: time="2026-01-17T00:21:39.241817696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.243053 containerd[1988]: time="2026-01-17T00:21:39.242929763Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 17 00:21:39.244214 containerd[1988]: time="2026-01-17T00:21:39.243930095Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.247214 containerd[1988]: time="2026-01-17T00:21:39.247158786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.248714 containerd[1988]: time="2026-01-17T00:21:39.248495594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.399694019s" Jan 17 00:21:39.248870 containerd[1988]: time="2026-01-17T00:21:39.248848110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:21:41.034515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 17 00:21:41.045464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:41.363463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:41.365243 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:41.440075 kubelet[2678]: E0117 00:21:41.440021 2678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:41.443894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:41.444272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:43.361240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:43.367552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:43.401607 systemd[1]: Reloading requested from client PID 2693 ('systemctl') (unit session-7.scope)... Jan 17 00:21:43.401626 systemd[1]: Reloading... Jan 17 00:21:43.540209 zram_generator::config[2736]: No configuration found. Jan 17 00:21:43.717213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:43.807937 systemd[1]: Reloading finished in 405 ms. Jan 17 00:21:43.855723 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:21:43.855836 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:21:43.856122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:43.862635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:44.132643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:44.146072 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:21:44.206629 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:44.207068 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:21:44.207068 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
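
The first kubelet start above fails hard: run.go:72 reports that /var/lib/kubelet/config.yaml cannot be read because the file does not exist yet, the unit exits with status=1/FAILURE, and systemd keeps scheduling restarts (the counter is at 3 by this point) until cluster bootstrap later writes that file. A minimal sketch of the same pre-flight condition, using only the path taken from the log entry; the check script itself is hypothetical, not something the node actually runs.

    # Hypothetical pre-flight check mirroring the failure logged by run.go:72 above:
    # the kubelet refuses to start while /var/lib/kubelet/config.yaml is missing.
    from pathlib import Path
    import sys

    CONFIG = Path("/var/lib/kubelet/config.yaml")  # path copied from the log entry

    if not CONFIG.is_file():
        print(f"kubelet config not present yet: {CONFIG}", file=sys.stderr)
        sys.exit(1)  # corresponds to the status=1/FAILURE the unit reports
    print(f"found {CONFIG} ({CONFIG.stat().st_size} bytes)")
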
Jan 17 00:21:44.210307 kubelet[2795]: I0117 00:21:44.210213 2795 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:21:45.909720 kubelet[2795]: I0117 00:21:45.909671 2795 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:21:45.909720 kubelet[2795]: I0117 00:21:45.909708 2795 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:21:45.910370 kubelet[2795]: I0117 00:21:45.910018 2795 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:21:45.969082 kubelet[2795]: I0117 00:21:45.968969 2795 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:45.977715 kubelet[2795]: E0117 00:21:45.977587 2795 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:21:46.020085 kubelet[2795]: E0117 00:21:46.020011 2795 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:21:46.020085 kubelet[2795]: I0117 00:21:46.020077 2795 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:21:46.033912 kubelet[2795]: I0117 00:21:46.033852 2795 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:21:46.036648 kubelet[2795]: I0117 00:21:46.036429 2795 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:21:46.040387 kubelet[2795]: I0117 00:21:46.036637 2795 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:21:46.041748 kubelet[2795]: I0117 00:21:46.041693 2795 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:21:46.041748 kubelet[2795]: I0117 00:21:46.041748 2795 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:21:46.044069 kubelet[2795]: I0117 00:21:46.044014 2795 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:46.049596 kubelet[2795]: I0117 00:21:46.049447 2795 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:21:46.049596 kubelet[2795]: I0117 00:21:46.049501 2795 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:21:46.050396 kubelet[2795]: I0117 00:21:46.050362 2795 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:21:46.050396 kubelet[2795]: I0117 00:21:46.050397 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:21:46.054676 kubelet[2795]: E0117 00:21:46.054175 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-10&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:21:46.062205 kubelet[2795]: I0117 00:21:46.062162 2795 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:21:46.062805 kubelet[2795]: I0117 00:21:46.062765 2795 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jan 17 00:21:46.064215 kubelet[2795]: W0117 00:21:46.063765 2795 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:21:46.069990 kubelet[2795]: E0117 00:21:46.069760 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:21:46.071790 kubelet[2795]: I0117 00:21:46.071708 2795 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:21:46.071790 kubelet[2795]: I0117 00:21:46.071797 2795 server.go:1289] "Started kubelet" Jan 17 00:21:46.075262 kubelet[2795]: I0117 00:21:46.075199 2795 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:21:46.078401 kubelet[2795]: I0117 00:21:46.078334 2795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:21:46.079717 kubelet[2795]: I0117 00:21:46.078860 2795 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:21:46.081497 kubelet[2795]: I0117 00:21:46.081465 2795 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:21:46.088855 kubelet[2795]: I0117 00:21:46.088815 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:21:46.094507 kubelet[2795]: E0117 00:21:46.088080 2795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-10.188b5ccb1adb3136 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-10,UID:ip-172-31-16-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-10,},FirstTimestamp:2026-01-17 00:21:46.071740726 +0000 UTC m=+1.920073270,LastTimestamp:2026-01-17 00:21:46.071740726 +0000 UTC m=+1.920073270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-10,}" Jan 17 00:21:46.095797 kubelet[2795]: E0117 00:21:46.095749 2795 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:21:46.096941 kubelet[2795]: I0117 00:21:46.096917 2795 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:21:46.104310 kubelet[2795]: E0117 00:21:46.104280 2795 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:46.106149 kubelet[2795]: I0117 00:21:46.104613 2795 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:21:46.106149 kubelet[2795]: I0117 00:21:46.104636 2795 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:21:46.106149 kubelet[2795]: I0117 00:21:46.105008 2795 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:21:46.106149 kubelet[2795]: I0117 00:21:46.105086 2795 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:21:46.107055 kubelet[2795]: I0117 00:21:46.107036 2795 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:21:46.107269 kubelet[2795]: I0117 00:21:46.107236 2795 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:21:46.110113 kubelet[2795]: E0117 00:21:46.110080 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:21:46.114619 kubelet[2795]: I0117 00:21:46.113505 2795 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:21:46.122138 kubelet[2795]: E0117 00:21:46.122089 2795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": dial tcp 172.31.16.10:6443: connect: connection refused" interval="200ms" Jan 17 00:21:46.133497 kubelet[2795]: I0117 00:21:46.132853 2795 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:21:46.133497 kubelet[2795]: I0117 00:21:46.132889 2795 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:21:46.133497 kubelet[2795]: I0117 00:21:46.132925 2795 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:21:46.133497 kubelet[2795]: I0117 00:21:46.132937 2795 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:21:46.133497 kubelet[2795]: E0117 00:21:46.132994 2795 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:21:46.139111 kubelet[2795]: E0117 00:21:46.139065 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:21:46.147121 kubelet[2795]: I0117 00:21:46.147092 2795 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:21:46.147121 kubelet[2795]: I0117 00:21:46.147115 2795 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:21:46.147323 kubelet[2795]: I0117 00:21:46.147135 2795 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:46.149287 kubelet[2795]: I0117 00:21:46.149257 2795 policy_none.go:49] "None policy: Start" Jan 17 00:21:46.149287 kubelet[2795]: I0117 00:21:46.149282 2795 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:21:46.149441 kubelet[2795]: I0117 00:21:46.149303 2795 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:21:46.157172 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:21:46.167357 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:21:46.173630 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:21:46.181950 kubelet[2795]: E0117 00:21:46.181642 2795 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:21:46.182942 kubelet[2795]: I0117 00:21:46.182923 2795 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:21:46.183663 kubelet[2795]: I0117 00:21:46.183070 2795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:21:46.183663 kubelet[2795]: I0117 00:21:46.183448 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:21:46.187071 kubelet[2795]: E0117 00:21:46.187049 2795 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:21:46.187364 kubelet[2795]: E0117 00:21:46.187307 2795 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-10\" not found" Jan 17 00:21:46.251201 systemd[1]: Created slice kubepods-burstable-pod71081cc3708e116b155df6b7edbba581.slice - libcontainer container kubepods-burstable-pod71081cc3708e116b155df6b7edbba581.slice. Jan 17 00:21:46.273424 kubelet[2795]: E0117 00:21:46.271949 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:46.280249 systemd[1]: Created slice kubepods-burstable-pod2b64d7636ed6cc8ef27b7a1d0d17cc9b.slice - libcontainer container kubepods-burstable-pod2b64d7636ed6cc8ef27b7a1d0d17cc9b.slice. 
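
The eviction manager that starts its control loop above works from the HardEvictionThresholds shown in the Node Config dump earlier: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how thresholds of that shape are evaluated; the threshold values are taken from the log, while the sample node stats are invented purely for illustration.

    # Evaluate the hard eviction thresholds from the kubelet's Node Config dump
    # above against made-up node stats (the stats here are not from this node).
    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    # signal -> (available, capacity); capacity only matters for percentage thresholds
    sample_stats = {
        "memory.available":   (512 * 1024 * 1024, None),
        "nodefs.available":   (4 * 1024**3, 20 * 1024**3),
        "nodefs.inodesFree":  (900_000, 1_250_000),
        "imagefs.available":  (4 * 1024**3, 20 * 1024**3),
        "imagefs.inodesFree": (900_000, 1_250_000),
    }

    for signal, (kind, limit) in THRESHOLDS.items():
        available, capacity = sample_stats[signal]
        breached = available < limit if kind == "quantity" else available / capacity < limit
        print(f"{signal}: breached={breached}")
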
Jan 17 00:21:46.292101 kubelet[2795]: I0117 00:21:46.291706 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:46.292330 kubelet[2795]: E0117 00:21:46.292140 2795 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.10:6443/api/v1/nodes\": dial tcp 172.31.16.10:6443: connect: connection refused" node="ip-172-31-16-10" Jan 17 00:21:46.292732 kubelet[2795]: E0117 00:21:46.292708 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:46.300417 systemd[1]: Created slice kubepods-burstable-podaef961120e80899f40c828d3281e28a1.slice - libcontainer container kubepods-burstable-podaef961120e80899f40c828d3281e28a1.slice. Jan 17 00:21:46.302923 kubelet[2795]: E0117 00:21:46.302883 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:46.323068 kubelet[2795]: E0117 00:21:46.323020 2795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": dial tcp 172.31.16.10:6443: connect: connection refused" interval="400ms" Jan 17 00:21:46.407063 kubelet[2795]: I0117 00:21:46.406941 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:46.407063 kubelet[2795]: I0117 00:21:46.406995 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:46.407063 kubelet[2795]: I0117 00:21:46.407021 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef961120e80899f40c828d3281e28a1-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-10\" (UID: \"aef961120e80899f40c828d3281e28a1\") " pod="kube-system/kube-scheduler-ip-172-31-16-10" Jan 17 00:21:46.407063 kubelet[2795]: I0117 00:21:46.407047 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-ca-certs\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:46.407063 kubelet[2795]: I0117 00:21:46.407068 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:46.407403 kubelet[2795]: I0117 00:21:46.407090 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:46.407403 kubelet[2795]: I0117 00:21:46.407111 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:46.407403 kubelet[2795]: I0117 00:21:46.407147 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:46.407403 kubelet[2795]: I0117 00:21:46.407167 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:46.494959 kubelet[2795]: I0117 00:21:46.494839 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:46.496251 kubelet[2795]: E0117 00:21:46.496210 2795 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.10:6443/api/v1/nodes\": dial tcp 172.31.16.10:6443: connect: connection refused" node="ip-172-31-16-10" Jan 17 00:21:46.578131 containerd[1988]: time="2026-01-17T00:21:46.578070184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-10,Uid:71081cc3708e116b155df6b7edbba581,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:46.594155 containerd[1988]: time="2026-01-17T00:21:46.594105487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-10,Uid:2b64d7636ed6cc8ef27b7a1d0d17cc9b,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:46.604438 containerd[1988]: time="2026-01-17T00:21:46.604033011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-10,Uid:aef961120e80899f40c828d3281e28a1,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:46.726203 kubelet[2795]: E0117 00:21:46.724307 2795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": dial tcp 172.31.16.10:6443: connect: connection refused" interval="800ms" Jan 17 00:21:46.898750 kubelet[2795]: I0117 00:21:46.898718 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:46.899092 kubelet[2795]: E0117 00:21:46.899059 2795 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.10:6443/api/v1/nodes\": dial tcp 172.31.16.10:6443: connect: connection refused" node="ip-172-31-16-10" Jan 17 00:21:46.958486 kubelet[2795]: E0117 00:21:46.958376 2795 reflector.go:200] "Failed to watch" err="failed 
to list *v1.Service: Get \"https://172.31.16.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:21:47.015217 kubelet[2795]: E0117 00:21:47.015162 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:21:47.016788 kubelet[2795]: E0117 00:21:47.016641 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:21:47.042122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866253757.mount: Deactivated successfully. Jan 17 00:21:47.048902 containerd[1988]: time="2026-01-17T00:21:47.048842037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:47.050839 containerd[1988]: time="2026-01-17T00:21:47.050775495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:47.053057 containerd[1988]: time="2026-01-17T00:21:47.052994840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:21:47.054055 containerd[1988]: time="2026-01-17T00:21:47.054006921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:21:47.055482 containerd[1988]: time="2026-01-17T00:21:47.055440405Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:47.057295 containerd[1988]: time="2026-01-17T00:21:47.057231702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:21:47.058229 containerd[1988]: time="2026-01-17T00:21:47.058085137Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:47.061239 kubelet[2795]: E0117 00:21:47.060896 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-10&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:21:47.062228 containerd[1988]: time="2026-01-17T00:21:47.062093417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:47.064214 containerd[1988]: time="2026-01-17T00:21:47.063049422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 468.832503ms" Jan 17 00:21:47.066285 containerd[1988]: time="2026-01-17T00:21:47.066238282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.032665ms" Jan 17 00:21:47.067834 containerd[1988]: time="2026-01-17T00:21:47.067788541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.613659ms" Jan 17 00:21:47.267511 containerd[1988]: time="2026-01-17T00:21:47.266935041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:47.267511 containerd[1988]: time="2026-01-17T00:21:47.267001506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:47.269058 containerd[1988]: time="2026-01-17T00:21:47.267654306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:47.269058 containerd[1988]: time="2026-01-17T00:21:47.267719688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:47.269058 containerd[1988]: time="2026-01-17T00:21:47.267744212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.269058 containerd[1988]: time="2026-01-17T00:21:47.267841571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.271774 containerd[1988]: time="2026-01-17T00:21:47.267035487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.273659 containerd[1988]: time="2026-01-17T00:21:47.273491140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.275863 containerd[1988]: time="2026-01-17T00:21:47.273758191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:47.275863 containerd[1988]: time="2026-01-17T00:21:47.273871136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:47.275863 containerd[1988]: time="2026-01-17T00:21:47.273910839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.275863 containerd[1988]: time="2026-01-17T00:21:47.274067557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:47.321804 systemd[1]: Started cri-containerd-0ff9cca66d0873e7c69ea9bc48b486a960455ac4b6661245827ef8d50e6f6b2a.scope - libcontainer container 0ff9cca66d0873e7c69ea9bc48b486a960455ac4b6661245827ef8d50e6f6b2a. Jan 17 00:21:47.325065 systemd[1]: Started cri-containerd-382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85.scope - libcontainer container 382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85. Jan 17 00:21:47.334286 systemd[1]: Started cri-containerd-e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28.scope - libcontainer container e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28. Jan 17 00:21:47.419540 containerd[1988]: time="2026-01-17T00:21:47.419497594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-10,Uid:2b64d7636ed6cc8ef27b7a1d0d17cc9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28\"" Jan 17 00:21:47.430264 containerd[1988]: time="2026-01-17T00:21:47.430222642Z" level=info msg="CreateContainer within sandbox \"e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:21:47.432447 containerd[1988]: time="2026-01-17T00:21:47.432244767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-10,Uid:71081cc3708e116b155df6b7edbba581,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ff9cca66d0873e7c69ea9bc48b486a960455ac4b6661245827ef8d50e6f6b2a\"" Jan 17 00:21:47.443962 containerd[1988]: time="2026-01-17T00:21:47.443809226Z" level=info msg="CreateContainer within sandbox \"0ff9cca66d0873e7c69ea9bc48b486a960455ac4b6661245827ef8d50e6f6b2a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:21:47.458227 containerd[1988]: time="2026-01-17T00:21:47.458146172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-10,Uid:aef961120e80899f40c828d3281e28a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85\"" Jan 17 00:21:47.464044 containerd[1988]: time="2026-01-17T00:21:47.463996842Z" level=info msg="CreateContainer within sandbox \"382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:21:47.494928 containerd[1988]: time="2026-01-17T00:21:47.494877554Z" level=info msg="CreateContainer within sandbox \"e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da\"" Jan 17 00:21:47.496643 containerd[1988]: time="2026-01-17T00:21:47.496413881Z" level=info msg="CreateContainer within sandbox \"382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190\"" Jan 17 00:21:47.496839 containerd[1988]: time="2026-01-17T00:21:47.496789632Z" level=info msg="StartContainer for 
\"6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da\"" Jan 17 00:21:47.498218 containerd[1988]: time="2026-01-17T00:21:47.497984432Z" level=info msg="CreateContainer within sandbox \"0ff9cca66d0873e7c69ea9bc48b486a960455ac4b6661245827ef8d50e6f6b2a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5455e5714a3583d0396ef11fc4532774dd697b3cd1cb278daa04032e35a7553f\"" Jan 17 00:21:47.498602 containerd[1988]: time="2026-01-17T00:21:47.498571208Z" level=info msg="StartContainer for \"efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190\"" Jan 17 00:21:47.514590 containerd[1988]: time="2026-01-17T00:21:47.514535971Z" level=info msg="StartContainer for \"5455e5714a3583d0396ef11fc4532774dd697b3cd1cb278daa04032e35a7553f\"" Jan 17 00:21:47.525457 kubelet[2795]: E0117 00:21:47.525148 2795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": dial tcp 172.31.16.10:6443: connect: connection refused" interval="1.6s" Jan 17 00:21:47.554536 systemd[1]: Started cri-containerd-6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da.scope - libcontainer container 6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da. Jan 17 00:21:47.564921 systemd[1]: Started cri-containerd-efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190.scope - libcontainer container efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190. Jan 17 00:21:47.576439 systemd[1]: Started cri-containerd-5455e5714a3583d0396ef11fc4532774dd697b3cd1cb278daa04032e35a7553f.scope - libcontainer container 5455e5714a3583d0396ef11fc4532774dd697b3cd1cb278daa04032e35a7553f. Jan 17 00:21:47.615306 update_engine[1968]: I20260117 00:21:47.615237 1968 update_attempter.cc:509] Updating boot flags... 
Jan 17 00:21:47.655973 containerd[1988]: time="2026-01-17T00:21:47.654032853Z" level=info msg="StartContainer for \"6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da\" returns successfully" Jan 17 00:21:47.710559 containerd[1988]: time="2026-01-17T00:21:47.710518891Z" level=info msg="StartContainer for \"efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190\" returns successfully" Jan 17 00:21:47.711027 kubelet[2795]: I0117 00:21:47.710996 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:47.711389 kubelet[2795]: E0117 00:21:47.711357 2795 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.10:6443/api/v1/nodes\": dial tcp 172.31.16.10:6443: connect: connection refused" node="ip-172-31-16-10" Jan 17 00:21:47.715120 containerd[1988]: time="2026-01-17T00:21:47.714778656Z" level=info msg="StartContainer for \"5455e5714a3583d0396ef11fc4532774dd697b3cd1cb278daa04032e35a7553f\" returns successfully" Jan 17 00:21:47.751222 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3070) Jan 17 00:21:48.025205 kubelet[2795]: E0117 00:21:48.023381 2795 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:21:48.065998 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2827) Jan 17 00:21:48.160054 kubelet[2795]: E0117 00:21:48.160017 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:48.168517 kubelet[2795]: E0117 00:21:48.168483 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:48.173652 kubelet[2795]: E0117 00:21:48.173623 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:49.052859 kubelet[2795]: E0117 00:21:49.052819 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:21:49.128275 kubelet[2795]: E0117 00:21:49.128219 2795 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": dial tcp 172.31.16.10:6443: connect: connection refused" interval="3.2s" Jan 17 00:21:49.174954 kubelet[2795]: E0117 00:21:49.174921 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:49.177031 kubelet[2795]: E0117 00:21:49.176998 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not 
found" node="ip-172-31-16-10" Jan 17 00:21:49.313063 kubelet[2795]: I0117 00:21:49.312956 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:49.315821 kubelet[2795]: E0117 00:21:49.315783 2795 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.10:6443/api/v1/nodes\": dial tcp 172.31.16.10:6443: connect: connection refused" node="ip-172-31-16-10" Jan 17 00:21:49.525566 kubelet[2795]: E0117 00:21:49.525500 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:21:49.711040 kubelet[2795]: E0117 00:21:49.710588 2795 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-10.188b5ccb1adb3136 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-10,UID:ip-172-31-16-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-10,},FirstTimestamp:2026-01-17 00:21:46.071740726 +0000 UTC m=+1.920073270,LastTimestamp:2026-01-17 00:21:46.071740726 +0000 UTC m=+1.920073270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-10,}" Jan 17 00:21:49.908280 kubelet[2795]: E0117 00:21:49.908225 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-10&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:21:50.053095 kubelet[2795]: E0117 00:21:50.053043 2795 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:21:51.755448 kubelet[2795]: E0117 00:21:51.755413 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:51.983651 kubelet[2795]: E0117 00:21:51.983617 2795 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:52.518819 kubelet[2795]: I0117 00:21:52.518729 2795 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:52.951482 kubelet[2795]: E0117 00:21:52.951432 2795 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-10\" not found" node="ip-172-31-16-10" Jan 17 00:21:53.094082 kubelet[2795]: I0117 00:21:53.094020 2795 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-10" Jan 17 00:21:53.094082 kubelet[2795]: 
E0117 00:21:53.094074 2795 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-10\": node \"ip-172-31-16-10\" not found" Jan 17 00:21:53.129580 kubelet[2795]: E0117 00:21:53.129530 2795 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:53.230693 kubelet[2795]: E0117 00:21:53.230562 2795 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:53.331571 kubelet[2795]: E0117 00:21:53.331521 2795 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:53.432737 kubelet[2795]: E0117 00:21:53.432688 2795 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:53.611690 kubelet[2795]: I0117 00:21:53.611615 2795 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-10" Jan 17 00:21:53.622798 kubelet[2795]: E0117 00:21:53.622758 2795 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-10" Jan 17 00:21:53.622798 kubelet[2795]: I0117 00:21:53.622789 2795 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:53.625112 kubelet[2795]: E0117 00:21:53.625069 2795 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:53.625112 kubelet[2795]: I0117 00:21:53.625099 2795 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:53.627133 kubelet[2795]: E0117 00:21:53.627092 2795 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:54.059204 kubelet[2795]: I0117 00:21:54.059079 2795 apiserver.go:52] "Watching apiserver" Jan 17 00:21:54.105527 kubelet[2795]: I0117 00:21:54.105321 2795 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:21:55.633010 systemd[1]: Reloading requested from client PID 3258 ('systemctl') (unit session-7.scope)... Jan 17 00:21:55.633031 systemd[1]: Reloading... Jan 17 00:21:55.741379 zram_generator::config[3295]: No configuration found. Jan 17 00:21:55.781627 kubelet[2795]: I0117 00:21:55.781589 2795 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:55.891103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:55.996260 systemd[1]: Reloading finished in 362 ms. Jan 17 00:21:56.043293 kubelet[2795]: I0117 00:21:56.043211 2795 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:56.043487 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
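
Once the apiserver container is reachable, this kubelet finally gets through: the registration attempt at 00:21:52.518 is followed by "Successfully registered node" at 00:21:53.094, roughly seven seconds after it logged "Started kubelet". A quick sketch of that arithmetic, using timestamps copied from the entries above:

    # Elapsed time from "Started kubelet" to "Successfully registered node",
    # using klog timestamps copied from the log entries above.
    from datetime import datetime

    fmt = "%b %d %H:%M:%S.%f"
    started    = datetime.strptime("Jan 17 00:21:46.071797", fmt)  # server.go:1289
    registered = datetime.strptime("Jan 17 00:21:53.094020", fmt)  # kubelet_node_status.go:78

    print(f"registration took {(registered - started).total_seconds():.1f}s")  # ~7.0s
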
Jan 17 00:21:56.064098 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:21:56.064401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:56.064653 systemd[1]: kubelet.service: Consumed 2.047s CPU time, 130.6M memory peak, 0B memory swap peak. Jan 17 00:21:56.071593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:56.307873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:56.324808 (kubelet)[3358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:21:56.399694 kubelet[3358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:56.399694 kubelet[3358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:21:56.399694 kubelet[3358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:56.401099 kubelet[3358]: I0117 00:21:56.400102 3358 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:21:56.407678 kubelet[3358]: I0117 00:21:56.407642 3358 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:21:56.408274 kubelet[3358]: I0117 00:21:56.407919 3358 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:21:56.408711 kubelet[3358]: I0117 00:21:56.408697 3358 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:21:56.410448 kubelet[3358]: I0117 00:21:56.410165 3358 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:21:56.412754 kubelet[3358]: I0117 00:21:56.412728 3358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:56.437553 kubelet[3358]: E0117 00:21:56.437472 3358 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:21:56.438649 kubelet[3358]: I0117 00:21:56.437711 3358 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:21:56.440858 kubelet[3358]: I0117 00:21:56.440824 3358 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:21:56.441122 kubelet[3358]: I0117 00:21:56.441092 3358 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:21:56.441321 kubelet[3358]: I0117 00:21:56.441123 3358 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:21:56.441469 kubelet[3358]: I0117 00:21:56.441329 3358 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:21:56.441469 kubelet[3358]: I0117 00:21:56.441344 3358 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:21:56.441469 kubelet[3358]: I0117 00:21:56.441406 3358 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:56.441639 kubelet[3358]: I0117 00:21:56.441594 3358 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:21:56.441639 kubelet[3358]: I0117 00:21:56.441612 3358 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:21:56.441639 kubelet[3358]: I0117 00:21:56.441639 3358 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:21:56.442405 kubelet[3358]: I0117 00:21:56.441658 3358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:21:56.446634 kubelet[3358]: I0117 00:21:56.446602 3358 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:21:56.447899 kubelet[3358]: I0117 00:21:56.447872 3358 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:21:56.453834 kubelet[3358]: I0117 00:21:56.453811 3358 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:21:56.453960 kubelet[3358]: I0117 00:21:56.453862 3358 server.go:1289] "Started kubelet" Jan 17 00:21:56.457530 kubelet[3358]: I0117 00:21:56.457499 3358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:21:56.461059 kubelet[3358]: I0117 
00:21:56.460888 3358 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:21:56.470714 kubelet[3358]: I0117 00:21:56.470175 3358 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:21:56.475717 kubelet[3358]: I0117 00:21:56.475648 3358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:21:56.475934 kubelet[3358]: I0117 00:21:56.475915 3358 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:21:56.476017 kubelet[3358]: I0117 00:21:56.475926 3358 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:21:56.478204 kubelet[3358]: E0117 00:21:56.476388 3358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-10\" not found" Jan 17 00:21:56.478204 kubelet[3358]: I0117 00:21:56.476615 3358 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:21:56.478204 kubelet[3358]: I0117 00:21:56.476735 3358 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:21:56.478987 kubelet[3358]: I0117 00:21:56.478965 3358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:21:56.486131 kubelet[3358]: I0117 00:21:56.486099 3358 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:21:56.486299 kubelet[3358]: I0117 00:21:56.486217 3358 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:21:56.492620 kubelet[3358]: I0117 00:21:56.492558 3358 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:21:56.496439 kubelet[3358]: I0117 00:21:56.496364 3358 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:21:56.509128 kubelet[3358]: I0117 00:21:56.509100 3358 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:21:56.510571 kubelet[3358]: I0117 00:21:56.510548 3358 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:21:56.512574 kubelet[3358]: I0117 00:21:56.512536 3358 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
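
The podresources endpoint above is started with rate limiting of qps=100 and burstTokens=10. A toy token bucket with those two parameters, purely to illustrate what the numbers mean; the class below is illustrative and not kubelet code.

    # Toy token bucket matching the limits logged for the podresources endpoint
    # above (qps=100, burstTokens=10).
    import time

    class TokenBucket:
        def __init__(self, rate=100.0, burst=10):
            self.rate, self.burst = rate, float(burst)
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket()
    print(sum(bucket.allow() for _ in range(20)))  # ~10 pass immediately, the rest are throttled
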
Jan 17 00:21:56.512574 kubelet[3358]: I0117 00:21:56.512560 3358 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:21:56.512769 kubelet[3358]: E0117 00:21:56.512617 3358 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:21:56.553139 kubelet[3358]: I0117 00:21:56.553117 3358 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553371 3358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553398 3358 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553528 3358 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553539 3358 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553562 3358 policy_none.go:49] "None policy: Start" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553572 3358 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553581 3358 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:21:56.554301 kubelet[3358]: I0117 00:21:56.553664 3358 state_mem.go:75] "Updated machine memory state" Jan 17 00:21:56.559849 kubelet[3358]: E0117 00:21:56.558781 3358 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:21:56.559849 kubelet[3358]: I0117 00:21:56.558985 3358 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:21:56.559849 kubelet[3358]: I0117 00:21:56.558998 3358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:21:56.560933 kubelet[3358]: I0117 00:21:56.560237 3358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:21:56.567815 kubelet[3358]: E0117 00:21:56.566563 3358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:21:56.613619 kubelet[3358]: I0117 00:21:56.613576 3358 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:56.613984 kubelet[3358]: I0117 00:21:56.613941 3358 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:56.614197 kubelet[3358]: I0117 00:21:56.614160 3358 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-10" Jan 17 00:21:56.627330 kubelet[3358]: E0117 00:21:56.627292 3358 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-10\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:56.661429 kubelet[3358]: I0117 00:21:56.661395 3358 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-10" Jan 17 00:21:56.672881 kubelet[3358]: I0117 00:21:56.672835 3358 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-10" Jan 17 00:21:56.673457 kubelet[3358]: I0117 00:21:56.672954 3358 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-10" Jan 17 00:21:56.778146 kubelet[3358]: I0117 00:21:56.777810 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:56.778146 kubelet[3358]: I0117 00:21:56.777862 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:56.778146 kubelet[3358]: I0117 00:21:56.777890 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:56.778146 kubelet[3358]: I0117 00:21:56.777916 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:56.778146 kubelet[3358]: I0117 00:21:56.777938 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:56.778512 kubelet[3358]: I0117 00:21:56.777960 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/aef961120e80899f40c828d3281e28a1-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-10\" (UID: \"aef961120e80899f40c828d3281e28a1\") " pod="kube-system/kube-scheduler-ip-172-31-16-10" Jan 17 00:21:56.778512 kubelet[3358]: I0117 00:21:56.777982 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71081cc3708e116b155df6b7edbba581-ca-certs\") pod \"kube-apiserver-ip-172-31-16-10\" (UID: \"71081cc3708e116b155df6b7edbba581\") " pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:56.778512 kubelet[3358]: I0117 00:21:56.778010 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:56.778512 kubelet[3358]: I0117 00:21:56.778032 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b64d7636ed6cc8ef27b7a1d0d17cc9b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-10\" (UID: \"2b64d7636ed6cc8ef27b7a1d0d17cc9b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-10" Jan 17 00:21:57.442848 kubelet[3358]: I0117 00:21:57.442791 3358 apiserver.go:52] "Watching apiserver" Jan 17 00:21:57.477140 kubelet[3358]: I0117 00:21:57.477041 3358 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:21:57.535336 kubelet[3358]: I0117 00:21:57.535303 3358 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:57.548284 kubelet[3358]: E0117 00:21:57.548238 3358 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-10\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-10" Jan 17 00:21:57.567079 kubelet[3358]: I0117 00:21:57.566998 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-10" podStartSLOduration=2.566954107 podStartE2EDuration="2.566954107s" podCreationTimestamp="2026-01-17 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:57.56623116 +0000 UTC m=+1.234347058" watchObservedRunningTime="2026-01-17 00:21:57.566954107 +0000 UTC m=+1.235069996" Jan 17 00:21:57.581078 kubelet[3358]: I0117 00:21:57.580846 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-10" podStartSLOduration=1.580816292 podStartE2EDuration="1.580816292s" podCreationTimestamp="2026-01-17 00:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:57.579103135 +0000 UTC m=+1.247219024" watchObservedRunningTime="2026-01-17 00:21:57.580816292 +0000 UTC m=+1.248932181" Jan 17 00:21:57.593753 kubelet[3358]: I0117 00:21:57.593403 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-10" podStartSLOduration=1.5933810510000002 podStartE2EDuration="1.593381051s" podCreationTimestamp="2026-01-17 00:21:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:57.592236362 +0000 UTC m=+1.260352261" watchObservedRunningTime="2026-01-17 00:21:57.593381051 +0000 UTC m=+1.261496952" Jan 17 00:21:58.356808 sudo[2293]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:58.443079 sshd[2290]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:58.447871 systemd[1]: sshd@6-172.31.16.10:22-4.153.228.146:53818.service: Deactivated successfully. Jan 17 00:21:58.451439 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:21:58.451741 systemd[1]: session-7.scope: Consumed 5.133s CPU time, 147.2M memory peak, 0B memory swap peak. Jan 17 00:21:58.453538 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:21:58.455129 systemd-logind[1964]: Removed session 7. Jan 17 00:22:01.884220 kubelet[3358]: I0117 00:22:01.878721 3358 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:22:01.884812 containerd[1988]: time="2026-01-17T00:22:01.879356931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:22:01.885811 kubelet[3358]: I0117 00:22:01.885370 3358 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:22:02.688204 kubelet[3358]: I0117 00:22:02.686995 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8046b25a-e695-41db-9bd5-c2060c8d6c0d-cni\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688204 kubelet[3358]: I0117 00:22:02.687037 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8046b25a-e695-41db-9bd5-c2060c8d6c0d-flannel-cfg\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688204 kubelet[3358]: I0117 00:22:02.687059 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d32e86b3-0661-4ca9-8201-17815a466560-kube-proxy\") pod \"kube-proxy-xkspl\" (UID: \"d32e86b3-0661-4ca9-8201-17815a466560\") " pod="kube-system/kube-proxy-xkspl" Jan 17 00:22:02.688204 kubelet[3358]: I0117 00:22:02.687078 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8046b25a-e695-41db-9bd5-c2060c8d6c0d-run\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688204 kubelet[3358]: I0117 00:22:02.687099 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8046b25a-e695-41db-9bd5-c2060c8d6c0d-cni-plugin\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688543 kubelet[3358]: I0117 00:22:02.687120 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8046b25a-e695-41db-9bd5-c2060c8d6c0d-xtables-lock\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688543 kubelet[3358]: I0117 00:22:02.687140 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wjd\" (UniqueName: \"kubernetes.io/projected/8046b25a-e695-41db-9bd5-c2060c8d6c0d-kube-api-access-d9wjd\") pod \"kube-flannel-ds-cxskq\" (UID: \"8046b25a-e695-41db-9bd5-c2060c8d6c0d\") " pod="kube-flannel/kube-flannel-ds-cxskq" Jan 17 00:22:02.688543 kubelet[3358]: I0117 00:22:02.687167 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d32e86b3-0661-4ca9-8201-17815a466560-xtables-lock\") pod \"kube-proxy-xkspl\" (UID: \"d32e86b3-0661-4ca9-8201-17815a466560\") " pod="kube-system/kube-proxy-xkspl" Jan 17 00:22:02.688543 kubelet[3358]: I0117 00:22:02.687203 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d32e86b3-0661-4ca9-8201-17815a466560-lib-modules\") pod \"kube-proxy-xkspl\" (UID: \"d32e86b3-0661-4ca9-8201-17815a466560\") " pod="kube-system/kube-proxy-xkspl" Jan 17 00:22:02.698494 systemd[1]: Created slice kubepods-besteffort-podd32e86b3_0661_4ca9_8201_17815a466560.slice - libcontainer container kubepods-besteffort-podd32e86b3_0661_4ca9_8201_17815a466560.slice. Jan 17 00:22:02.751014 systemd[1]: Created slice kubepods-burstable-pod8046b25a_e695_41db_9bd5_c2060c8d6c0d.slice - libcontainer container kubepods-burstable-pod8046b25a_e695_41db_9bd5_c2060c8d6c0d.slice. Jan 17 00:22:02.787498 kubelet[3358]: I0117 00:22:02.787438 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8th2\" (UniqueName: \"kubernetes.io/projected/d32e86b3-0661-4ca9-8201-17815a466560-kube-api-access-w8th2\") pod \"kube-proxy-xkspl\" (UID: \"d32e86b3-0661-4ca9-8201-17815a466560\") " pod="kube-system/kube-proxy-xkspl" Jan 17 00:22:03.043588 containerd[1988]: time="2026-01-17T00:22:03.043555876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkspl,Uid:d32e86b3-0661-4ca9-8201-17815a466560,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:03.091251 containerd[1988]: time="2026-01-17T00:22:03.090762451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cxskq,Uid:8046b25a-e695-41db-9bd5-c2060c8d6c0d,Namespace:kube-flannel,Attempt:0,}" Jan 17 00:22:03.126885 containerd[1988]: time="2026-01-17T00:22:03.126538458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:03.126885 containerd[1988]: time="2026-01-17T00:22:03.126618861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:03.126885 containerd[1988]: time="2026-01-17T00:22:03.126638745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:03.126885 containerd[1988]: time="2026-01-17T00:22:03.126747797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:03.167274 containerd[1988]: time="2026-01-17T00:22:03.166861920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:03.167274 containerd[1988]: time="2026-01-17T00:22:03.166935593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:03.167274 containerd[1988]: time="2026-01-17T00:22:03.166957491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:03.167274 containerd[1988]: time="2026-01-17T00:22:03.167071542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:03.180696 systemd[1]: Started cri-containerd-0e728973389a9d794b9025b8b4dbd77657a79f01c70f57920f08f6df540a6ab4.scope - libcontainer container 0e728973389a9d794b9025b8b4dbd77657a79f01c70f57920f08f6df540a6ab4. Jan 17 00:22:03.198444 systemd[1]: Started cri-containerd-852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa.scope - libcontainer container 852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa. Jan 17 00:22:03.242732 containerd[1988]: time="2026-01-17T00:22:03.242629474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkspl,Uid:d32e86b3-0661-4ca9-8201-17815a466560,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e728973389a9d794b9025b8b4dbd77657a79f01c70f57920f08f6df540a6ab4\"" Jan 17 00:22:03.256444 containerd[1988]: time="2026-01-17T00:22:03.256051286Z" level=info msg="CreateContainer within sandbox \"0e728973389a9d794b9025b8b4dbd77657a79f01c70f57920f08f6df540a6ab4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:22:03.290572 containerd[1988]: time="2026-01-17T00:22:03.290519901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cxskq,Uid:8046b25a-e695-41db-9bd5-c2060c8d6c0d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\"" Jan 17 00:22:03.294227 containerd[1988]: time="2026-01-17T00:22:03.293532691Z" level=info msg="CreateContainer within sandbox \"0e728973389a9d794b9025b8b4dbd77657a79f01c70f57920f08f6df540a6ab4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc85b55de0263bf8ffdf885d175f2f7c35ee3b089b40ec053051d12e2ab983e8\"" Jan 17 00:22:03.300574 containerd[1988]: time="2026-01-17T00:22:03.300528222Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 17 00:22:03.303492 containerd[1988]: time="2026-01-17T00:22:03.303452451Z" level=info msg="StartContainer for \"cc85b55de0263bf8ffdf885d175f2f7c35ee3b089b40ec053051d12e2ab983e8\"" Jan 17 00:22:03.338445 systemd[1]: Started cri-containerd-cc85b55de0263bf8ffdf885d175f2f7c35ee3b089b40ec053051d12e2ab983e8.scope - libcontainer container cc85b55de0263bf8ffdf885d175f2f7c35ee3b089b40ec053051d12e2ab983e8. 
Jan 17 00:22:03.375210 containerd[1988]: time="2026-01-17T00:22:03.374723099Z" level=info msg="StartContainer for \"cc85b55de0263bf8ffdf885d175f2f7c35ee3b089b40ec053051d12e2ab983e8\" returns successfully" Jan 17 00:22:04.585850 kubelet[3358]: I0117 00:22:04.585701 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xkspl" podStartSLOduration=2.585648076 podStartE2EDuration="2.585648076s" podCreationTimestamp="2026-01-17 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:03.608748171 +0000 UTC m=+7.276864070" watchObservedRunningTime="2026-01-17 00:22:04.585648076 +0000 UTC m=+8.253763973" Jan 17 00:22:04.817824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445615166.mount: Deactivated successfully. Jan 17 00:22:04.919386 containerd[1988]: time="2026-01-17T00:22:04.919246245Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:04.921635 containerd[1988]: time="2026-01-17T00:22:04.921233750Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 17 00:22:04.929503 containerd[1988]: time="2026-01-17T00:22:04.924574538Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:04.935496 containerd[1988]: time="2026-01-17T00:22:04.935085318Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:04.945241 containerd[1988]: time="2026-01-17T00:22:04.943799946Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.64322187s" Jan 17 00:22:04.945241 containerd[1988]: time="2026-01-17T00:22:04.943853639Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 17 00:22:04.962739 containerd[1988]: time="2026-01-17T00:22:04.962672031Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 17 00:22:05.035026 containerd[1988]: time="2026-01-17T00:22:05.034727152Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61\"" Jan 17 00:22:05.039299 containerd[1988]: time="2026-01-17T00:22:05.037387228Z" level=info msg="StartContainer for \"359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61\"" Jan 17 00:22:05.088626 systemd[1]: Started cri-containerd-359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61.scope - 
libcontainer container 359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61. Jan 17 00:22:05.138623 systemd[1]: cri-containerd-359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61.scope: Deactivated successfully. Jan 17 00:22:05.140462 containerd[1988]: time="2026-01-17T00:22:05.140413677Z" level=info msg="StartContainer for \"359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61\" returns successfully" Jan 17 00:22:05.176905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61-rootfs.mount: Deactivated successfully. Jan 17 00:22:05.197954 containerd[1988]: time="2026-01-17T00:22:05.197664616Z" level=info msg="shim disconnected" id=359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61 namespace=k8s.io Jan 17 00:22:05.197954 containerd[1988]: time="2026-01-17T00:22:05.197737271Z" level=warning msg="cleaning up after shim disconnected" id=359014efdc26d4dd8ca6911270c859e7c5c672f0849348884e247ed2d2ed2f61 namespace=k8s.io Jan 17 00:22:05.197954 containerd[1988]: time="2026-01-17T00:22:05.197746392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:05.605967 containerd[1988]: time="2026-01-17T00:22:05.603849204Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 17 00:22:08.136015 containerd[1988]: time="2026-01-17T00:22:08.135953803Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:08.137880 containerd[1988]: time="2026-01-17T00:22:08.137818133Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 17 00:22:08.140284 containerd[1988]: time="2026-01-17T00:22:08.140226318Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:08.176445 containerd[1988]: time="2026-01-17T00:22:08.175756851Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:08.177600 containerd[1988]: time="2026-01-17T00:22:08.177550752Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.57365533s" Jan 17 00:22:08.177721 containerd[1988]: time="2026-01-17T00:22:08.177604862Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 17 00:22:08.185051 containerd[1988]: time="2026-01-17T00:22:08.184995078Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:22:08.207887 containerd[1988]: time="2026-01-17T00:22:08.207829977Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe\"" 
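The pull durations containerd reports (e.g. "in 2.57365533s" just above) correspond to the gap between the PullImage request and the Pulled event. A minimal Go sketch that reproduces that arithmetic from the two timestamps copied out of the entries above; the slight difference from the reported figure is expected, since containerd times the pull internally rather than from its log timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the PullImage / Pulled entries above.
	start, err := time.Parse(time.RFC3339Nano, "2026-01-17T00:22:05.603849204Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2026-01-17T00:22:08.177550752Z")
	if err != nil {
		panic(err)
	}
	// Prints 2.573701548s, close to the "in 2.57365533s" reported above.
	fmt.Println(done.Sub(start))
}
```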
Jan 17 00:22:08.209755 containerd[1988]: time="2026-01-17T00:22:08.208871939Z" level=info msg="StartContainer for \"5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe\"" Jan 17 00:22:08.245391 systemd[1]: Started cri-containerd-5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe.scope - libcontainer container 5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe. Jan 17 00:22:08.282803 systemd[1]: cri-containerd-5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe.scope: Deactivated successfully. Jan 17 00:22:08.286807 containerd[1988]: time="2026-01-17T00:22:08.286696016Z" level=info msg="StartContainer for \"5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe\" returns successfully" Jan 17 00:22:08.307013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe-rootfs.mount: Deactivated successfully. Jan 17 00:22:08.317865 kubelet[3358]: I0117 00:22:08.317840 3358 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:22:08.367430 systemd[1]: Created slice kubepods-burstable-podd508f3b7_e666_4f3b_91de_d03e4b39aaff.slice - libcontainer container kubepods-burstable-podd508f3b7_e666_4f3b_91de_d03e4b39aaff.slice. Jan 17 00:22:08.389624 systemd[1]: Created slice kubepods-burstable-podb9b0be4f_79bb_48c3_8d05_d9a9466cf729.slice - libcontainer container kubepods-burstable-podb9b0be4f_79bb_48c3_8d05_d9a9466cf729.slice. Jan 17 00:22:08.452149 containerd[1988]: time="2026-01-17T00:22:08.451733915Z" level=info msg="shim disconnected" id=5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe namespace=k8s.io Jan 17 00:22:08.452149 containerd[1988]: time="2026-01-17T00:22:08.451799031Z" level=warning msg="cleaning up after shim disconnected" id=5e0583677674bc2b3f8973c675ffbe588514ad0f1d5232f8f26e70ca8496fcfe namespace=k8s.io Jan 17 00:22:08.452149 containerd[1988]: time="2026-01-17T00:22:08.451809948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:08.458703 kubelet[3358]: I0117 00:22:08.458405 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22lq\" (UniqueName: \"kubernetes.io/projected/d508f3b7-e666-4f3b-91de-d03e4b39aaff-kube-api-access-v22lq\") pod \"coredns-674b8bbfcf-72nr2\" (UID: \"d508f3b7-e666-4f3b-91de-d03e4b39aaff\") " pod="kube-system/coredns-674b8bbfcf-72nr2" Jan 17 00:22:08.458703 kubelet[3358]: I0117 00:22:08.458463 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d508f3b7-e666-4f3b-91de-d03e4b39aaff-config-volume\") pod \"coredns-674b8bbfcf-72nr2\" (UID: \"d508f3b7-e666-4f3b-91de-d03e4b39aaff\") " pod="kube-system/coredns-674b8bbfcf-72nr2" Jan 17 00:22:08.458703 kubelet[3358]: I0117 00:22:08.458490 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9b0be4f-79bb-48c3-8d05-d9a9466cf729-config-volume\") pod \"coredns-674b8bbfcf-n4chh\" (UID: \"b9b0be4f-79bb-48c3-8d05-d9a9466cf729\") " pod="kube-system/coredns-674b8bbfcf-n4chh" Jan 17 00:22:08.458703 kubelet[3358]: I0117 00:22:08.458516 3358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqq58\" (UniqueName: \"kubernetes.io/projected/b9b0be4f-79bb-48c3-8d05-d9a9466cf729-kube-api-access-dqq58\") 
pod \"coredns-674b8bbfcf-n4chh\" (UID: \"b9b0be4f-79bb-48c3-8d05-d9a9466cf729\") " pod="kube-system/coredns-674b8bbfcf-n4chh" Jan 17 00:22:08.617139 containerd[1988]: time="2026-01-17T00:22:08.616982042Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 17 00:22:08.657739 containerd[1988]: time="2026-01-17T00:22:08.657611371Z" level=info msg="CreateContainer within sandbox \"852eb6b1fef07288e289e96cf85c02fc26b8a1b34d88486cc1ca0356bf3076fa\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6c5d5bec7730da63340eaa31488861119da97c2655be57144dc1431fff8b172d\"" Jan 17 00:22:08.659486 containerd[1988]: time="2026-01-17T00:22:08.659422644Z" level=info msg="StartContainer for \"6c5d5bec7730da63340eaa31488861119da97c2655be57144dc1431fff8b172d\"" Jan 17 00:22:08.683108 containerd[1988]: time="2026-01-17T00:22:08.682532547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72nr2,Uid:d508f3b7-e666-4f3b-91de-d03e4b39aaff,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:08.691411 systemd[1]: Started cri-containerd-6c5d5bec7730da63340eaa31488861119da97c2655be57144dc1431fff8b172d.scope - libcontainer container 6c5d5bec7730da63340eaa31488861119da97c2655be57144dc1431fff8b172d. Jan 17 00:22:08.697735 containerd[1988]: time="2026-01-17T00:22:08.697676923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4chh,Uid:b9b0be4f-79bb-48c3-8d05-d9a9466cf729,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:08.765177 containerd[1988]: time="2026-01-17T00:22:08.762299108Z" level=info msg="StartContainer for \"6c5d5bec7730da63340eaa31488861119da97c2655be57144dc1431fff8b172d\" returns successfully" Jan 17 00:22:08.839227 containerd[1988]: time="2026-01-17T00:22:08.838310812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72nr2,Uid:d508f3b7-e666-4f3b-91de-d03e4b39aaff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79bd8b8aa7e713f5de47ffc60c74554ead2ead465642b1e875071a9a404f47d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:22:08.839859 kubelet[3358]: E0117 00:22:08.839459 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bd8b8aa7e713f5de47ffc60c74554ead2ead465642b1e875071a9a404f47d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:22:08.839859 kubelet[3358]: E0117 00:22:08.839541 3358 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bd8b8aa7e713f5de47ffc60c74554ead2ead465642b1e875071a9a404f47d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-72nr2" Jan 17 00:22:08.839859 kubelet[3358]: E0117 00:22:08.839566 3358 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bd8b8aa7e713f5de47ffc60c74554ead2ead465642b1e875071a9a404f47d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-674b8bbfcf-72nr2" Jan 17 00:22:08.839859 kubelet[3358]: E0117 00:22:08.839610 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-72nr2_kube-system(d508f3b7-e666-4f3b-91de-d03e4b39aaff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-72nr2_kube-system(d508f3b7-e666-4f3b-91de-d03e4b39aaff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79bd8b8aa7e713f5de47ffc60c74554ead2ead465642b1e875071a9a404f47d0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-72nr2" podUID="d508f3b7-e666-4f3b-91de-d03e4b39aaff" Jan 17 00:22:08.855426 containerd[1988]: time="2026-01-17T00:22:08.855364380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4chh,Uid:b9b0be4f-79bb-48c3-8d05-d9a9466cf729,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48c99167588b0641267463e93a9509d948b095422425906c6f030521d781396c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:22:08.855640 kubelet[3358]: E0117 00:22:08.855604 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c99167588b0641267463e93a9509d948b095422425906c6f030521d781396c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:22:08.855736 kubelet[3358]: E0117 00:22:08.855660 3358 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c99167588b0641267463e93a9509d948b095422425906c6f030521d781396c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-n4chh" Jan 17 00:22:08.855736 kubelet[3358]: E0117 00:22:08.855679 3358 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c99167588b0641267463e93a9509d948b095422425906c6f030521d781396c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-n4chh" Jan 17 00:22:08.855807 kubelet[3358]: E0117 00:22:08.855730 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n4chh_kube-system(b9b0be4f-79bb-48c3-8d05-d9a9466cf729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n4chh_kube-system(b9b0be4f-79bb-48c3-8d05-d9a9466cf729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48c99167588b0641267463e93a9509d948b095422425906c6f030521d781396c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-n4chh" podUID="b9b0be4f-79bb-48c3-8d05-d9a9466cf729" Jan 17 00:22:09.634962 kubelet[3358]: I0117 00:22:09.633449 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cxskq" podStartSLOduration=2.751691079 podStartE2EDuration="7.633430351s" 
podCreationTimestamp="2026-01-17 00:22:02 +0000 UTC" firstStartedPulling="2026-01-17 00:22:03.297518209 +0000 UTC m=+6.965634090" lastFinishedPulling="2026-01-17 00:22:08.179257484 +0000 UTC m=+11.847373362" observedRunningTime="2026-01-17 00:22:09.633127325 +0000 UTC m=+13.301243234" watchObservedRunningTime="2026-01-17 00:22:09.633430351 +0000 UTC m=+13.301546249" Jan 17 00:22:09.854947 (udev-worker)[3921]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:09.879170 systemd-networkd[1807]: flannel.1: Link UP Jan 17 00:22:09.879179 systemd-networkd[1807]: flannel.1: Gained carrier Jan 17 00:22:11.594531 systemd-networkd[1807]: flannel.1: Gained IPv6LL Jan 17 00:22:14.428768 ntpd[1956]: Listen normally on 8 flannel.1 192.168.0.0:123 Jan 17 00:22:14.428933 ntpd[1956]: Listen normally on 9 flannel.1 [fe80::70e4:10ff:feed:f92d%4]:123 Jan 17 00:22:14.429382 ntpd[1956]: 17 Jan 00:22:14 ntpd[1956]: Listen normally on 8 flannel.1 192.168.0.0:123 Jan 17 00:22:14.429382 ntpd[1956]: 17 Jan 00:22:14 ntpd[1956]: Listen normally on 9 flannel.1 [fe80::70e4:10ff:feed:f92d%4]:123 Jan 17 00:22:20.515126 containerd[1988]: time="2026-01-17T00:22:20.515074197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72nr2,Uid:d508f3b7-e666-4f3b-91de-d03e4b39aaff,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:20.533649 containerd[1988]: time="2026-01-17T00:22:20.533598750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4chh,Uid:b9b0be4f-79bb-48c3-8d05-d9a9466cf729,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:20.819897 systemd-networkd[1807]: cni0: Link UP Jan 17 00:22:20.819912 systemd-networkd[1807]: cni0: Gained carrier Jan 17 00:22:20.829747 (udev-worker)[4049]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:20.830020 systemd-networkd[1807]: cni0: Lost carrier Jan 17 00:22:20.841707 kernel: cni0: port 1(veth6b6a2f7e) entered blocking state Jan 17 00:22:20.841811 kernel: cni0: port 1(veth6b6a2f7e) entered disabled state Jan 17 00:22:20.838434 (udev-worker)[4051]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:20.841501 systemd-networkd[1807]: veth6b6a2f7e: Link UP Jan 17 00:22:20.852886 kernel: veth6b6a2f7e: entered allmulticast mode Jan 17 00:22:20.852960 kernel: veth6b6a2f7e: entered promiscuous mode Jan 17 00:22:20.852985 kernel: cni0: port 1(veth6b6a2f7e) entered blocking state Jan 17 00:22:20.853007 kernel: cni0: port 1(veth6b6a2f7e) entered forwarding state Jan 17 00:22:20.852764 (udev-worker)[4052]: Network interface NamePolicy= disabled on kernel command line. 
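The CreatePodSandbox failures above ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") occur evidently because flannel had not yet written its subnet file when the coredns sandboxes were first attempted; once flannel.1 is up, the file appears and the retries at 00:22:20 that follow succeed. A minimal Go sketch of reading such a KEY=VALUE file, assuming the conventional FLANNEL_NETWORK/FLANNEL_SUBNET/FLANNEL_MTU/FLANNEL_IPMASQ keys (the keys and parsing below are assumptions based on flannel's documented format, not taken from this log):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadSubnetEnv reads a flannel-style subnet.env file (KEY=VALUE per line)
// and returns its entries as a map. The path and key names are the
// conventional ones; this log only shows that the file was missing at first.
func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // e.g. "no such file or directory", as seen above
	}
	defer f.Close()

	env := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Println("pod subnet:", env["FLANNEL_SUBNET"], "mtu:", env["FLANNEL_MTU"])
}
```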
Jan 17 00:22:20.860048 kernel: cni0: port 1(veth6b6a2f7e) entered disabled state Jan 17 00:22:20.860086 kernel: cni0: port 2(vetha97bda4e) entered blocking state Jan 17 00:22:20.860113 kernel: cni0: port 2(vetha97bda4e) entered disabled state Jan 17 00:22:20.860144 kernel: vetha97bda4e: entered allmulticast mode Jan 17 00:22:20.860448 systemd-networkd[1807]: vetha97bda4e: Link UP Jan 17 00:22:20.863338 kernel: vetha97bda4e: entered promiscuous mode Jan 17 00:22:20.866168 kernel: cni0: port 2(vetha97bda4e) entered blocking state Jan 17 00:22:20.866282 kernel: cni0: port 2(vetha97bda4e) entered forwarding state Jan 17 00:22:20.868785 kernel: cni0: port 2(vetha97bda4e) entered disabled state Jan 17 00:22:21.060769 kernel: cni0: port 2(vetha97bda4e) entered blocking state Jan 17 00:22:21.060884 kernel: cni0: port 2(vetha97bda4e) entered forwarding state Jan 17 00:22:21.061261 kernel: cni0: port 1(veth6b6a2f7e) entered blocking state Jan 17 00:22:21.064977 systemd-networkd[1807]: vetha97bda4e: Gained carrier Jan 17 00:22:21.065879 kernel: cni0: port 1(veth6b6a2f7e) entered forwarding state Jan 17 00:22:21.065867 systemd-networkd[1807]: cni0: Gained carrier Jan 17 00:22:21.066822 systemd-networkd[1807]: veth6b6a2f7e: Gained carrier Jan 17 00:22:21.078432 containerd[1988]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082950), "name":"cbr0", "type":"bridge"} Jan 17 00:22:21.078432 containerd[1988]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:22:21.080922 containerd[1988]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"} Jan 17 00:22:21.080922 containerd[1988]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082950), "name":"cbr0", "type":"bridge"} Jan 17 00:22:21.080922 containerd[1988]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:22:21.156988 containerd[1988]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-17T00:22:21.154643686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:21.156988 containerd[1988]: time="2026-01-17T00:22:21.154744719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:21.156988 containerd[1988]: time="2026-01-17T00:22:21.154765562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:21.156988 containerd[1988]: time="2026-01-17T00:22:21.154887415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:21.159366 containerd[1988]: time="2026-01-17T00:22:21.158707580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:21.159366 containerd[1988]: time="2026-01-17T00:22:21.158922637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:21.159366 containerd[1988]: time="2026-01-17T00:22:21.159005407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:21.159590 containerd[1988]: time="2026-01-17T00:22:21.159370527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:21.222225 systemd[1]: Started cri-containerd-edbcb705c0b14dc07fc9199320a85579aeb59c2645acd865112871c7ed56d374.scope - libcontainer container edbcb705c0b14dc07fc9199320a85579aeb59c2645acd865112871c7ed56d374. Jan 17 00:22:21.230018 systemd[1]: Started cri-containerd-2f65e4e6f26d7ae556ec55a0c583133931640c4f40ac31ab80c4a5220fd37c2c.scope - libcontainer container 2f65e4e6f26d7ae556ec55a0c583133931640c4f40ac31ab80c4a5220fd37c2c. Jan 17 00:22:21.303412 containerd[1988]: time="2026-01-17T00:22:21.303338215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72nr2,Uid:d508f3b7-e666-4f3b-91de-d03e4b39aaff,Namespace:kube-system,Attempt:0,} returns sandbox id \"edbcb705c0b14dc07fc9199320a85579aeb59c2645acd865112871c7ed56d374\"" Jan 17 00:22:21.311666 containerd[1988]: time="2026-01-17T00:22:21.311521699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n4chh,Uid:b9b0be4f-79bb-48c3-8d05-d9a9466cf729,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f65e4e6f26d7ae556ec55a0c583133931640c4f40ac31ab80c4a5220fd37c2c\"" Jan 17 00:22:21.313379 containerd[1988]: time="2026-01-17T00:22:21.313277673Z" level=info msg="CreateContainer within sandbox \"edbcb705c0b14dc07fc9199320a85579aeb59c2645acd865112871c7ed56d374\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:21.320672 containerd[1988]: time="2026-01-17T00:22:21.320532120Z" level=info msg="CreateContainer within sandbox \"2f65e4e6f26d7ae556ec55a0c583133931640c4f40ac31ab80c4a5220fd37c2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:21.377706 containerd[1988]: time="2026-01-17T00:22:21.377514968Z" level=info msg="CreateContainer within sandbox \"edbcb705c0b14dc07fc9199320a85579aeb59c2645acd865112871c7ed56d374\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7af8d86146a6207f40c65d92e104c6be68977fedafe4966e83d40db1246a6640\"" Jan 17 00:22:21.379484 containerd[1988]: time="2026-01-17T00:22:21.378757645Z" level=info msg="StartContainer for \"7af8d86146a6207f40c65d92e104c6be68977fedafe4966e83d40db1246a6640\"" Jan 17 00:22:21.388875 containerd[1988]: time="2026-01-17T00:22:21.388818303Z" level=info msg="CreateContainer within sandbox 
\"2f65e4e6f26d7ae556ec55a0c583133931640c4f40ac31ab80c4a5220fd37c2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13d9b656b4492c0bbd4e22af7f12b2888546482ba57903ecbefc56b361aa8737\"" Jan 17 00:22:21.391008 containerd[1988]: time="2026-01-17T00:22:21.390818307Z" level=info msg="StartContainer for \"13d9b656b4492c0bbd4e22af7f12b2888546482ba57903ecbefc56b361aa8737\"" Jan 17 00:22:21.424894 systemd[1]: Started cri-containerd-7af8d86146a6207f40c65d92e104c6be68977fedafe4966e83d40db1246a6640.scope - libcontainer container 7af8d86146a6207f40c65d92e104c6be68977fedafe4966e83d40db1246a6640. Jan 17 00:22:21.449533 systemd[1]: Started cri-containerd-13d9b656b4492c0bbd4e22af7f12b2888546482ba57903ecbefc56b361aa8737.scope - libcontainer container 13d9b656b4492c0bbd4e22af7f12b2888546482ba57903ecbefc56b361aa8737. Jan 17 00:22:21.488651 containerd[1988]: time="2026-01-17T00:22:21.488578408Z" level=info msg="StartContainer for \"7af8d86146a6207f40c65d92e104c6be68977fedafe4966e83d40db1246a6640\" returns successfully" Jan 17 00:22:21.516494 containerd[1988]: time="2026-01-17T00:22:21.516234917Z" level=info msg="StartContainer for \"13d9b656b4492c0bbd4e22af7f12b2888546482ba57903ecbefc56b361aa8737\" returns successfully" Jan 17 00:22:21.668768 kubelet[3358]: I0117 00:22:21.667539 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n4chh" podStartSLOduration=19.667525163 podStartE2EDuration="19.667525163s" podCreationTimestamp="2026-01-17 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:21.666996952 +0000 UTC m=+25.335112849" watchObservedRunningTime="2026-01-17 00:22:21.667525163 +0000 UTC m=+25.335641060" Jan 17 00:22:22.602554 systemd-networkd[1807]: cni0: Gained IPv6LL Jan 17 00:22:22.986392 systemd-networkd[1807]: veth6b6a2f7e: Gained IPv6LL Jan 17 00:22:23.114547 systemd-networkd[1807]: vetha97bda4e: Gained IPv6LL Jan 17 00:22:25.428697 ntpd[1956]: Listen normally on 10 cni0 192.168.0.1:123 Jan 17 00:22:25.431385 ntpd[1956]: 17 Jan 00:22:25 ntpd[1956]: Listen normally on 10 cni0 192.168.0.1:123 Jan 17 00:22:25.431385 ntpd[1956]: 17 Jan 00:22:25 ntpd[1956]: Listen normally on 11 cni0 [fe80::4cf:13ff:fec2:1337%5]:123 Jan 17 00:22:25.431385 ntpd[1956]: 17 Jan 00:22:25 ntpd[1956]: Listen normally on 12 vetha97bda4e [fe80::a001:10ff:fe05:f94%6]:123 Jan 17 00:22:25.431385 ntpd[1956]: 17 Jan 00:22:25 ntpd[1956]: Listen normally on 13 veth6b6a2f7e [fe80::f83f:65ff:fe35:a846%7]:123 Jan 17 00:22:25.428770 ntpd[1956]: Listen normally on 11 cni0 [fe80::4cf:13ff:fec2:1337%5]:123 Jan 17 00:22:25.428813 ntpd[1956]: Listen normally on 12 vetha97bda4e [fe80::a001:10ff:fe05:f94%6]:123 Jan 17 00:22:25.428842 ntpd[1956]: Listen normally on 13 veth6b6a2f7e [fe80::f83f:65ff:fe35:a846%7]:123 Jan 17 00:22:30.644733 systemd[1]: Started sshd@7-172.31.16.10:22-4.153.228.146:49334.service - OpenSSH per-connection server daemon (4.153.228.146:49334). Jan 17 00:22:31.153254 sshd[4300]: Accepted publickey for core from 4.153.228.146 port 49334 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:31.154942 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.160760 systemd-logind[1964]: New session 8 of user core. Jan 17 00:22:31.167437 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:22:31.684004 sshd[4300]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:31.687968 systemd[1]: sshd@7-172.31.16.10:22-4.153.228.146:49334.service: Deactivated successfully. Jan 17 00:22:31.690544 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:22:31.691426 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:22:31.692889 systemd-logind[1964]: Removed session 8. Jan 17 00:22:32.673635 kubelet[3358]: I0117 00:22:32.673498 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-72nr2" podStartSLOduration=30.673483327 podStartE2EDuration="30.673483327s" podCreationTimestamp="2026-01-17 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:21.682050915 +0000 UTC m=+25.350166815" watchObservedRunningTime="2026-01-17 00:22:32.673483327 +0000 UTC m=+36.341599225" Jan 17 00:22:36.771487 systemd[1]: Started sshd@8-172.31.16.10:22-4.153.228.146:43112.service - OpenSSH per-connection server daemon (4.153.228.146:43112). Jan 17 00:22:37.262874 sshd[4344]: Accepted publickey for core from 4.153.228.146 port 43112 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:37.264817 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:37.270564 systemd-logind[1964]: New session 9 of user core. Jan 17 00:22:37.274386 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:22:37.681318 sshd[4344]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:37.685382 systemd[1]: sshd@8-172.31.16.10:22-4.153.228.146:43112.service: Deactivated successfully. Jan 17 00:22:37.687825 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:22:37.690682 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:22:37.691921 systemd-logind[1964]: Removed session 9. Jan 17 00:22:42.770580 systemd[1]: Started sshd@9-172.31.16.10:22-4.153.228.146:43116.service - OpenSSH per-connection server daemon (4.153.228.146:43116). Jan 17 00:22:43.249059 sshd[4378]: Accepted publickey for core from 4.153.228.146 port 43116 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:43.250688 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:43.255297 systemd-logind[1964]: New session 10 of user core. Jan 17 00:22:43.261532 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:22:43.674844 sshd[4378]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:43.678449 systemd[1]: sshd@9-172.31.16.10:22-4.153.228.146:43116.service: Deactivated successfully. Jan 17 00:22:43.680497 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:22:43.681948 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:22:43.683231 systemd-logind[1964]: Removed session 10. Jan 17 00:22:43.766521 systemd[1]: Started sshd@10-172.31.16.10:22-4.153.228.146:43122.service - OpenSSH per-connection server daemon (4.153.228.146:43122). Jan 17 00:22:44.270656 sshd[4391]: Accepted publickey for core from 4.153.228.146 port 43122 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:44.272307 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:44.277898 systemd-logind[1964]: New session 11 of user core. 
Jan 17 00:22:44.285090 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:22:44.803660 sshd[4391]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:44.809069 systemd[1]: sshd@10-172.31.16.10:22-4.153.228.146:43122.service: Deactivated successfully. Jan 17 00:22:44.811786 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:22:44.812905 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:22:44.814089 systemd-logind[1964]: Removed session 11. Jan 17 00:22:44.865638 systemd[1]: Started sshd@11-172.31.16.10:22-4.153.228.146:50076.service - OpenSSH per-connection server daemon (4.153.228.146:50076). Jan 17 00:22:45.356698 sshd[4402]: Accepted publickey for core from 4.153.228.146 port 50076 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:45.358411 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:45.364379 systemd-logind[1964]: New session 12 of user core. Jan 17 00:22:45.369425 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:22:45.789831 sshd[4402]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:45.794329 systemd[1]: sshd@11-172.31.16.10:22-4.153.228.146:50076.service: Deactivated successfully. Jan 17 00:22:45.796947 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:22:45.797840 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:22:45.799889 systemd-logind[1964]: Removed session 12. Jan 17 00:22:50.881879 systemd[1]: Started sshd@12-172.31.16.10:22-4.153.228.146:50084.service - OpenSSH per-connection server daemon (4.153.228.146:50084). Jan 17 00:22:51.371819 sshd[4455]: Accepted publickey for core from 4.153.228.146 port 50084 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:51.373758 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:51.378423 systemd-logind[1964]: New session 13 of user core. Jan 17 00:22:51.385641 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:22:51.795087 sshd[4455]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:51.798472 systemd[1]: sshd@12-172.31.16.10:22-4.153.228.146:50084.service: Deactivated successfully. Jan 17 00:22:51.801154 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:22:51.802941 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:22:51.804521 systemd-logind[1964]: Removed session 13. Jan 17 00:22:51.898749 systemd[1]: Started sshd@13-172.31.16.10:22-4.153.228.146:50098.service - OpenSSH per-connection server daemon (4.153.228.146:50098). Jan 17 00:22:52.415542 sshd[4468]: Accepted publickey for core from 4.153.228.146 port 50098 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:52.417330 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:52.422005 systemd-logind[1964]: New session 14 of user core. Jan 17 00:22:52.430442 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:22:55.631681 sshd[4468]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:55.636575 systemd[1]: sshd@13-172.31.16.10:22-4.153.228.146:50098.service: Deactivated successfully. Jan 17 00:22:55.638864 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:22:55.639890 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit. 
Jan 17 00:22:55.641286 systemd-logind[1964]: Removed session 14. Jan 17 00:22:55.730519 systemd[1]: Started sshd@14-172.31.16.10:22-4.153.228.146:32780.service - OpenSSH per-connection server daemon (4.153.228.146:32780). Jan 17 00:22:56.279308 sshd[4499]: Accepted publickey for core from 4.153.228.146 port 32780 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:56.280946 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:56.286662 systemd-logind[1964]: New session 15 of user core. Jan 17 00:22:56.289403 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:22:57.638308 sshd[4499]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:57.641900 systemd[1]: sshd@14-172.31.16.10:22-4.153.228.146:32780.service: Deactivated successfully. Jan 17 00:22:57.643934 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:22:57.645608 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:22:57.646977 systemd-logind[1964]: Removed session 15. Jan 17 00:22:57.738698 systemd[1]: Started sshd@15-172.31.16.10:22-4.153.228.146:32784.service - OpenSSH per-connection server daemon (4.153.228.146:32784). Jan 17 00:22:58.253754 sshd[4519]: Accepted publickey for core from 4.153.228.146 port 32784 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:58.255199 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:58.261044 systemd-logind[1964]: New session 16 of user core. Jan 17 00:22:58.273450 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:22:58.865424 sshd[4519]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:58.869146 systemd[1]: sshd@15-172.31.16.10:22-4.153.228.146:32784.service: Deactivated successfully. Jan 17 00:22:58.871178 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:22:58.871948 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:22:58.873319 systemd-logind[1964]: Removed session 16. Jan 17 00:22:58.948821 systemd[1]: Started sshd@16-172.31.16.10:22-4.153.228.146:32796.service - OpenSSH per-connection server daemon (4.153.228.146:32796). Jan 17 00:22:59.431176 sshd[4530]: Accepted publickey for core from 4.153.228.146 port 32796 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:59.433396 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:59.439426 systemd-logind[1964]: New session 17 of user core. Jan 17 00:22:59.444622 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:22:59.858340 sshd[4530]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:59.863313 systemd[1]: sshd@16-172.31.16.10:22-4.153.228.146:32796.service: Deactivated successfully. Jan 17 00:22:59.866646 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:22:59.867716 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:22:59.869259 systemd-logind[1964]: Removed session 17. Jan 17 00:23:04.966648 systemd[1]: Started sshd@17-172.31.16.10:22-4.153.228.146:59106.service - OpenSSH per-connection server daemon (4.153.228.146:59106). 
Jan 17 00:23:05.500160 sshd[4563]: Accepted publickey for core from 4.153.228.146 port 59106 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s
Jan 17 00:23:05.501988 sshd[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:23:05.509095 systemd-logind[1964]: New session 18 of user core.
Jan 17 00:23:05.512666 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:23:05.942850 sshd[4563]: pam_unix(sshd:session): session closed for user core
Jan 17 00:23:05.947595 systemd[1]: sshd@17-172.31.16.10:22-4.153.228.146:59106.service: Deactivated successfully.
Jan 17 00:23:05.949935 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:23:05.950799 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:23:05.952505 systemd-logind[1964]: Removed session 18.
Jan 17 00:23:11.043579 systemd[1]: Started sshd@18-172.31.16.10:22-4.153.228.146:59110.service - OpenSSH per-connection server daemon (4.153.228.146:59110).
Jan 17 00:23:11.586289 sshd[4621]: Accepted publickey for core from 4.153.228.146 port 59110 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s
Jan 17 00:23:11.588451 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:23:11.596260 systemd-logind[1964]: New session 19 of user core.
Jan 17 00:23:11.603913 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:23:12.057248 sshd[4621]: pam_unix(sshd:session): session closed for user core
Jan 17 00:23:12.064890 systemd[1]: sshd@18-172.31.16.10:22-4.153.228.146:59110.service: Deactivated successfully.
Jan 17 00:23:12.068998 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:23:12.069928 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:23:12.071313 systemd-logind[1964]: Removed session 19.
Jan 17 00:23:26.973812 systemd[1]: cri-containerd-6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da.scope: Deactivated successfully.
Jan 17 00:23:26.974134 systemd[1]: cri-containerd-6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da.scope: Consumed 4.029s CPU time, 38.5M memory peak, 0B memory swap peak.
Jan 17 00:23:27.029616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da-rootfs.mount: Deactivated successfully.
Jan 17 00:23:27.065563 containerd[1988]: time="2026-01-17T00:23:27.065290120Z" level=info msg="shim disconnected" id=6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da namespace=k8s.io
Jan 17 00:23:27.065563 containerd[1988]: time="2026-01-17T00:23:27.065364615Z" level=warning msg="cleaning up after shim disconnected" id=6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da namespace=k8s.io
Jan 17 00:23:27.065563 containerd[1988]: time="2026-01-17T00:23:27.065378814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:27.803274 kubelet[3358]: I0117 00:23:27.803217 3358 scope.go:117] "RemoveContainer" containerID="6fea4fbe62a5bab4ffd307aacbcbd389343b073bfac03c91adcf7313c15169da"
Jan 17 00:23:27.842015 containerd[1988]: time="2026-01-17T00:23:27.841883515Z" level=info msg="CreateContainer within sandbox \"e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:23:27.874509 containerd[1988]: time="2026-01-17T00:23:27.874439225Z" level=info msg="CreateContainer within sandbox \"e1192efcc6c8507ac6b3e08fdd1bbbe449634755515ec25ea21bb19e54ecac28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded\""
Jan 17 00:23:27.875129 containerd[1988]: time="2026-01-17T00:23:27.875103421Z" level=info msg="StartContainer for \"4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded\""
Jan 17 00:23:27.920559 systemd[1]: Started cri-containerd-4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded.scope - libcontainer container 4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded.
Jan 17 00:23:27.973657 containerd[1988]: time="2026-01-17T00:23:27.973609566Z" level=info msg="StartContainer for \"4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded\" returns successfully"
Jan 17 00:23:28.027776 kubelet[3358]: E0117 00:23:28.025749 3358 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": context deadline exceeded"
Jan 17 00:23:28.028889 systemd[1]: run-containerd-runc-k8s.io-4529199d390e1dc33c5637d856985f8e3ecff150e69ccd6a592290d7654d1ded-runc.zYoMfu.mount: Deactivated successfully.
Jan 17 00:23:32.590264 systemd[1]: cri-containerd-efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190.scope: Deactivated successfully.
Jan 17 00:23:32.590500 systemd[1]: cri-containerd-efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190.scope: Consumed 2.020s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 17 00:23:32.616934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190-rootfs.mount: Deactivated successfully.
Jan 17 00:23:32.632899 containerd[1988]: time="2026-01-17T00:23:32.632643526Z" level=info msg="shim disconnected" id=efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190 namespace=k8s.io
Jan 17 00:23:32.632899 containerd[1988]: time="2026-01-17T00:23:32.632700696Z" level=warning msg="cleaning up after shim disconnected" id=efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190 namespace=k8s.io
Jan 17 00:23:32.632899 containerd[1988]: time="2026-01-17T00:23:32.632711458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:32.815324 kubelet[3358]: I0117 00:23:32.815293 3358 scope.go:117] "RemoveContainer" containerID="efac05a0dcfc3a0f9769c79519d35f7ddd617325ff00c43efb2403e519741190"
Jan 17 00:23:32.818004 containerd[1988]: time="2026-01-17T00:23:32.817954323Z" level=info msg="CreateContainer within sandbox \"382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:23:32.850001 containerd[1988]: time="2026-01-17T00:23:32.849886281Z" level=info msg="CreateContainer within sandbox \"382395f1b572966fe17e663bca4489d5d4a3c10326686da065e6159dc168fb85\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"54aef08188067740c3816678bf6d23a2a681d477f60e55ce5728f8472af9aaa5\""
Jan 17 00:23:32.851576 containerd[1988]: time="2026-01-17T00:23:32.850464165Z" level=info msg="StartContainer for \"54aef08188067740c3816678bf6d23a2a681d477f60e55ce5728f8472af9aaa5\""
Jan 17 00:23:32.892457 systemd[1]: Started cri-containerd-54aef08188067740c3816678bf6d23a2a681d477f60e55ce5728f8472af9aaa5.scope - libcontainer container 54aef08188067740c3816678bf6d23a2a681d477f60e55ce5728f8472af9aaa5.
Jan 17 00:23:32.942890 containerd[1988]: time="2026-01-17T00:23:32.942817816Z" level=info msg="StartContainer for \"54aef08188067740c3816678bf6d23a2a681d477f60e55ce5728f8472af9aaa5\" returns successfully"
Jan 17 00:23:38.026339 kubelet[3358]: E0117 00:23:38.026267 3358 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-10?timeout=10s\": context deadline exceeded"
Jan 17 00:23:48.027557 kubelet[3358]: E0117 00:23:48.027487 3358 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-10)"