Jan 24 00:31:51.920258 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:31:51.920297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:51.920316 kernel: BIOS-provided physical RAM map: Jan 24 00:31:51.920329 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:31:51.920340 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 24 00:31:51.920352 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Jan 24 00:31:51.920367 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Jan 24 00:31:51.920380 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 24 00:31:51.920393 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 24 00:31:51.920410 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 24 00:31:51.920423 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 24 00:31:51.920436 kernel: NX (Execute Disable) protection: active Jan 24 00:31:51.920448 kernel: APIC: Static calls initialized Jan 24 00:31:51.920461 kernel: efi: EFI v2.7 by EDK II Jan 24 00:31:51.920478 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 24 00:31:51.920496 kernel: SMBIOS 2.7 present. 
Jan 24 00:31:51.920510 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 24 00:31:51.920524 kernel: Hypervisor detected: KVM Jan 24 00:31:51.920538 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:31:51.920552 kernel: kvm-clock: using sched offset of 3944154259 cycles Jan 24 00:31:51.920566 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:31:51.920582 kernel: tsc: Detected 2499.998 MHz processor Jan 24 00:31:51.920596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:31:51.920612 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:31:51.920626 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 24 00:31:51.920644 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:31:51.920658 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:31:51.920673 kernel: Using GB pages for direct mapping Jan 24 00:31:51.920687 kernel: Secure boot disabled Jan 24 00:31:51.920701 kernel: ACPI: Early table checksum verification disabled Jan 24 00:31:51.920716 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 24 00:31:51.920730 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 24 00:31:51.920745 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 24 00:31:51.920759 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 24 00:31:51.920777 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 24 00:31:51.920791 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 24 00:31:51.920806 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 24 00:31:51.920836 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 24 00:31:51.920851 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 24 00:31:51.920865 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 24 00:31:51.920886 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:31:51.920904 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:31:51.920920 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 24 00:31:51.920936 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 24 00:31:51.920952 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 24 00:31:51.920967 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 24 00:31:51.920983 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 24 00:31:51.920998 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 24 00:31:51.921017 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 24 00:31:51.921033 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 24 00:31:51.921048 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 24 00:31:51.921063 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 24 00:31:51.921079 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jan 24 00:31:51.921094 kernel: ACPI: Reserving BGRT table memory at [mem 
0x78951000-0x78951037] Jan 24 00:31:51.921110 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:31:51.921126 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:31:51.921141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 24 00:31:51.921160 kernel: NUMA: Initialized distance table, cnt=1 Jan 24 00:31:51.921175 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Jan 24 00:31:51.921192 kernel: Zone ranges: Jan 24 00:31:51.921207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:31:51.921223 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 24 00:31:51.921238 kernel: Normal empty Jan 24 00:31:51.921254 kernel: Movable zone start for each node Jan 24 00:31:51.921269 kernel: Early memory node ranges Jan 24 00:31:51.921285 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:31:51.921303 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 24 00:31:51.921319 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 24 00:31:51.921334 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 24 00:31:51.921350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:31:51.921366 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:31:51.921382 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:31:51.921396 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 24 00:31:51.921412 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 24 00:31:51.921427 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:31:51.921443 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 24 00:31:51.921461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:31:51.921477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:31:51.921492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:31:51.921508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:31:51.921523 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:31:51.921539 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:31:51.921554 kernel: TSC deadline timer available Jan 24 00:31:51.921570 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:31:51.921585 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:31:51.921604 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 24 00:31:51.921619 kernel: Booting paravirtualized kernel on KVM Jan 24 00:31:51.921635 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:31:51.921651 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:31:51.921667 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:31:51.921682 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:31:51.921697 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:31:51.921712 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:31:51.921728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:31:51.921749 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:51.921765 kernel: random: crng init done Jan 24 00:31:51.921781 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:31:51.921796 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:31:51.921812 kernel: Fallback order for Node 0: 0 Jan 24 00:31:51.921845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Jan 24 00:31:51.921861 kernel: Policy zone: DMA32 Jan 24 00:31:51.921876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:31:51.921896 kernel: Memory: 1874620K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162924K reserved, 0K cma-reserved) Jan 24 00:31:51.921912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:31:51.921928 kernel: Kernel/User page tables isolation: enabled Jan 24 00:31:51.921943 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:31:51.921958 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:31:51.921974 kernel: Dynamic Preempt: voluntary Jan 24 00:31:51.921990 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:31:51.922007 kernel: rcu: RCU event tracing is enabled. Jan 24 00:31:51.922022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:31:51.922041 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:31:51.922056 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:31:51.922068 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:31:51.922083 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:31:51.922098 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:31:51.922114 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:31:51.922131 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:31:51.922161 kernel: Console: colour dummy device 80x25 Jan 24 00:31:51.922177 kernel: printk: console [tty0] enabled Jan 24 00:31:51.922193 kernel: printk: console [ttyS0] enabled Jan 24 00:31:51.922221 kernel: ACPI: Core revision 20230628 Jan 24 00:31:51.922238 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 24 00:31:51.922258 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:31:51.922275 kernel: x2apic enabled Jan 24 00:31:51.922292 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:31:51.922310 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 00:31:51.922326 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 24 00:31:51.922347 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:31:51.922363 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:31:51.922379 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:31:51.922396 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:31:51.922412 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:31:51.922428 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:31:51.922445 kernel: RETBleed: Vulnerable Jan 24 00:31:51.922461 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:31:51.922477 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:31:51.922493 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:31:51.922512 kernel: GDS: Unknown: Dependent on hypervisor status Jan 24 00:31:51.922529 kernel: active return thunk: its_return_thunk Jan 24 00:31:51.922545 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:31:51.922562 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:31:51.922579 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:31:51.922595 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:31:51.922610 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 24 00:31:51.922626 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 24 00:31:51.922643 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:31:51.922659 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:31:51.922671 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:31:51.922688 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:31:51.922702 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:31:51.922717 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 24 00:31:51.922732 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 24 00:31:51.922745 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 24 00:31:51.922758 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 24 00:31:51.922777 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 24 00:31:51.922797 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 24 00:31:51.922816 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 24 00:31:51.922868 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:31:51.922882 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:31:51.922902 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:31:51.922918 kernel: landlock: Up and running. Jan 24 00:31:51.922933 kernel: SELinux: Initializing. Jan 24 00:31:51.922949 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:31:51.922964 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:31:51.922980 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:31:51.922996 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:51.923012 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:51.923028 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:51.923043 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:31:51.923062 kernel: signal: max sigframe size: 3632 Jan 24 00:31:51.923077 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:31:51.923093 kernel: rcu: Max phase no-delay instances is 400. 
Jan 24 00:31:51.923109 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:31:51.923124 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:31:51.923140 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:31:51.923155 kernel: .... node #0, CPUs: #1 Jan 24 00:31:51.923171 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 24 00:31:51.923188 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 24 00:31:51.923206 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:31:51.923222 kernel: smpboot: Max logical packages: 1 Jan 24 00:31:51.923237 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 24 00:31:51.923253 kernel: devtmpfs: initialized Jan 24 00:31:51.923268 kernel: x86/mm: Memory block size: 128MB Jan 24 00:31:51.923284 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 24 00:31:51.923299 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:31:51.923315 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:31:51.923330 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:31:51.923349 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:31:51.923365 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:31:51.923380 kernel: audit: type=2000 audit(1769214711.394:1): state=initialized audit_enabled=0 res=1 Jan 24 00:31:51.923395 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:31:51.923411 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:31:51.923427 kernel: cpuidle: using governor menu Jan 24 00:31:51.923442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:31:51.923457 kernel: dca service started, version 1.12.1 Jan 24 00:31:51.923473 kernel: PCI: Using configuration type 1 for base access Jan 24 00:31:51.923492 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:31:51.923508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:31:51.923524 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:31:51.923539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:31:51.923554 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:31:51.923570 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:31:51.923585 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:31:51.923601 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:31:51.923616 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 24 00:31:51.923635 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:31:51.923650 kernel: ACPI: Interpreter enabled Jan 24 00:31:51.923665 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:31:51.923681 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:31:51.923696 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:31:51.923712 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:31:51.923727 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 24 00:31:51.923742 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:31:51.923996 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:31:51.924148 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 24 00:31:51.924285 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 24 00:31:51.924303 kernel: acpiphp: Slot [3] registered Jan 24 00:31:51.924319 kernel: acpiphp: Slot [4] registered Jan 24 00:31:51.924334 kernel: acpiphp: Slot [5] registered Jan 24 00:31:51.924350 kernel: acpiphp: Slot [6] registered Jan 24 00:31:51.924365 kernel: acpiphp: Slot [7] registered Jan 24 00:31:51.924384 kernel: acpiphp: Slot [8] registered Jan 24 00:31:51.924399 kernel: acpiphp: Slot [9] registered Jan 24 00:31:51.924415 kernel: acpiphp: Slot [10] registered Jan 24 00:31:51.924430 kernel: acpiphp: Slot [11] registered Jan 24 00:31:51.924446 kernel: acpiphp: Slot [12] registered Jan 24 00:31:51.924461 kernel: acpiphp: Slot [13] registered Jan 24 00:31:51.924476 kernel: acpiphp: Slot [14] registered Jan 24 00:31:51.924492 kernel: acpiphp: Slot [15] registered Jan 24 00:31:51.924507 kernel: acpiphp: Slot [16] registered Jan 24 00:31:51.924522 kernel: acpiphp: Slot [17] registered Jan 24 00:31:51.924541 kernel: acpiphp: Slot [18] registered Jan 24 00:31:51.924556 kernel: acpiphp: Slot [19] registered Jan 24 00:31:51.924571 kernel: acpiphp: Slot [20] registered Jan 24 00:31:51.924587 kernel: acpiphp: Slot [21] registered Jan 24 00:31:51.924602 kernel: acpiphp: Slot [22] registered Jan 24 00:31:51.924617 kernel: acpiphp: Slot [23] registered Jan 24 00:31:51.924633 kernel: acpiphp: Slot [24] registered Jan 24 00:31:51.924648 kernel: acpiphp: Slot [25] registered Jan 24 00:31:51.924663 kernel: acpiphp: Slot [26] registered Jan 24 00:31:51.924681 kernel: acpiphp: Slot [27] registered Jan 24 00:31:51.924697 kernel: acpiphp: Slot [28] registered Jan 24 00:31:51.924712 kernel: acpiphp: Slot [29] registered Jan 24 00:31:51.924728 kernel: acpiphp: Slot [30] registered Jan 24 00:31:51.924743 kernel: acpiphp: Slot [31] registered Jan 24 00:31:51.924758 kernel: PCI host bridge to bus 0000:00 Jan 24 00:31:51.924909 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 24 00:31:51.925035 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:31:51.925161 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:31:51.925282 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 24 00:31:51.925403 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 24 00:31:51.925523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:31:51.925684 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 24 00:31:51.925862 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 24 00:31:51.926010 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 24 00:31:51.926152 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 24 00:31:51.926296 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 24 00:31:51.926433 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 24 00:31:51.926568 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 24 00:31:51.926703 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 24 00:31:51.927431 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 24 00:31:51.927612 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 24 00:31:51.927768 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 24 00:31:51.928226 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jan 24 00:31:51.928362 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:31:51.928493 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jan 24 00:31:51.928625 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:31:51.928763 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 24 00:31:51.928912 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jan 24 00:31:51.929047 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 24 00:31:51.929178 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jan 24 00:31:51.929196 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:31:51.929211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:31:51.929226 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:31:51.929241 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:31:51.929255 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 24 00:31:51.929274 kernel: iommu: Default domain type: Translated Jan 24 00:31:51.929288 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:31:51.929303 kernel: efivars: Registered efivars operations Jan 24 00:31:51.929317 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:31:51.929332 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:31:51.929347 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 24 00:31:51.929361 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 24 00:31:51.930929 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 24 00:31:51.931091 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 24 00:31:51.931229 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:31:51.931249 kernel: vgaarb: loaded Jan 24 00:31:51.931266 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 24 00:31:51.931281 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 24 00:31:51.931297 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:31:51.931312 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:31:51.931328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:31:51.931343 kernel: pnp: PnP ACPI init Jan 24 00:31:51.931361 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:31:51.931377 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:31:51.931392 kernel: NET: Registered PF_INET protocol family Jan 24 00:31:51.931407 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:31:51.931423 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 24 00:31:51.931438 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:31:51.931454 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:31:51.931469 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 24 00:31:51.931484 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 24 00:31:51.931502 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:31:51.931517 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:31:51.931532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:31:51.931547 kernel: NET: Registered PF_XDP protocol family Jan 24 00:31:51.931668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:31:51.931782 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:31:51.933041 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:31:51.933182 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 24 00:31:51.933305 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 24 00:31:51.933460 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 24 00:31:51.933480 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:31:51.933497 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:31:51.933513 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 00:31:51.933529 kernel: clocksource: Switched to clocksource tsc Jan 24 00:31:51.933544 kernel: Initialise system trusted keyrings Jan 24 00:31:51.933560 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 00:31:51.933576 kernel: Key type asymmetric registered Jan 24 00:31:51.933596 kernel: Asymmetric key parser 'x509' registered Jan 24 00:31:51.933615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:31:51.933631 kernel: io scheduler mq-deadline registered Jan 24 00:31:51.933656 kernel: io scheduler kyber registered Jan 24 00:31:51.933685 kernel: io scheduler bfq registered Jan 24 00:31:51.933704 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:31:51.933720 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:31:51.933737 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:31:51.933754 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:31:51.933774 kernel: i8042: Warning: Keylock active Jan 24 00:31:51.933790 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:31:51.933806 kernel: 
serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:31:51.936010 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 24 00:31:51.936146 kernel: rtc_cmos 00:00: registered as rtc0 Jan 24 00:31:51.936270 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:31:51 UTC (1769214711) Jan 24 00:31:51.936392 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 24 00:31:51.936411 kernel: intel_pstate: CPU model not supported Jan 24 00:31:51.936432 kernel: efifb: probing for efifb Jan 24 00:31:51.936446 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Jan 24 00:31:51.936462 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 24 00:31:51.936476 kernel: efifb: scrolling: redraw Jan 24 00:31:51.936490 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:31:51.936506 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:31:51.936520 kernel: fb0: EFI VGA frame buffer device Jan 24 00:31:51.936534 kernel: pstore: Using crash dump compression: deflate Jan 24 00:31:51.936548 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:31:51.936566 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:31:51.936580 kernel: Segment Routing with IPv6 Jan 24 00:31:51.936595 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:31:51.936609 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:31:51.936623 kernel: Key type dns_resolver registered Jan 24 00:31:51.936638 kernel: IPI shorthand broadcast: enabled Jan 24 00:31:51.936677 kernel: sched_clock: Marking stable (465002091, 129640027)->(699441965, -104799847) Jan 24 00:31:51.936695 kernel: registered taskstats version 1 Jan 24 00:31:51.936711 kernel: Loading compiled-in X.509 certificates Jan 24 00:31:51.936731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:31:51.936747 kernel: Key type .fscrypt registered Jan 24 00:31:51.936764 kernel: Key type fscrypt-provisioning registered Jan 24 00:31:51.936780 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:31:51.936797 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:31:51.936815 kernel: ima: No architecture policies found Jan 24 00:31:51.936905 kernel: clk: Disabling unused clocks Jan 24 00:31:51.936922 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:31:51.936939 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:31:51.936960 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:31:51.936977 kernel: Run /init as init process Jan 24 00:31:51.936994 kernel: with arguments: Jan 24 00:31:51.937010 kernel: /init Jan 24 00:31:51.937027 kernel: with environment: Jan 24 00:31:51.937046 kernel: HOME=/ Jan 24 00:31:51.937062 kernel: TERM=linux Jan 24 00:31:51.937082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:31:51.937106 systemd[1]: Detected virtualization amazon. Jan 24 00:31:51.937124 systemd[1]: Detected architecture x86-64. Jan 24 00:31:51.937141 systemd[1]: Running in initrd. Jan 24 00:31:51.937158 systemd[1]: No hostname configured, using default hostname. Jan 24 00:31:51.937175 systemd[1]: Hostname set to . 
Jan 24 00:31:51.937193 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:31:51.937210 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:31:51.937228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:51.937249 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:51.937268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:31:51.937286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:31:51.937304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:31:51.937323 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:31:51.937346 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:31:51.937380 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:31:51.937415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:51.937435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:51.937451 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:31:51.937468 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:31:51.937485 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:31:51.937506 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:31:51.937520 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:31:51.937536 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:31:51.937554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:31:51.937571 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:31:51.937586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:51.937601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:51.937618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:51.937633 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:31:51.937661 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:31:51.937675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:31:51.937690 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:31:51.937705 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:31:51.937721 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:31:51.937738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:31:51.937753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:51.937768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:31:51.937833 systemd-journald[179]: Collecting audit messages is disabled. Jan 24 00:31:51.937869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:51.937884 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 24 00:31:51.937905 systemd-journald[179]: Journal started Jan 24 00:31:51.937934 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2682188d27008d2e7d71c40362102c) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:31:51.937687 systemd-modules-load[180]: Inserted module 'overlay' Jan 24 00:31:51.944462 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:31:51.950149 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:31:51.963121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:31:51.967614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:51.990127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:51.992220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:31:51.994000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:31:51.996984 kernel: Bridge firewalling registered Jan 24 00:31:51.994891 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 24 00:31:51.995726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:52.003911 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:31:52.004278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:52.008713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:31:52.013603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:31:52.032247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:52.035427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:52.036276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:52.043107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:31:52.049099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:31:52.059039 dracut-cmdline[213]: dracut-dracut-053 Jan 24 00:31:52.063208 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:52.098884 systemd-resolved[214]: Positive Trust Anchors: Jan 24 00:31:52.098907 systemd-resolved[214]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:31:52.098972 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:31:52.109086 systemd-resolved[214]: Defaulting to hostname 'linux'. Jan 24 00:31:52.110542 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:31:52.111588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:52.155861 kernel: SCSI subsystem initialized Jan 24 00:31:52.165849 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:31:52.177854 kernel: iscsi: registered transport (tcp) Jan 24 00:31:52.199876 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:31:52.199963 kernel: QLogic iSCSI HBA Driver Jan 24 00:31:52.238671 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:31:52.246015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:31:52.270942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:31:52.271019 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:31:52.272056 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:31:52.314853 kernel: raid6: avx512x4 gen() 18227 MB/s Jan 24 00:31:52.332847 kernel: raid6: avx512x2 gen() 18101 MB/s Jan 24 00:31:52.350850 kernel: raid6: avx512x1 gen() 17834 MB/s Jan 24 00:31:52.368849 kernel: raid6: avx2x4 gen() 17808 MB/s Jan 24 00:31:52.386849 kernel: raid6: avx2x2 gen() 17980 MB/s Jan 24 00:31:52.405053 kernel: raid6: avx2x1 gen() 13790 MB/s Jan 24 00:31:52.405113 kernel: raid6: using algorithm avx512x4 gen() 18227 MB/s Jan 24 00:31:52.424064 kernel: raid6: .... xor() 7429 MB/s, rmw enabled Jan 24 00:31:52.424129 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:31:52.445871 kernel: xor: automatically using best checksumming function avx Jan 24 00:31:52.605855 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:31:52.616709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:31:52.622062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:31:52.637312 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 24 00:31:52.642500 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:52.652357 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:31:52.670246 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 24 00:31:52.701068 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:31:52.707068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:31:52.757810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:52.766065 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 24 00:31:52.798542 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:31:52.800536 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:31:52.802541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:52.803109 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:31:52.809115 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:31:52.840751 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:31:52.865844 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:31:52.886508 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:31:52.886680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:52.912709 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 24 00:31:52.912988 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 24 00:31:52.913166 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 24 00:31:52.913339 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 24 00:31:52.913517 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 24 00:31:52.913541 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:3c:3e:2a:56:69 Jan 24 00:31:52.913719 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 24 00:31:52.915609 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:31:52.888935 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:52.935129 kernel: AES CTR mode by8 optimization enabled Jan 24 00:31:52.935166 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:31:52.935187 kernel: GPT:9289727 != 33554431 Jan 24 00:31:52.935207 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:31:52.935228 kernel: GPT:9289727 != 33554431 Jan 24 00:31:52.935247 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:31:52.935270 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:31:52.909480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:52.911489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:52.914932 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:52.927068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:52.928162 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:31:52.957904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:52.958115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:52.965088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:52.986536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:52.994006 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:53.020485 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:31:53.038848 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448) Jan 24 00:31:53.064030 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (455) Jan 24 00:31:53.111911 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 24 00:31:53.122709 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 24 00:31:53.134378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:31:53.140451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 24 00:31:53.141179 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 24 00:31:53.150149 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:31:53.156808 disk-uuid[632]: Primary Header is updated. Jan 24 00:31:53.156808 disk-uuid[632]: Secondary Entries is updated. Jan 24 00:31:53.156808 disk-uuid[632]: Secondary Header is updated. Jan 24 00:31:53.161853 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:31:53.167199 kernel: GPT:disk_guids don't match. Jan 24 00:31:53.167266 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:31:53.169046 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:31:53.176868 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:31:54.178451 disk-uuid[633]: The operation has completed successfully. Jan 24 00:31:54.179472 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:31:54.287619 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:31:54.287728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:31:54.305073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:31:54.310569 sh[978]: Success Jan 24 00:31:54.325852 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:31:54.432664 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:31:54.439946 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:31:54.455443 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:31:54.489926 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:31:54.490005 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:54.490027 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:31:54.493280 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:31:54.493359 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:31:54.585853 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:31:54.605894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:31:54.607164 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:31:54.613010 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:31:54.617007 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 24 00:31:54.633547 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:54.633604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:54.635714 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:31:54.653128 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:31:54.669217 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:54.669353 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:31:54.677797 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:31:54.683194 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:31:54.719878 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:31:54.724072 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:31:54.758759 systemd-networkd[1170]: lo: Link UP Jan 24 00:31:54.758773 systemd-networkd[1170]: lo: Gained carrier Jan 24 00:31:54.760469 systemd-networkd[1170]: Enumeration completed Jan 24 00:31:54.760609 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:31:54.761019 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:54.761025 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:54.762008 systemd[1]: Reached target network.target - Network. Jan 24 00:31:54.764711 systemd-networkd[1170]: eth0: Link UP Jan 24 00:31:54.764717 systemd-networkd[1170]: eth0: Gained carrier Jan 24 00:31:54.764730 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:54.776930 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.28.170/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:31:55.042805 ignition[1119]: Ignition 2.19.0 Jan 24 00:31:55.042818 ignition[1119]: Stage: fetch-offline Jan 24 00:31:55.043137 ignition[1119]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:55.043155 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:55.043493 ignition[1119]: Ignition finished successfully Jan 24 00:31:55.045663 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:31:55.049039 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:31:55.066682 ignition[1181]: Ignition 2.19.0 Jan 24 00:31:55.066696 ignition[1181]: Stage: fetch Jan 24 00:31:55.067160 ignition[1181]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:55.067175 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:55.067291 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:55.083310 ignition[1181]: PUT result: OK Jan 24 00:31:55.085408 ignition[1181]: parsed url from cmdline: "" Jan 24 00:31:55.085417 ignition[1181]: no config URL provided Jan 24 00:31:55.085427 ignition[1181]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:31:55.085439 ignition[1181]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:31:55.085459 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:55.086174 ignition[1181]: PUT result: OK Jan 24 00:31:55.086288 ignition[1181]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 24 00:31:55.096631 ignition[1181]: GET result: OK Jan 24 00:31:55.096733 ignition[1181]: parsing config with SHA512: 2f7f95143c33410d377acbc7d9a2a58f6beb6be92d0d3ccfd0a23f27ee67d3b418acece054f9cbeee81c7f248bff49b829825e14af14a125473759df89653980 Jan 24 00:31:55.100390 unknown[1181]: fetched base config from "system" Jan 24 00:31:55.100399 unknown[1181]: fetched base config from "system" Jan 24 00:31:55.100732 ignition[1181]: fetch: fetch complete Jan 24 00:31:55.100405 unknown[1181]: fetched user config from "aws" Jan 24 00:31:55.100737 ignition[1181]: fetch: fetch passed Jan 24 00:31:55.102863 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:31:55.100774 ignition[1181]: Ignition finished successfully Jan 24 00:31:55.108064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:31:55.124231 ignition[1187]: Ignition 2.19.0 Jan 24 00:31:55.124243 ignition[1187]: Stage: kargs Jan 24 00:31:55.124592 ignition[1187]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:55.124604 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:55.124691 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:55.125386 ignition[1187]: PUT result: OK Jan 24 00:31:55.128097 ignition[1187]: kargs: kargs passed Jan 24 00:31:55.128157 ignition[1187]: Ignition finished successfully Jan 24 00:31:55.129741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:31:55.135018 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:31:55.149334 ignition[1193]: Ignition 2.19.0 Jan 24 00:31:55.149346 ignition[1193]: Stage: disks Jan 24 00:31:55.149696 ignition[1193]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:55.149706 ignition[1193]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:55.149787 ignition[1193]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:55.150713 ignition[1193]: PUT result: OK Jan 24 00:31:55.153498 ignition[1193]: disks: disks passed Jan 24 00:31:55.153576 ignition[1193]: Ignition finished successfully Jan 24 00:31:55.155492 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:31:55.156117 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:31:55.156486 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:31:55.157035 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 24 00:31:55.157582 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:31:55.158184 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:31:55.163039 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:31:55.199987 systemd-fsck[1202]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:31:55.203275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:31:55.208977 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:31:55.305835 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:31:55.306609 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:31:55.307585 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:31:55.317947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:31:55.321965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:31:55.323154 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:31:55.323220 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:31:55.323253 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:31:55.334398 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:31:55.340848 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1221) Jan 24 00:31:55.345112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:31:55.348521 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:55.348559 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:55.348580 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:31:55.361999 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:31:55.363227 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:31:55.713439 initrd-setup-root[1245]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:31:55.739839 initrd-setup-root[1252]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:31:55.744378 initrd-setup-root[1259]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:31:55.748851 initrd-setup-root[1266]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:31:56.048138 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:31:56.054027 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:31:56.056986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:31:56.064614 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 24 00:31:56.065260 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:56.090293 ignition[1334]: INFO : Ignition 2.19.0 Jan 24 00:31:56.090293 ignition[1334]: INFO : Stage: mount Jan 24 00:31:56.090293 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:56.090293 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:56.093353 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:56.093757 ignition[1334]: INFO : PUT result: OK Jan 24 00:31:56.096454 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:31:56.097926 ignition[1334]: INFO : mount: mount passed Jan 24 00:31:56.097926 ignition[1334]: INFO : Ignition finished successfully Jan 24 00:31:56.098382 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:31:56.104030 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:31:56.117034 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:31:56.132866 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1347) Jan 24 00:31:56.135907 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:56.135972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:56.138395 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:31:56.142854 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:31:56.145278 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:31:56.167180 ignition[1363]: INFO : Ignition 2.19.0 Jan 24 00:31:56.167180 ignition[1363]: INFO : Stage: files Jan 24 00:31:56.168838 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:56.168838 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:56.168838 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:56.170395 ignition[1363]: INFO : PUT result: OK Jan 24 00:31:56.172929 ignition[1363]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:31:56.173671 ignition[1363]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:31:56.173671 ignition[1363]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:31:56.208034 ignition[1363]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:31:56.208843 ignition[1363]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:31:56.208843 ignition[1363]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:31:56.208499 unknown[1363]: wrote ssh authorized keys file for user: core Jan 24 00:31:56.210725 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:31:56.210725 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:31:56.296061 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:31:56.494868 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 
00:31:56.494868 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:56.497292 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:31:56.560052 systemd-networkd[1170]: eth0: Gained IPv6LL Jan 24 00:31:57.015549 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:31:58.078687 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:58.078687 ignition[1363]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:31:58.092449 ignition[1363]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:58.093627 
ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:58.093627 ignition[1363]: INFO : files: files passed Jan 24 00:31:58.093627 ignition[1363]: INFO : Ignition finished successfully Jan 24 00:31:58.094380 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:31:58.102915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:31:58.106000 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:31:58.106949 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:31:58.107052 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:31:58.128120 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:58.128120 initrd-setup-root-after-ignition[1392]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:58.132499 initrd-setup-root-after-ignition[1396]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:58.133450 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:58.134469 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:31:58.140022 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:31:58.172794 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:31:58.172949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:31:58.174574 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:31:58.175357 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:31:58.176308 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:31:58.185090 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:31:58.198474 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:31:58.205048 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:31:58.215384 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:58.216091 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:58.216673 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:31:58.217509 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:31:58.217630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:31:58.218897 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:31:58.219733 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:31:58.220510 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:31:58.221253 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:31:58.221967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:31:58.222837 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
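Earlier in the files stage, Ignition set a preset of "enabled" for prepare-helm.service under the target root. Conceptually that amounts to the [Install]-section symlink that `systemctl enable --root=/sysroot` would create. A simplified sketch follows; the multi-user.target.wants location is an assumption here, since the unit's own [Install] section decides the real link and Ignition/systemctl handle this themselves.

```python
# Simplified sketch of enabling a unit under a target root via a wants/ symlink.
import os

def enable_unit(root: str, unit: str, wanted_by: str = "multi-user.target") -> None:
    wants_dir = os.path.join(root, "etc/systemd/system", f"{wanted_by}.wants")
    os.makedirs(wants_dir, exist_ok=True)
    link = os.path.join(wants_dir, unit)
    if not os.path.lexists(link):
        os.symlink(f"/etc/systemd/system/{unit}", link)

# enable_unit("/sysroot", "prepare-helm.service")
```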
Jan 24 00:31:58.223529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:31:58.224323 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:31:58.225103 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:31:58.226132 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:31:58.226966 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:31:58.227097 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:31:58.228050 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:58.228741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:58.229407 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:31:58.229520 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:58.230399 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:31:58.230524 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:31:58.231529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:31:58.231643 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:58.232631 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:31:58.232733 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:31:58.241070 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:31:58.242394 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:31:58.242988 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:58.245027 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:31:58.246923 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:31:58.247502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:58.248480 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:31:58.248612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:31:58.255897 ignition[1416]: INFO : Ignition 2.19.0 Jan 24 00:31:58.255897 ignition[1416]: INFO : Stage: umount Jan 24 00:31:58.254710 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:31:58.257898 ignition[1416]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:58.257898 ignition[1416]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:58.257898 ignition[1416]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:58.257898 ignition[1416]: INFO : PUT result: OK Jan 24 00:31:58.254807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:31:58.261320 ignition[1416]: INFO : umount: umount passed Jan 24 00:31:58.261320 ignition[1416]: INFO : Ignition finished successfully Jan 24 00:31:58.260804 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:31:58.261469 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:31:58.265597 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:31:58.265726 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:31:58.266612 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 24 00:31:58.266685 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:31:58.267355 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:31:58.267419 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:31:58.268170 systemd[1]: Stopped target network.target - Network. Jan 24 00:31:58.268910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:31:58.268980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:31:58.269895 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:31:58.270706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:31:58.273946 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:58.275950 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:31:58.277018 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:31:58.278053 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:31:58.278127 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:31:58.279059 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:31:58.279122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:31:58.281967 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:31:58.282073 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:31:58.282880 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:31:58.282943 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:31:58.283936 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:31:58.284598 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:31:58.287875 systemd-networkd[1170]: eth0: DHCPv6 lease lost Jan 24 00:31:58.291843 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:31:58.292010 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:31:58.294133 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:31:58.294347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:31:58.298931 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:31:58.300110 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:31:58.300173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:58.306985 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:31:58.308242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:31:58.308320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:31:58.308979 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:31:58.309048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:58.309642 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:31:58.309698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:58.310612 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:31:58.310669 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:58.311500 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 24 00:31:58.314898 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:31:58.315027 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:31:58.322112 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:31:58.322352 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:31:58.325131 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:31:58.325332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:58.331110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:31:58.331222 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:58.332319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:31:58.332372 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:58.333918 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:31:58.333970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:31:58.334625 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:31:58.334673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:31:58.336014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:31:58.336070 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:58.348972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:31:58.349744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:31:58.349867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:58.350600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:58.350660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:58.356537 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:31:58.357487 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:31:58.362631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:31:58.362774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:31:58.364261 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:31:58.369037 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:31:58.387562 systemd[1]: Switching root. Jan 24 00:31:58.428654 systemd-journald[179]: Journal stopped Jan 24 00:32:00.369095 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:32:00.369199 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:32:00.369230 kernel: SELinux: policy capability open_perms=1 Jan 24 00:32:00.369260 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:32:00.369279 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:32:00.369299 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:32:00.369318 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:32:00.369337 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:32:00.369356 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:32:00.369377 kernel: audit: type=1403 audit(1769214718.863:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:32:00.369405 systemd[1]: Successfully loaded SELinux policy in 48.147ms. Jan 24 00:32:00.369433 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.541ms. Jan 24 00:32:00.369456 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:32:00.369480 systemd[1]: Detected virtualization amazon. Jan 24 00:32:00.369507 systemd[1]: Detected architecture x86-64. Jan 24 00:32:00.369529 systemd[1]: Detected first boot. Jan 24 00:32:00.369552 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:32:00.369580 zram_generator::config[1459]: No configuration found. Jan 24 00:32:00.369605 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:32:00.369628 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:32:00.369654 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:32:00.369677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:32:00.369700 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:32:00.369722 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:32:00.369744 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:32:00.369765 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:32:00.369787 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:32:00.369809 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:32:00.376378 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:32:00.376423 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:32:00.376448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:32:00.376471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:32:00.376494 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:32:00.376515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:32:00.376538 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
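The systemd 255 banner above encodes compile-time options as a +/- feature string. A trivial parse of an excerpt of that string, handy when comparing builds; tokens without a +/- prefix (such as "default-hierarchy=unified") are simply ignored.

```python
# Parse an excerpt of the systemd feature banner into enabled/disabled sets.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +TPM2 "
          "-BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")
enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
print(sorted(disabled))  # ['ACL', 'APPARMOR', 'BPF_FRAMEWORK', ...]
```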
Jan 24 00:32:00.376560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:32:00.376583 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:32:00.376613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:32:00.376636 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:32:00.376658 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:32:00.376681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:32:00.376703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:32:00.376724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:32:00.376747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:32:00.376770 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:32:00.376795 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:32:00.376817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:32:00.385861 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:32:00.385898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:32:00.385921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:32:00.385943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:32:00.385965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:32:00.385987 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:32:00.386009 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:32:00.386039 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:32:00.386062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:32:00.386086 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:32:00.386108 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:32:00.386129 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:32:00.386152 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:32:00.386175 systemd[1]: Reached target machines.target - Containers. Jan 24 00:32:00.386197 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:32:00.386233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:32:00.386255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:32:00.386277 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:32:00.386299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:32:00.386321 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:32:00.386343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:32:00.386364 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 24 00:32:00.386386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:32:00.386408 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:32:00.386433 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:32:00.386454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:32:00.386478 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:32:00.386499 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:32:00.386521 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:32:00.386543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:32:00.386564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:32:00.386586 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:32:00.386608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:32:00.386633 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:32:00.386655 systemd[1]: Stopped verity-setup.service. Jan 24 00:32:00.386678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:32:00.386699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:32:00.386721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:32:00.386743 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:32:00.386764 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:32:00.386787 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:32:00.386813 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:32:00.386845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:32:00.386867 kernel: loop: module loaded Jan 24 00:32:00.386929 systemd-journald[1537]: Collecting audit messages is disabled. Jan 24 00:32:00.386982 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:32:00.387010 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:32:00.387035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:32:00.387057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:32:00.387080 systemd-journald[1537]: Journal started Jan 24 00:32:00.387120 systemd-journald[1537]: Runtime Journal (/run/log/journal/ec2682188d27008d2e7d71c40362102c) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:32:00.011150 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:32:00.060291 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 24 00:32:00.060792 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:32:00.393019 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:32:00.400700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:32:00.400944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:32:00.403359 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 24 00:32:00.403556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:32:00.406427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:32:00.410442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:32:00.413741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:32:00.438139 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:32:00.444850 kernel: fuse: init (API version 7.39) Jan 24 00:32:00.446900 kernel: ACPI: bus type drm_connector registered Jan 24 00:32:00.452346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:32:00.453090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:32:00.453150 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:32:00.455785 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:32:00.465050 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:32:00.472068 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:32:00.472996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:32:00.479137 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:32:00.485090 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:32:00.485909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:32:00.487881 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:32:00.489596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:32:00.497217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:32:00.508051 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:32:00.511129 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:32:00.512417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:32:00.514379 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:32:00.514688 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:32:00.516095 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:32:00.518410 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:32:00.555962 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:32:00.558917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:32:00.565441 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:32:00.582270 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:32:00.585571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:32:00.600109 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 24 00:32:00.602356 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:32:00.605003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:32:00.614230 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 00:32:00.615281 systemd-journald[1537]: Time spent on flushing to /var/log/journal/ec2682188d27008d2e7d71c40362102c is 153.197ms for 989 entries. Jan 24 00:32:00.615281 systemd-journald[1537]: System Journal (/var/log/journal/ec2682188d27008d2e7d71c40362102c) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:32:00.783346 systemd-journald[1537]: Received client request to flush runtime journal. Jan 24 00:32:00.783448 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:32:00.626000 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:32:00.678550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:32:00.690575 udevadm[1597]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:32:00.732377 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:32:00.742384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:32:00.788362 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:32:00.794869 kernel: loop1: detected capacity change from 0 to 140768 Jan 24 00:32:00.806620 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:32:00.812454 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:32:00.827340 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Jan 24 00:32:00.827369 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Jan 24 00:32:00.837443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:32:00.912951 kernel: loop2: detected capacity change from 0 to 61336 Jan 24 00:32:01.100974 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:32:01.408275 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:32:01.518483 kernel: loop5: detected capacity change from 0 to 140768 Jan 24 00:32:01.592747 kernel: loop6: detected capacity change from 0 to 61336 Jan 24 00:32:01.670171 kernel: loop7: detected capacity change from 0 to 224512 Jan 24 00:32:01.771755 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 24 00:32:01.772536 (sd-merge)[1614]: Merged extensions into '/usr'. Jan 24 00:32:01.788966 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:32:01.789256 systemd[1]: Reloading... Jan 24 00:32:02.196854 zram_generator::config[1640]: No configuration found. Jan 24 00:32:02.494882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:02.572922 systemd[1]: Reloading finished in 782 ms. Jan 24 00:32:02.599414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:32:02.600502 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:32:02.608074 systemd[1]: Starting ensure-sysext.service... 
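The (sd-merge) entries above show systemd-sysext combining the listed extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) into /usr. Conceptually the merge is an overlayfs whose lowerdirs are each extension's /usr tree plus the base /usr. A simplified illustration; real sysext validates extension-release metadata and manages these mounts itself, and the directory paths below are assumptions.

```python
# Simplified illustration of a sysext-style /usr overlay merge.
# With no upperdir the resulting overlay is read-only.
import subprocess

def merge_usr_overlay(extension_usr_dirs: list[str]) -> None:
    lower = ":".join(extension_usr_dirs + ["/usr"])  # highest priority first
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lower}", "/usr"],
        check=True,
    )

# merge_usr_overlay(["/run/extensions/kubernetes/usr",
#                    "/run/extensions/oem-ami/usr"])
```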
Jan 24 00:32:02.616612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:32:02.621062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:32:02.640961 systemd[1]: Reloading requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:32:02.641895 systemd[1]: Reloading... Jan 24 00:32:02.651633 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:32:02.652797 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:32:02.658126 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:32:02.658640 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Jan 24 00:32:02.658736 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Jan 24 00:32:02.668682 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:32:02.669894 systemd-tmpfiles[1693]: Skipping /boot Jan 24 00:32:02.690284 systemd-udevd[1694]: Using default interface naming scheme 'v255'. Jan 24 00:32:02.698599 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:32:02.698615 systemd-tmpfiles[1693]: Skipping /boot Jan 24 00:32:02.879183 (udev-worker)[1729]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:32:02.883161 zram_generator::config[1739]: No configuration found. Jan 24 00:32:02.894665 ldconfig[1579]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:32:02.962849 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:32:02.994873 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:32:02.999846 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:32:03.020896 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:32:03.024917 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 24 00:32:03.045875 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:32:03.090128 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1731) Jan 24 00:32:03.194956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:03.220852 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:32:03.319490 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:32:03.319752 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:32:03.320775 systemd[1]: Reloading finished in 678 ms. Jan 24 00:32:03.335713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:32:03.337591 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:32:03.342569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:32:03.366402 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:32:03.367186 systemd[1]: Finished ensure-sysext.service. 
Jan 24 00:32:03.392877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:32:03.402091 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:32:03.407046 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:32:03.407966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:32:03.413519 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:32:03.421041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:32:03.426043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:32:03.429298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:32:03.443186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:32:03.444675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:32:03.463369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:32:03.470900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:32:03.479583 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:32:03.484029 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:32:03.487983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:32:03.488661 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:32:03.514183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:32:03.519705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:32:03.521116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:32:03.524434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:32:03.524653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:32:03.525728 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:32:03.526104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:32:03.527626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:32:03.528247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:32:03.530331 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:32:03.530527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:32:03.532798 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:32:03.538876 augenrules[1914]: No rules Jan 24 00:32:03.538192 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:32:03.540711 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:32:03.560563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:32:03.569099 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 24 00:32:03.570920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:32:03.571021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:32:03.577279 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:32:03.579077 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:32:03.591481 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:32:03.607272 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:32:03.618159 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:32:03.633374 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:32:03.642918 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:32:03.653245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:32:03.655179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:32:03.667366 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:32:03.699599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:32:03.755673 systemd-networkd[1906]: lo: Link UP Jan 24 00:32:03.755685 systemd-networkd[1906]: lo: Gained carrier Jan 24 00:32:03.757808 systemd-networkd[1906]: Enumeration completed Jan 24 00:32:03.757973 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:32:03.758856 systemd-networkd[1906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:32:03.758866 systemd-networkd[1906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:32:03.765217 systemd-networkd[1906]: eth0: Link UP Jan 24 00:32:03.765436 systemd-networkd[1906]: eth0: Gained carrier Jan 24 00:32:03.765466 systemd-networkd[1906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:32:03.769290 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:32:03.773875 systemd-resolved[1909]: Positive Trust Anchors: Jan 24 00:32:03.774478 systemd-resolved[1909]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:32:03.774538 systemd-resolved[1909]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:32:03.775917 systemd-networkd[1906]: eth0: DHCPv4 address 172.31.28.170/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:32:03.788587 systemd-resolved[1909]: Defaulting to hostname 'linux'. Jan 24 00:32:03.790535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:32:03.791138 systemd[1]: Reached target network.target - Network. Jan 24 00:32:03.791575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:32:03.792036 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:32:03.792519 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:32:03.792976 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:32:03.793512 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:32:03.794007 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:32:03.794458 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:32:03.794847 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:32:03.794887 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:32:03.795270 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:32:03.796988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:32:03.798811 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:32:03.807173 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:32:03.808351 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:32:03.808956 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:32:03.809380 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:32:03.809807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:32:03.809866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:32:03.811401 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:32:03.816065 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:32:03.828094 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:32:03.831083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:32:03.833202 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
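The DHCPv4 lease logged above (172.31.28.170/20 via gateway 172.31.16.1) can be sanity-checked with the standard ipaddress module: the /20 prefix places both the address and the gateway in 172.31.16.0/20.

```python
# Check that the leased address and its gateway share the same /20 network.
import ipaddress

iface = ipaddress.ip_interface("172.31.28.170/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True
```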
Jan 24 00:32:03.833815 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:32:03.841122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:32:03.856038 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:32:03.861780 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:32:03.869091 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:32:03.874059 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:32:03.891314 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:32:03.905176 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:32:03.907735 jq[1952]: false Jan 24 00:32:03.907318 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:32:03.908037 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:32:03.912316 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:32:03.916096 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:32:03.927326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:32:03.927601 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:32:03.943845 jq[1968]: true Jan 24 00:32:03.964592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:32:03.966791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:32:03.979127 extend-filesystems[1953]: Found loop4 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found loop5 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found loop6 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found loop7 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p1 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p2 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p3 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found usr Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p4 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p6 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p7 Jan 24 00:32:03.979127 extend-filesystems[1953]: Found nvme0n1p9 Jan 24 00:32:03.979127 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Jan 24 00:32:04.070018 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 24 00:32:04.070069 tar[1977]: linux-amd64/LICENSE Jan 24 00:32:04.070069 tar[1977]: linux-amd64/helm Jan 24 00:32:04.070416 update_engine[1967]: I20260124 00:32:04.063580 1967 main.cc:92] Flatcar Update Engine starting Jan 24 00:32:04.075934 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9 Jan 24 00:32:03.993490 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 24 00:32:04.072146 dbus-daemon[1951]: [system] SELinux support is enabled Jan 24 00:32:04.079438 jq[1976]: true Jan 24 00:32:04.079652 extend-filesystems[1991]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:32:03.998434 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:32:04.094056 update_engine[1967]: I20260124 00:32:04.091058 1967 update_check_scheduler.cc:74] Next update check in 6m54s Jan 24 00:32:04.082384 dbus-daemon[1951]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1906 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:32:04.077005 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:32:04.086985 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:32:04.084207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:32:04.084276 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:32:04.087204 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:32:04.087234 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:32:04.092705 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:32:04.106249 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: ---------------------------------------------------- Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: corporation. Support and training for ntp-4 are Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: available at https://www.nwtime.org/support Jan 24 00:32:04.116038 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: ---------------------------------------------------- Jan 24 00:32:04.111968 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:32:04.109845 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:32:04.111996 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:32:04.111416 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:32:04.112009 ntpd[1955]: ---------------------------------------------------- Jan 24 00:32:04.112020 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:32:04.112031 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:32:04.112040 ntpd[1955]: corporation. 
Support and training for ntp-4 are Jan 24 00:32:04.112051 ntpd[1955]: available at https://www.nwtime.org/support Jan 24 00:32:04.112061 ntpd[1955]: ---------------------------------------------------- Jan 24 00:32:04.122628 ntpd[1955]: proto: precision = 0.070 usec (-24) Jan 24 00:32:04.131474 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: proto: precision = 0.070 usec (-24) Jan 24 00:32:04.131474 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: basedate set to 2026-01-11 Jan 24 00:32:04.131474 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: gps base set to 2026-01-11 (week 2401) Jan 24 00:32:04.125090 ntpd[1955]: basedate set to 2026-01-11 Jan 24 00:32:04.125113 ntpd[1955]: gps base set to 2026-01-11 (week 2401) Jan 24 00:32:04.140261 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:32:04.140719 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:32:04.141145 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:32:04.141600 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:32:04.142679 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:32:04.143037 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:32:04.143150 ntpd[1955]: Listen normally on 3 eth0 172.31.28.170:123 Jan 24 00:32:04.144255 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listen normally on 3 eth0 172.31.28.170:123 Jan 24 00:32:04.144255 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listen normally on 4 lo [::1]:123 Jan 24 00:32:04.144255 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: bind(21) AF_INET6 fe80::43c:3eff:fe2a:5669%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:32:04.144255 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: unable to create socket on eth0 (5) for fe80::43c:3eff:fe2a:5669%2#123 Jan 24 00:32:04.143209 ntpd[1955]: Listen normally on 4 lo [::1]:123 Jan 24 00:32:04.145429 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: failed to init interface for address fe80::43c:3eff:fe2a:5669%2 Jan 24 00:32:04.145429 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Jan 24 00:32:04.147876 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 24 00:32:04.143918 ntpd[1955]: bind(21) AF_INET6 fe80::43c:3eff:fe2a:5669%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:32:04.143950 ntpd[1955]: unable to create socket on eth0 (5) for fe80::43c:3eff:fe2a:5669%2#123 Jan 24 00:32:04.143969 ntpd[1955]: failed to init interface for address fe80::43c:3eff:fe2a:5669%2 Jan 24 00:32:04.144442 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Jan 24 00:32:04.170584 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:32:04.170584 ntpd[1955]: 24 Jan 00:32:04 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:32:04.156798 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:32:04.164698 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:32:04.177951 extend-filesystems[1991]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 24 00:32:04.177951 extend-filesystems[1991]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:32:04.177951 extend-filesystems[1991]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
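Here resize2fs grows the mounted ext4 root online, from 553472 to 3587067 4 KiB blocks. A minimal sketch of the same grow step, assuming root privileges and the device path taken from the log; with no explicit size argument resize2fs expands the filesystem to fill its partition:

    # Grow a mounted ext4 filesystem to fill its partition, as the
    # extend-filesystems service does above; ext4 supports online growth.
    import subprocess

    DEVICE = "/dev/nvme0n1p9"  # root partition, taken from the log

    # With no size argument, resize2fs expands the filesystem to the
    # full size of the underlying partition.
    subprocess.run(["resize2fs", DEVICE], check=True)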
Jan 24 00:32:04.191475 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Jan 24 00:32:04.192014 coreos-metadata[1950]: Jan 24 00:32:04.183 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:32:04.180391 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:32:04.181202 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:32:04.189543 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:32:04.200810 systemd-logind[1963]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:32:04.204777 systemd-logind[1963]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 24 00:32:04.204818 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:32:04.205085 systemd-logind[1963]: New seat seat0. Jan 24 00:32:04.209198 coreos-metadata[1950]: Jan 24 00:32:04.208 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 24 00:32:04.210351 coreos-metadata[1950]: Jan 24 00:32:04.209 INFO Fetch successful Jan 24 00:32:04.210351 coreos-metadata[1950]: Jan 24 00:32:04.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 24 00:32:04.213454 coreos-metadata[1950]: Jan 24 00:32:04.212 INFO Fetch successful Jan 24 00:32:04.213454 coreos-metadata[1950]: Jan 24 00:32:04.213 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 24 00:32:04.214264 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:32:04.215584 coreos-metadata[1950]: Jan 24 00:32:04.214 INFO Fetch successful Jan 24 00:32:04.215584 coreos-metadata[1950]: Jan 24 00:32:04.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 24 00:32:04.221490 coreos-metadata[1950]: Jan 24 00:32:04.218 INFO Fetch successful Jan 24 00:32:04.221490 coreos-metadata[1950]: Jan 24 00:32:04.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 24 00:32:04.221490 coreos-metadata[1950]: Jan 24 00:32:04.221 INFO Fetch failed with 404: resource not found Jan 24 00:32:04.221490 coreos-metadata[1950]: Jan 24 00:32:04.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 24 00:32:04.222703 coreos-metadata[1950]: Jan 24 00:32:04.222 INFO Fetch successful Jan 24 00:32:04.222703 coreos-metadata[1950]: Jan 24 00:32:04.222 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 24 00:32:04.224939 coreos-metadata[1950]: Jan 24 00:32:04.223 INFO Fetch successful Jan 24 00:32:04.224939 coreos-metadata[1950]: Jan 24 00:32:04.223 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 24 00:32:04.225111 bash[2027]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:32:04.234214 coreos-metadata[1950]: Jan 24 00:32:04.228 INFO Fetch successful Jan 24 00:32:04.234214 coreos-metadata[1950]: Jan 24 00:32:04.228 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 24 00:32:04.234214 coreos-metadata[1950]: Jan 24 00:32:04.233 INFO Fetch successful Jan 24 00:32:04.234214 coreos-metadata[1950]: Jan 24 00:32:04.233 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 24 00:32:04.237946 coreos-metadata[1950]: Jan 24 00:32:04.236 INFO Fetch successful Jan 24 00:32:04.236316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run 
update-ssh-keys once after Ignition. Jan 24 00:32:04.247150 systemd[1]: Starting sshkeys.service... Jan 24 00:32:04.281540 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1733) Jan 24 00:32:04.354928 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:32:04.364262 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:32:04.398372 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:32:04.400666 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:32:04.552062 coreos-metadata[2040]: Jan 24 00:32:04.551 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:32:04.553503 coreos-metadata[2040]: Jan 24 00:32:04.553 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 24 00:32:04.558543 coreos-metadata[2040]: Jan 24 00:32:04.556 INFO Fetch successful Jan 24 00:32:04.558543 coreos-metadata[2040]: Jan 24 00:32:04.557 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:32:04.558543 coreos-metadata[2040]: Jan 24 00:32:04.558 INFO Fetch successful Jan 24 00:32:04.573012 unknown[2040]: wrote ssh authorized keys file for user: core Jan 24 00:32:04.664950 update-ssh-keys[2107]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:32:04.668905 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:32:04.680097 systemd[1]: Finished sshkeys.service. Jan 24 00:32:04.722134 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:32:04.722340 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:32:04.727856 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2001 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:32:04.737195 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:32:04.783595 locksmithd[2004]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:32:04.801936 polkitd[2143]: Started polkitd version 121 Jan 24 00:32:04.821168 polkitd[2143]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:32:04.821261 polkitd[2143]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:32:04.831479 polkitd[2143]: Finished loading, compiling and executing 2 rules Jan 24 00:32:04.832526 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:32:04.832731 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:32:04.834670 polkitd[2143]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:32:04.876072 systemd-resolved[1909]: System hostname changed to 'ip-172-31-28-170'. Jan 24 00:32:04.876514 systemd-hostnamed[2001]: Hostname set to (transient) Jan 24 00:32:04.880005 systemd-networkd[1906]: eth0: Gained IPv6LL Jan 24 00:32:04.887593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:32:04.888876 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:32:04.898179 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
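coreos-metadata follows the IMDSv2 flow visible above: a PUT to the token endpoint, then GETs against the versioned meta-data paths using that token. A minimal sketch of the same flow; the URL paths mirror the ones in the log and the header names are the standard IMDSv2 ones:

    # Fetch EC2 instance metadata the way coreos-metadata does above:
    # request an IMDSv2 session token, then use it for meta-data GETs.
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    def get(path: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    print(get("instance-id"))
    print(get("public-keys/0/openssh-key"))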
Jan 24 00:32:04.914971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:04.921032 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:32:04.969275 containerd[1979]: time="2026-01-24T00:32:04.969171589Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:32:05.048747 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:32:05.081680 amazon-ssm-agent[2153]: Initializing new seelog logger Jan 24 00:32:05.083055 amazon-ssm-agent[2153]: New Seelog Logger Creation Complete Jan 24 00:32:05.083233 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.083233 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.083797 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 processing appconfig overrides Jan 24 00:32:05.089646 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.089646 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.089780 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 processing appconfig overrides Jan 24 00:32:05.090843 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.090843 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.090843 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 processing appconfig overrides Jan 24 00:32:05.090843 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO Proxy environment variables: Jan 24 00:32:05.097322 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.101846 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:32:05.101846 amazon-ssm-agent[2153]: 2026/01/24 00:32:05 processing appconfig overrides Jan 24 00:32:05.108510 containerd[1979]: time="2026-01-24T00:32:05.108451073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.112887 containerd[1979]: time="2026-01-24T00:32:05.112782872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:32:05.112887 containerd[1979]: time="2026-01-24T00:32:05.112847577Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:32:05.112887 containerd[1979]: time="2026-01-24T00:32:05.112883525Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:32:05.113106 containerd[1979]: time="2026-01-24T00:32:05.113082922Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:32:05.113152 containerd[1979]: time="2026-01-24T00:32:05.113114743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.113212 containerd[1979]: time="2026-01-24T00:32:05.113191710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:32:05.113252 containerd[1979]: time="2026-01-24T00:32:05.113218033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.114621 containerd[1979]: time="2026-01-24T00:32:05.114585132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:32:05.114621 containerd[1979]: time="2026-01-24T00:32:05.114621087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.114744 containerd[1979]: time="2026-01-24T00:32:05.114641328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:32:05.114744 containerd[1979]: time="2026-01-24T00:32:05.114655120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.114818 containerd[1979]: time="2026-01-24T00:32:05.114774528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.116121 containerd[1979]: time="2026-01-24T00:32:05.116091395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:32:05.117063 containerd[1979]: time="2026-01-24T00:32:05.117031602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:32:05.117127 containerd[1979]: time="2026-01-24T00:32:05.117064562Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:32:05.117201 containerd[1979]: time="2026-01-24T00:32:05.117182156Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:32:05.117267 containerd[1979]: time="2026-01-24T00:32:05.117249926Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:32:05.131425 containerd[1979]: time="2026-01-24T00:32:05.131374326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:32:05.131886 containerd[1979]: time="2026-01-24T00:32:05.131864869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:32:05.131956 containerd[1979]: time="2026-01-24T00:32:05.131938509Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:32:05.131996 containerd[1979]: time="2026-01-24T00:32:05.131971599Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:32:05.132032 containerd[1979]: time="2026-01-24T00:32:05.131996882Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:32:05.132206 containerd[1979]: time="2026-01-24T00:32:05.132186176Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 24 00:32:05.132971 containerd[1979]: time="2026-01-24T00:32:05.132944317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:32:05.133138 containerd[1979]: time="2026-01-24T00:32:05.133115475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:32:05.133187 containerd[1979]: time="2026-01-24T00:32:05.133150989Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:32:05.133187 containerd[1979]: time="2026-01-24T00:32:05.133172577Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:32:05.133259 containerd[1979]: time="2026-01-24T00:32:05.133193334Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133259 containerd[1979]: time="2026-01-24T00:32:05.133214222Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133259 containerd[1979]: time="2026-01-24T00:32:05.133233634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133256164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133278926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133298780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133316958Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133336167Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:32:05.133377 containerd[1979]: time="2026-01-24T00:32:05.133365211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133386199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133405636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133433850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133452575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133472390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133491348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133513013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133533571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133556074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133585 containerd[1979]: time="2026-01-24T00:32:05.133575353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133593373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133612935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133635098Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133664631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133682760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.133952 containerd[1979]: time="2026-01-24T00:32:05.133700446Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.135946917Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136891226Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136917953Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136937444Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136951747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136981193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.136997113Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:32:05.137926 containerd[1979]: time="2026-01-24T00:32:05.137012114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:32:05.138277 containerd[1979]: time="2026-01-24T00:32:05.137434236Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:32:05.138277 containerd[1979]: time="2026-01-24T00:32:05.137524575Z" level=info msg="Connect containerd service" Jan 24 00:32:05.138277 containerd[1979]: time="2026-01-24T00:32:05.137581657Z" level=info msg="using legacy CRI server" Jan 24 00:32:05.138277 containerd[1979]: time="2026-01-24T00:32:05.137592962Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:32:05.138277 containerd[1979]: time="2026-01-24T00:32:05.137736941Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:32:05.144850 containerd[1979]: time="2026-01-24T00:32:05.142263390Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:32:05.144850 
containerd[1979]: time="2026-01-24T00:32:05.142736114Z" level=info msg="Start subscribing containerd event" Jan 24 00:32:05.146087 containerd[1979]: time="2026-01-24T00:32:05.146042827Z" level=info msg="Start recovering state" Jan 24 00:32:05.146190 containerd[1979]: time="2026-01-24T00:32:05.146173086Z" level=info msg="Start event monitor" Jan 24 00:32:05.146246 containerd[1979]: time="2026-01-24T00:32:05.146199084Z" level=info msg="Start snapshots syncer" Jan 24 00:32:05.146246 containerd[1979]: time="2026-01-24T00:32:05.146214667Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:32:05.146246 containerd[1979]: time="2026-01-24T00:32:05.146240924Z" level=info msg="Start streaming server" Jan 24 00:32:05.146602 containerd[1979]: time="2026-01-24T00:32:05.146577333Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:32:05.146675 containerd[1979]: time="2026-01-24T00:32:05.146654396Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:32:05.146837 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:32:05.154239 sshd_keygen[2007]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:32:05.156907 containerd[1979]: time="2026-01-24T00:32:05.156860463Z" level=info msg="containerd successfully booted in 0.192668s" Jan 24 00:32:05.190477 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO https_proxy: Jan 24 00:32:05.218653 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:32:05.229189 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:32:05.255401 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:32:05.257346 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:32:05.270196 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:32:05.289979 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO http_proxy: Jan 24 00:32:05.293080 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:32:05.301524 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:32:05.312622 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:32:05.314096 systemd[1]: Reached target getty.target - Login Prompts. 
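The CRI plugin's "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is still empty, and pod networking stays unavailable until a CNI plugin installs a config there. A small sketch that reproduces the check, using the NetworkPluginConfDir and NetworkPluginBinDir values from the CRI config dump:

    # Check whether the CRI plugin would find a CNI network config,
    # mirroring the "no network config found in /etc/cni/net.d" error.
    import glob
    import os

    CONF_DIR = "/etc/cni/net.d"   # NetworkPluginConfDir from the log
    BIN_DIR = "/opt/cni/bin"      # NetworkPluginBinDir from the log

    confs = sorted(glob.glob(os.path.join(CONF_DIR, "*.conf*")))
    if confs:
        print("CNI configs:", confs)
    else:
        print(f"no CNI config in {CONF_DIR}; pod networking is not ready yet")

    have_bins = os.path.isdir(BIN_DIR) and bool(os.listdir(BIN_DIR))
    print("CNI binaries present in", BIN_DIR, ":", have_bins)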
Jan 24 00:32:05.388189 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO no_proxy: Jan 24 00:32:05.486216 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO Checking if agent identity type OnPrem can be assumed Jan 24 00:32:05.587266 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO Checking if agent identity type EC2 can be assumed Jan 24 00:32:05.607289 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO Agent will take identity from EC2 Jan 24 00:32:05.607289 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:32:05.607289 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] Starting Core Agent Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [Registrar] Starting registrar module Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [EC2Identity] EC2 registration was successful. Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [CredentialRefresher] credentialRefresher has started Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [CredentialRefresher] Starting credentials refresher loop Jan 24 00:32:05.607494 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 24 00:32:05.626622 tar[1977]: linux-amd64/README.md Jan 24 00:32:05.638088 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:32:05.685182 amazon-ssm-agent[2153]: 2026-01-24 00:32:05 INFO [CredentialRefresher] Next credential rotation will be in 31.283322573733333 minutes Jan 24 00:32:05.762128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:32:05.770401 systemd[1]: Started sshd@0-172.31.28.170:22-4.153.228.146:36362.service - OpenSSH per-connection server daemon (4.153.228.146:36362). Jan 24 00:32:06.274079 sshd[2195]: Accepted publickey for core from 4.153.228.146 port 36362 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:32:06.276548 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:06.285162 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:32:06.298211 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:32:06.301245 systemd-logind[1963]: New session 1 of user core. Jan 24 00:32:06.314058 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:32:06.323275 systemd[1]: Starting user@500.service - User Manager for UID 500... 
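sshd identifies the accepted key above only by its SHA256 fingerprint. OpenSSH derives that fingerprint as the unpadded base64 of the SHA-256 digest of the raw key blob, so it can be cross-checked against the authorized_keys file updated earlier in the log. A minimal sketch, handling plain key lines only (option-prefixed entries are not covered):

    # Recompute the SHA256 fingerprint sshd logs for an accepted key so
    # it can be matched against the authorized_keys file written above.
    import base64
    import hashlib

    def fingerprint(line: str) -> str:
        # Expects "<type> <base64-blob> [comment]" lines.
        key_b64 = line.split()[1]
        digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                print(fingerprint(line), line.split()[0])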
Jan 24 00:32:06.328172 (systemd)[2199]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:32:06.462100 systemd[2199]: Queued start job for default target default.target. Jan 24 00:32:06.469153 systemd[2199]: Created slice app.slice - User Application Slice. Jan 24 00:32:06.469198 systemd[2199]: Reached target paths.target - Paths. Jan 24 00:32:06.469218 systemd[2199]: Reached target timers.target - Timers. Jan 24 00:32:06.470947 systemd[2199]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:32:06.496401 systemd[2199]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:32:06.496661 systemd[2199]: Reached target sockets.target - Sockets. Jan 24 00:32:06.496684 systemd[2199]: Reached target basic.target - Basic System. Jan 24 00:32:06.496743 systemd[2199]: Reached target default.target - Main User Target. Jan 24 00:32:06.496785 systemd[2199]: Startup finished in 161ms. Jan 24 00:32:06.497046 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:32:06.509117 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:32:06.622716 amazon-ssm-agent[2153]: 2026-01-24 00:32:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 24 00:32:06.723847 amazon-ssm-agent[2153]: 2026-01-24 00:32:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2209) started Jan 24 00:32:06.823543 amazon-ssm-agent[2153]: 2026-01-24 00:32:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 24 00:32:06.874180 systemd[1]: Started sshd@1-172.31.28.170:22-4.153.228.146:36370.service - OpenSSH per-connection server daemon (4.153.228.146:36370). Jan 24 00:32:07.112493 ntpd[1955]: Listen normally on 6 eth0 [fe80::43c:3eff:fe2a:5669%2]:123 Jan 24 00:32:07.112876 ntpd[1955]: 24 Jan 00:32:07 ntpd[1955]: Listen normally on 6 eth0 [fe80::43c:3eff:fe2a:5669%2]:123 Jan 24 00:32:07.364413 sshd[2221]: Accepted publickey for core from 4.153.228.146 port 36370 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:32:07.366152 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:07.371333 systemd-logind[1963]: New session 2 of user core. Jan 24 00:32:07.378090 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:32:07.593465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:07.595297 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:32:07.596819 systemd[1]: Startup finished in 593ms (kernel) + 7.160s (initrd) + 8.780s (userspace) = 16.534s. Jan 24 00:32:07.601943 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:32:07.720422 sshd[2221]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:07.723873 systemd[1]: sshd@1-172.31.28.170:22-4.153.228.146:36370.service: Deactivated successfully. Jan 24 00:32:07.725587 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:32:07.726196 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:32:07.727492 systemd-logind[1963]: Removed session 2. Jan 24 00:32:07.806875 systemd[1]: Started sshd@2-172.31.28.170:22-4.153.228.146:36376.service - OpenSSH per-connection server daemon (4.153.228.146:36376). 
Jan 24 00:32:08.299250 sshd[2238]: Accepted publickey for core from 4.153.228.146 port 36376 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:32:08.301422 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:08.308480 systemd-logind[1963]: New session 3 of user core. Jan 24 00:32:08.315030 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:32:08.650718 sshd[2238]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:08.653926 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:32:08.654384 systemd[1]: sshd@2-172.31.28.170:22-4.153.228.146:36376.service: Deactivated successfully. Jan 24 00:32:08.656902 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:32:08.659450 systemd-logind[1963]: Removed session 3. Jan 24 00:32:08.685876 kubelet[2229]: E0124 00:32:08.685779 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:32:08.688215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:32:08.688388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:32:08.688663 systemd[1]: kubelet.service: Consumed 1.061s CPU time. Jan 24 00:32:08.735052 systemd[1]: Started sshd@3-172.31.28.170:22-4.153.228.146:36380.service - OpenSSH per-connection server daemon (4.153.228.146:36380). Jan 24 00:32:09.217531 sshd[2250]: Accepted publickey for core from 4.153.228.146 port 36380 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:32:09.219143 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:09.223291 systemd-logind[1963]: New session 4 of user core. Jan 24 00:32:09.233121 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:32:09.568081 sshd[2250]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:09.571086 systemd[1]: sshd@3-172.31.28.170:22-4.153.228.146:36380.service: Deactivated successfully. Jan 24 00:32:09.573279 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:32:09.574194 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:32:09.575404 systemd-logind[1963]: Removed session 4. Jan 24 00:32:09.659231 systemd[1]: Started sshd@4-172.31.28.170:22-4.153.228.146:36384.service - OpenSSH per-connection server daemon (4.153.228.146:36384). Jan 24 00:32:10.140465 sshd[2257]: Accepted publickey for core from 4.153.228.146 port 36384 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:32:10.141968 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:10.147442 systemd-logind[1963]: New session 5 of user core. Jan 24 00:32:10.154094 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:32:10.447047 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:32:10.447469 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:32:10.996125 systemd[1]: Starting docker.service - Docker Application Container Engine... 
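The kubelet exit a little earlier ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet run kubeadm init or join, since kubeadm is what writes that file. A hedged sketch of a minimal KubeletConfiguration of the kind kubeadm would generate; the values are illustrative placeholders, with cgroupDriver: systemd matching the SystemdCgroup setting seen in the containerd CRI config above:

    # Write a minimal KubeletConfiguration so the kubelet has a config
    # file to load. kubeadm normally generates this during init/join;
    # the values below are placeholders, not this host's real config.
    import os
    import textwrap

    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        """)

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(CONFIG)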
Jan 24 00:32:10.997906 (dockerd)[2275]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:32:11.623503 systemd-resolved[1909]: Clock change detected. Flushing caches. Jan 24 00:32:12.106831 dockerd[2275]: time="2026-01-24T00:32:12.106743990Z" level=info msg="Starting up" Jan 24 00:32:12.273521 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3440566288-merged.mount: Deactivated successfully. Jan 24 00:32:12.300968 systemd[1]: var-lib-docker-metacopy\x2dcheck3699822226-merged.mount: Deactivated successfully. Jan 24 00:32:12.324934 dockerd[2275]: time="2026-01-24T00:32:12.324889553Z" level=info msg="Loading containers: start." Jan 24 00:32:12.510787 kernel: Initializing XFRM netlink socket Jan 24 00:32:12.550899 (udev-worker)[2297]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:32:12.608847 systemd-networkd[1906]: docker0: Link UP Jan 24 00:32:12.634600 dockerd[2275]: time="2026-01-24T00:32:12.634554843Z" level=info msg="Loading containers: done." Jan 24 00:32:12.676160 dockerd[2275]: time="2026-01-24T00:32:12.676086864Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:32:12.676334 dockerd[2275]: time="2026-01-24T00:32:12.676198854Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:32:12.676334 dockerd[2275]: time="2026-01-24T00:32:12.676311811Z" level=info msg="Daemon has completed initialization" Jan 24 00:32:12.725149 dockerd[2275]: time="2026-01-24T00:32:12.724733358Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:32:12.725026 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:32:13.976117 containerd[1979]: time="2026-01-24T00:32:13.976072339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:32:14.659083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007028771.mount: Deactivated successfully. 
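dockerd comes up with the overlay2 storage driver and warns that native diff is disabled because the kernel enables overlayfs redirect_dir. A small sketch to confirm both on a running daemon; the docker info format string is standard CLI usage, while the sysfs path for the module parameter is an assumption about where the kernel exposes it:

    # Confirm the storage driver dockerd selected and the overlayfs
    # redirect_dir setting behind the "Not using native diff" warning.
    import pathlib
    import subprocess

    driver = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("storage driver:", driver)

    # Assumed sysfs location for the overlay module parameter.
    redirect = pathlib.Path("/sys/module/overlay/parameters/redirect_dir")
    if redirect.exists():
        print("overlay redirect_dir:", redirect.read_text().strip())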
Jan 24 00:32:16.855374 containerd[1979]: time="2026-01-24T00:32:16.855312776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:16.856737 containerd[1979]: time="2026-01-24T00:32:16.856575449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:32:16.859738 containerd[1979]: time="2026-01-24T00:32:16.858387942Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:16.862061 containerd[1979]: time="2026-01-24T00:32:16.862021166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:16.863372 containerd[1979]: time="2026-01-24T00:32:16.863178847Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.887063901s" Jan 24 00:32:16.863372 containerd[1979]: time="2026-01-24T00:32:16.863223179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:32:16.864078 containerd[1979]: time="2026-01-24T00:32:16.864044851Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:32:18.813677 containerd[1979]: time="2026-01-24T00:32:18.813620677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:18.815710 containerd[1979]: time="2026-01-24T00:32:18.815645566Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:32:18.821671 containerd[1979]: time="2026-01-24T00:32:18.820981420Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:18.825973 containerd[1979]: time="2026-01-24T00:32:18.825934023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:18.826690 containerd[1979]: time="2026-01-24T00:32:18.826649986Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.962567577s" Jan 24 00:32:18.826690 containerd[1979]: time="2026-01-24T00:32:18.826690703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:32:18.827915 containerd[1979]: time="2026-01-24T00:32:18.827891924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:32:19.450057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:32:19.458114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:19.676013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:19.685204 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:32:19.747991 kubelet[2482]: E0124 00:32:19.747861 2482 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:32:19.752142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:32:19.752354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:32:20.401846 containerd[1979]: time="2026-01-24T00:32:20.401800670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:20.403814 containerd[1979]: time="2026-01-24T00:32:20.403741500Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:32:20.406106 containerd[1979]: time="2026-01-24T00:32:20.406070269Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:20.412658 containerd[1979]: time="2026-01-24T00:32:20.412598173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:20.413748 containerd[1979]: time="2026-01-24T00:32:20.413696080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.585774332s" Jan 24 00:32:20.413748 containerd[1979]: time="2026-01-24T00:32:20.413740887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:32:20.415024 containerd[1979]: time="2026-01-24T00:32:20.415001431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:32:21.491544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241912596.mount: Deactivated successfully. 
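Each pull above reports the bytes read and the wall-clock duration, which together give an effective pull rate; for the kube-apiserver image that works out to roughly 10 MB/s. A tiny sketch of the arithmetic with the numbers copied from the log:

    # Effective pull rate for the kube-apiserver image, using the
    # "bytes read" and duration values reported above.
    bytes_read = 29_070_647    # from "bytes read=29070647"
    seconds = 2.887063901      # from "in 2.887063901s"

    rate = bytes_read / seconds
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
    # -> roughly 10.1 MB/s (9.6 MiB/s)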
Jan 24 00:32:22.098431 containerd[1979]: time="2026-01-24T00:32:22.098375953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:22.100438 containerd[1979]: time="2026-01-24T00:32:22.100391449Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:32:22.103917 containerd[1979]: time="2026-01-24T00:32:22.102851221Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:22.106809 containerd[1979]: time="2026-01-24T00:32:22.106059913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:22.106809 containerd[1979]: time="2026-01-24T00:32:22.106667167Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.691635446s" Jan 24 00:32:22.106809 containerd[1979]: time="2026-01-24T00:32:22.106695763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:32:22.107512 containerd[1979]: time="2026-01-24T00:32:22.107484421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:32:22.617541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536799687.mount: Deactivated successfully. 
Jan 24 00:32:23.714671 containerd[1979]: time="2026-01-24T00:32:23.713129077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:23.717395 containerd[1979]: time="2026-01-24T00:32:23.717328228Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:32:23.720617 containerd[1979]: time="2026-01-24T00:32:23.720573470Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:23.727794 containerd[1979]: time="2026-01-24T00:32:23.727594234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:23.728495 containerd[1979]: time="2026-01-24T00:32:23.728259714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.620742211s" Jan 24 00:32:23.728495 containerd[1979]: time="2026-01-24T00:32:23.728288646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:32:23.728961 containerd[1979]: time="2026-01-24T00:32:23.728935450Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:32:24.219025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222399355.mount: Deactivated successfully. 
Jan 24 00:32:24.231651 containerd[1979]: time="2026-01-24T00:32:24.231601843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:24.233746 containerd[1979]: time="2026-01-24T00:32:24.233578223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:32:24.237426 containerd[1979]: time="2026-01-24T00:32:24.235814039Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:24.240322 containerd[1979]: time="2026-01-24T00:32:24.239512710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:24.240322 containerd[1979]: time="2026-01-24T00:32:24.240204537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 511.235266ms" Jan 24 00:32:24.240322 containerd[1979]: time="2026-01-24T00:32:24.240232578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:32:24.241146 containerd[1979]: time="2026-01-24T00:32:24.241032484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:32:24.814103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421367279.mount: Deactivated successfully. Jan 24 00:32:27.244359 containerd[1979]: time="2026-01-24T00:32:27.244304431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:27.248663 containerd[1979]: time="2026-01-24T00:32:27.248578740Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 24 00:32:27.251892 containerd[1979]: time="2026-01-24T00:32:27.251821100Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:27.257032 containerd[1979]: time="2026-01-24T00:32:27.256988292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:27.258645 containerd[1979]: time="2026-01-24T00:32:27.258490459Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.017421181s" Jan 24 00:32:27.258645 containerd[1979]: time="2026-01-24T00:32:27.258529539Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:32:29.611505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:32:29.618142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:29.655401 systemd[1]: Reloading requested from client PID 2638 ('systemctl') (unit session-5.scope)... Jan 24 00:32:29.655420 systemd[1]: Reloading... Jan 24 00:32:29.780790 zram_generator::config[2679]: No configuration found. Jan 24 00:32:29.950428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:30.037574 systemd[1]: Reloading finished in 381 ms. Jan 24 00:32:30.098965 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:32:30.099070 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:32:30.099357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:30.104252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:30.313790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:30.325393 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:32:30.383510 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:32:30.383510 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:32:30.383510 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
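The restarted kubelet warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and should move into the file passed via --config. A short sketch of the KubeletConfiguration (v1beta1) field names those flags are assumed to map to; verify against the schema for the running v1.32 kubelet before relying on them:

    # Assumed KubeletConfiguration (v1beta1) fields that replace the
    # deprecated flags warned about above. --pod-infra-container-image
    # has no config-file equivalent; per the log, the image garbage
    # collector takes the sandbox image from CRI instead.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:35} -> {field}")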
Jan 24 00:32:30.385684 kubelet[2742]: I0124 00:32:30.385615 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:32:30.657506 kubelet[2742]: I0124 00:32:30.656922 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:32:30.657506 kubelet[2742]: I0124 00:32:30.656954 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:32:30.657506 kubelet[2742]: I0124 00:32:30.657328 2742 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:32:30.703381 kubelet[2742]: I0124 00:32:30.702718 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:32:30.707297 kubelet[2742]: E0124 00:32:30.707257 2742 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:30.722332 kubelet[2742]: E0124 00:32:30.722277 2742 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:32:30.722332 kubelet[2742]: I0124 00:32:30.722326 2742 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:32:30.728238 kubelet[2742]: I0124 00:32:30.728171 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:32:30.730548 kubelet[2742]: I0124 00:32:30.730475 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:32:30.730712 kubelet[2742]: I0124 00:32:30.730537 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-170","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:32:30.733672 kubelet[2742]: I0124 00:32:30.733627 2742 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:32:30.733672 kubelet[2742]: I0124 00:32:30.733662 2742 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:32:30.735197 kubelet[2742]: I0124 00:32:30.735149 2742 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:30.740118 kubelet[2742]: I0124 00:32:30.740075 2742 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:32:30.740118 kubelet[2742]: I0124 00:32:30.740126 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:32:30.740279 kubelet[2742]: I0124 00:32:30.740148 2742 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:32:30.740279 kubelet[2742]: I0124 00:32:30.740160 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:32:30.747810 kubelet[2742]: W0124 00:32:30.747302 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:30.747810 kubelet[2742]: E0124 00:32:30.747367 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:30.747810 kubelet[2742]: W0124 
00:32:30.747712 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:30.747810 kubelet[2742]: E0124 00:32:30.747742 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:30.749806 kubelet[2742]: I0124 00:32:30.749754 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:32:30.753792 kubelet[2742]: I0124 00:32:30.753640 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:32:30.753792 kubelet[2742]: W0124 00:32:30.753705 2742 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:32:30.754724 kubelet[2742]: I0124 00:32:30.754579 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:32:30.754724 kubelet[2742]: I0124 00:32:30.754630 2742 server.go:1287] "Started kubelet" Jan 24 00:32:30.754854 kubelet[2742]: I0124 00:32:30.754803 2742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:32:30.757144 kubelet[2742]: I0124 00:32:30.756537 2742 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:32:30.761893 kubelet[2742]: I0124 00:32:30.761568 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:32:30.761893 kubelet[2742]: I0124 00:32:30.761836 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:32:30.767801 kubelet[2742]: I0124 00:32:30.765872 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:32:30.769136 kubelet[2742]: E0124 00:32:30.763897 2742 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.170:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.170:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-170.188d83712e1bf21f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-170,UID:ip-172-31-28-170,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-170,},FirstTimestamp:2026-01-24 00:32:30.754599455 +0000 UTC m=+0.425497955,LastTimestamp:2026-01-24 00:32:30.754599455 +0000 UTC m=+0.425497955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-170,}" Jan 24 00:32:30.773617 kubelet[2742]: I0124 00:32:30.772304 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:32:30.773617 kubelet[2742]: I0124 00:32:30.772509 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:32:30.773617 kubelet[2742]: E0124 00:32:30.772737 2742 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:30.777514 kubelet[2742]: I0124 00:32:30.777489 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:32:30.777628 kubelet[2742]: I0124 00:32:30.777556 2742 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:32:30.778185 kubelet[2742]: I0124 00:32:30.778159 2742 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:32:30.778292 kubelet[2742]: I0124 00:32:30.778264 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:32:30.784220 kubelet[2742]: E0124 00:32:30.784176 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-170?timeout=10s\": dial tcp 172.31.28.170:6443: connect: connection refused" interval="200ms" Jan 24 00:32:30.789826 kubelet[2742]: W0124 00:32:30.785504 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:30.789826 kubelet[2742]: E0124 00:32:30.785566 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:30.791595 kubelet[2742]: E0124 00:32:30.791545 2742 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:32:30.793678 kubelet[2742]: I0124 00:32:30.793656 2742 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:32:30.795446 kubelet[2742]: I0124 00:32:30.794840 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:32:30.798229 kubelet[2742]: I0124 00:32:30.798031 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:32:30.798229 kubelet[2742]: I0124 00:32:30.798056 2742 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:32:30.798229 kubelet[2742]: I0124 00:32:30.798075 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:32:30.798229 kubelet[2742]: I0124 00:32:30.798083 2742 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:32:30.798229 kubelet[2742]: E0124 00:32:30.798125 2742 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:32:30.804096 kubelet[2742]: W0124 00:32:30.800945 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:30.804096 kubelet[2742]: E0124 00:32:30.800996 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:30.810731 kubelet[2742]: I0124 00:32:30.810707 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:32:30.810731 kubelet[2742]: I0124 00:32:30.810722 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:32:30.810731 kubelet[2742]: I0124 00:32:30.810738 2742 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:30.815571 kubelet[2742]: I0124 00:32:30.815532 2742 policy_none.go:49] "None policy: Start" Jan 24 00:32:30.815571 kubelet[2742]: I0124 00:32:30.815564 2742 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:32:30.815739 kubelet[2742]: I0124 00:32:30.815593 2742 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:32:30.823366 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:32:30.837725 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:32:30.842383 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:32:30.853612 kubelet[2742]: I0124 00:32:30.853566 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:32:30.853990 kubelet[2742]: I0124 00:32:30.853842 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:32:30.853990 kubelet[2742]: I0124 00:32:30.853858 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:32:30.855029 kubelet[2742]: I0124 00:32:30.854879 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:32:30.857131 kubelet[2742]: E0124 00:32:30.857098 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:32:30.857228 kubelet[2742]: E0124 00:32:30.857151 2742 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-170\" not found" Jan 24 00:32:30.910279 systemd[1]: Created slice kubepods-burstable-pod7b2c9c10fbeb034c4316933e19c4be35.slice - libcontainer container kubepods-burstable-pod7b2c9c10fbeb034c4316933e19c4be35.slice. 
Jan 24 00:32:30.920942 kubelet[2742]: E0124 00:32:30.920686 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:30.924201 systemd[1]: Created slice kubepods-burstable-podab2552e41acec7a664b167a77d719c13.slice - libcontainer container kubepods-burstable-podab2552e41acec7a664b167a77d719c13.slice. Jan 24 00:32:30.926910 kubelet[2742]: E0124 00:32:30.926865 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:30.929754 systemd[1]: Created slice kubepods-burstable-pod91c77364a5dcc263c3ad264a3810df9b.slice - libcontainer container kubepods-burstable-pod91c77364a5dcc263c3ad264a3810df9b.slice. Jan 24 00:32:30.932064 kubelet[2742]: E0124 00:32:30.932025 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:30.955343 kubelet[2742]: I0124 00:32:30.955309 2742 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:30.955733 kubelet[2742]: E0124 00:32:30.955696 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.170:6443/api/v1/nodes\": dial tcp 172.31.28.170:6443: connect: connection refused" node="ip-172-31-28-170" Jan 24 00:32:30.979427 kubelet[2742]: I0124 00:32:30.979383 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b2c9c10fbeb034c4316933e19c4be35-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-170\" (UID: \"7b2c9c10fbeb034c4316933e19c4be35\") " pod="kube-system/kube-scheduler-ip-172-31-28-170" Jan 24 00:32:30.979574 kubelet[2742]: I0124 00:32:30.979442 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:30.979574 kubelet[2742]: I0124 00:32:30.979489 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:30.979574 kubelet[2742]: I0124 00:32:30.979514 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:30.979574 kubelet[2742]: I0124 00:32:30.979534 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 
00:32:30.979574 kubelet[2742]: I0124 00:32:30.979551 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:30.979711 kubelet[2742]: I0124 00:32:30.979588 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-ca-certs\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:30.979711 kubelet[2742]: I0124 00:32:30.979605 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:30.979711 kubelet[2742]: I0124 00:32:30.979634 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:30.984889 kubelet[2742]: E0124 00:32:30.984847 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-170?timeout=10s\": dial tcp 172.31.28.170:6443: connect: connection refused" interval="400ms" Jan 24 00:32:31.158161 kubelet[2742]: I0124 00:32:31.158133 2742 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:31.158503 kubelet[2742]: E0124 00:32:31.158468 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.170:6443/api/v1/nodes\": dial tcp 172.31.28.170:6443: connect: connection refused" node="ip-172-31-28-170" Jan 24 00:32:31.222635 containerd[1979]: time="2026-01-24T00:32:31.222520077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-170,Uid:7b2c9c10fbeb034c4316933e19c4be35,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:31.232129 containerd[1979]: time="2026-01-24T00:32:31.232084217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-170,Uid:ab2552e41acec7a664b167a77d719c13,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:31.232918 containerd[1979]: time="2026-01-24T00:32:31.232884419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-170,Uid:91c77364a5dcc263c3ad264a3810df9b,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:31.385535 kubelet[2742]: E0124 00:32:31.385476 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-170?timeout=10s\": dial tcp 172.31.28.170:6443: connect: connection refused" interval="800ms" Jan 24 00:32:31.560665 kubelet[2742]: I0124 00:32:31.560559 2742 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:31.560983 kubelet[2742]: E0124 00:32:31.560950 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.170:6443/api/v1/nodes\": dial tcp 172.31.28.170:6443: connect: connection refused" node="ip-172-31-28-170" Jan 24 00:32:31.578789 kubelet[2742]: W0124 00:32:31.578022 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:31.578789 kubelet[2742]: E0124 00:32:31.578094 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:31.704729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411520600.mount: Deactivated successfully. Jan 24 00:32:31.722423 containerd[1979]: time="2026-01-24T00:32:31.722357784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:31.724279 containerd[1979]: time="2026-01-24T00:32:31.724230749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:32:31.726509 containerd[1979]: time="2026-01-24T00:32:31.726463440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:31.728292 containerd[1979]: time="2026-01-24T00:32:31.728249827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:31.730312 containerd[1979]: time="2026-01-24T00:32:31.730260377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:32:31.732668 containerd[1979]: time="2026-01-24T00:32:31.732595542Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:31.734313 containerd[1979]: time="2026-01-24T00:32:31.734241151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:32:31.737988 containerd[1979]: time="2026-01-24T00:32:31.737906768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:31.739810 containerd[1979]: time="2026-01-24T00:32:31.738884942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.931695ms" Jan 24 00:32:31.739964 containerd[1979]: time="2026-01-24T00:32:31.739930863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.332571ms" Jan 24 00:32:31.744457 containerd[1979]: time="2026-01-24T00:32:31.743899622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.735025ms" Jan 24 00:32:31.889886 kubelet[2742]: W0124 00:32:31.889684 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:31.890537 kubelet[2742]: E0124 00:32:31.890479 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:31.913375 containerd[1979]: time="2026-01-24T00:32:31.913180796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:31.913375 containerd[1979]: time="2026-01-24T00:32:31.913314365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:31.913696 containerd[1979]: time="2026-01-24T00:32:31.913357159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.913696 containerd[1979]: time="2026-01-24T00:32:31.913551457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.916278 containerd[1979]: time="2026-01-24T00:32:31.915686785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:31.916278 containerd[1979]: time="2026-01-24T00:32:31.915877990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:31.916278 containerd[1979]: time="2026-01-24T00:32:31.915890505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.916447 containerd[1979]: time="2026-01-24T00:32:31.916293042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.925705 containerd[1979]: time="2026-01-24T00:32:31.924990533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:31.925705 containerd[1979]: time="2026-01-24T00:32:31.925421996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:31.925705 containerd[1979]: time="2026-01-24T00:32:31.925439490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.925705 containerd[1979]: time="2026-01-24T00:32:31.925524763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:31.942957 systemd[1]: Started cri-containerd-e89ad739e1a8b29c383a877efef383b85ed94491a635478e92d5eefc1bb8375c.scope - libcontainer container e89ad739e1a8b29c383a877efef383b85ed94491a635478e92d5eefc1bb8375c. Jan 24 00:32:31.950455 systemd[1]: Started cri-containerd-a108d7b0d0d7ffe6427e20fb09e8f006c8721fd89074553a0f47e86625b3d9ed.scope - libcontainer container a108d7b0d0d7ffe6427e20fb09e8f006c8721fd89074553a0f47e86625b3d9ed. Jan 24 00:32:31.960025 systemd[1]: Started cri-containerd-71bc2c2d1c076037518639153a40423971786c271cbf3d07fa0d9ae53e7f0a65.scope - libcontainer container 71bc2c2d1c076037518639153a40423971786c271cbf3d07fa0d9ae53e7f0a65. Jan 24 00:32:31.992266 kubelet[2742]: W0124 00:32:31.992230 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:31.992403 kubelet[2742]: E0124 00:32:31.992275 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:32.023182 containerd[1979]: time="2026-01-24T00:32:32.023100394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-170,Uid:91c77364a5dcc263c3ad264a3810df9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e89ad739e1a8b29c383a877efef383b85ed94491a635478e92d5eefc1bb8375c\"" Jan 24 00:32:32.037422 containerd[1979]: time="2026-01-24T00:32:32.037377115Z" level=info msg="CreateContainer within sandbox \"e89ad739e1a8b29c383a877efef383b85ed94491a635478e92d5eefc1bb8375c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:32:32.056946 containerd[1979]: time="2026-01-24T00:32:32.054916777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-170,Uid:ab2552e41acec7a664b167a77d719c13,Namespace:kube-system,Attempt:0,} returns sandbox id \"71bc2c2d1c076037518639153a40423971786c271cbf3d07fa0d9ae53e7f0a65\"" Jan 24 00:32:32.066885 containerd[1979]: time="2026-01-24T00:32:32.066749821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-170,Uid:7b2c9c10fbeb034c4316933e19c4be35,Namespace:kube-system,Attempt:0,} returns sandbox id \"a108d7b0d0d7ffe6427e20fb09e8f006c8721fd89074553a0f47e86625b3d9ed\"" Jan 24 00:32:32.070063 containerd[1979]: time="2026-01-24T00:32:32.070022415Z" level=info msg="CreateContainer within sandbox \"71bc2c2d1c076037518639153a40423971786c271cbf3d07fa0d9ae53e7f0a65\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:32:32.072643 containerd[1979]: time="2026-01-24T00:32:32.072608367Z" level=info msg="CreateContainer within sandbox \"a108d7b0d0d7ffe6427e20fb09e8f006c8721fd89074553a0f47e86625b3d9ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:32:32.079562 containerd[1979]: time="2026-01-24T00:32:32.079421641Z" level=info msg="CreateContainer within sandbox \"e89ad739e1a8b29c383a877efef383b85ed94491a635478e92d5eefc1bb8375c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"284f9777b593dd681fc978aff40c38e50af2136f80a4fcdfb663be915dffb8d1\"" Jan 24 00:32:32.080085 containerd[1979]: time="2026-01-24T00:32:32.080061851Z" level=info msg="StartContainer for \"284f9777b593dd681fc978aff40c38e50af2136f80a4fcdfb663be915dffb8d1\"" Jan 24 00:32:32.112136 containerd[1979]: time="2026-01-24T00:32:32.112076963Z" level=info msg="CreateContainer within sandbox \"a108d7b0d0d7ffe6427e20fb09e8f006c8721fd89074553a0f47e86625b3d9ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fe1bb403425fc30517139abf06a0328108e08e7c572e00efe5c1fc76291e194\"" Jan 24 00:32:32.112883 containerd[1979]: time="2026-01-24T00:32:32.112844810Z" level=info msg="StartContainer for \"1fe1bb403425fc30517139abf06a0328108e08e7c572e00efe5c1fc76291e194\"" Jan 24 00:32:32.117233 systemd[1]: Started cri-containerd-284f9777b593dd681fc978aff40c38e50af2136f80a4fcdfb663be915dffb8d1.scope - libcontainer container 284f9777b593dd681fc978aff40c38e50af2136f80a4fcdfb663be915dffb8d1. Jan 24 00:32:32.135387 containerd[1979]: time="2026-01-24T00:32:32.134647360Z" level=info msg="CreateContainer within sandbox \"71bc2c2d1c076037518639153a40423971786c271cbf3d07fa0d9ae53e7f0a65\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94923b5272130b079a567b21bbb3c8578688b6702fb4b6bfae8d431486402850\"" Jan 24 00:32:32.137393 containerd[1979]: time="2026-01-24T00:32:32.137334674Z" level=info msg="StartContainer for \"94923b5272130b079a567b21bbb3c8578688b6702fb4b6bfae8d431486402850\"" Jan 24 00:32:32.159575 kubelet[2742]: W0124 00:32:32.159411 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:32.159575 kubelet[2742]: E0124 00:32:32.159495 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:32.175011 systemd[1]: Started cri-containerd-1fe1bb403425fc30517139abf06a0328108e08e7c572e00efe5c1fc76291e194.scope - libcontainer container 1fe1bb403425fc30517139abf06a0328108e08e7c572e00efe5c1fc76291e194. 
Jan 24 00:32:32.186605 kubelet[2742]: E0124 00:32:32.186550 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-170?timeout=10s\": dial tcp 172.31.28.170:6443: connect: connection refused" interval="1.6s" Jan 24 00:32:32.193003 systemd[1]: Started cri-containerd-94923b5272130b079a567b21bbb3c8578688b6702fb4b6bfae8d431486402850.scope - libcontainer container 94923b5272130b079a567b21bbb3c8578688b6702fb4b6bfae8d431486402850. Jan 24 00:32:32.230464 containerd[1979]: time="2026-01-24T00:32:32.230308050Z" level=info msg="StartContainer for \"284f9777b593dd681fc978aff40c38e50af2136f80a4fcdfb663be915dffb8d1\" returns successfully" Jan 24 00:32:32.279691 containerd[1979]: time="2026-01-24T00:32:32.278807084Z" level=info msg="StartContainer for \"1fe1bb403425fc30517139abf06a0328108e08e7c572e00efe5c1fc76291e194\" returns successfully" Jan 24 00:32:32.287266 containerd[1979]: time="2026-01-24T00:32:32.287222160Z" level=info msg="StartContainer for \"94923b5272130b079a567b21bbb3c8578688b6702fb4b6bfae8d431486402850\" returns successfully" Jan 24 00:32:32.364284 kubelet[2742]: I0124 00:32:32.364076 2742 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:32.365029 kubelet[2742]: E0124 00:32:32.364938 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.170:6443/api/v1/nodes\": dial tcp 172.31.28.170:6443: connect: connection refused" node="ip-172-31-28-170" Jan 24 00:32:32.816326 kubelet[2742]: E0124 00:32:32.816279 2742 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:32.834780 kubelet[2742]: E0124 00:32:32.832991 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:32.837304 kubelet[2742]: E0124 00:32:32.836966 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:32.841054 kubelet[2742]: E0124 00:32:32.841031 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:33.465663 kubelet[2742]: W0124 00:32:33.464848 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:33.465663 kubelet[2742]: E0124 00:32:33.464932 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:33.787782 kubelet[2742]: E0124 00:32:33.787334 2742 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://172.31.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-170?timeout=10s\": dial tcp 172.31.28.170:6443: connect: connection refused" interval="3.2s" Jan 24 00:32:33.841385 kubelet[2742]: E0124 00:32:33.841067 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:33.842291 kubelet[2742]: E0124 00:32:33.842091 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:33.967287 kubelet[2742]: I0124 00:32:33.967248 2742 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:33.967674 kubelet[2742]: E0124 00:32:33.967637 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.170:6443/api/v1/nodes\": dial tcp 172.31.28.170:6443: connect: connection refused" node="ip-172-31-28-170" Jan 24 00:32:34.263257 kubelet[2742]: W0124 00:32:34.262918 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:34.263257 kubelet[2742]: E0124 00:32:34.263004 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-170&limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:34.647212 kubelet[2742]: W0124 00:32:34.647087 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:34.647212 kubelet[2742]: E0124 00:32:34.647152 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:34.842017 kubelet[2742]: E0124 00:32:34.841986 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:35.010819 kubelet[2742]: W0124 00:32:35.010674 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.170:6443: connect: connection refused Jan 24 00:32:35.010819 kubelet[2742]: E0124 00:32:35.010721 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.170:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:35.122347 kubelet[2742]: E0124 00:32:35.122036 2742 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:35.422830 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:32:35.755124 kubelet[2742]: E0124 00:32:35.755023 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:36.953738 kubelet[2742]: E0124 00:32:36.953700 2742 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-28-170" not found Jan 24 00:32:36.991902 kubelet[2742]: E0124 00:32:36.991859 2742 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-170\" not found" node="ip-172-31-28-170" Jan 24 00:32:37.169901 kubelet[2742]: I0124 00:32:37.169872 2742 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:37.184699 kubelet[2742]: I0124 00:32:37.184639 2742 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-170" Jan 24 00:32:37.184699 kubelet[2742]: E0124 00:32:37.184683 2742 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-170\": node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.195888 kubelet[2742]: E0124 00:32:37.195836 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.296806 kubelet[2742]: E0124 00:32:37.296655 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.397489 kubelet[2742]: E0124 00:32:37.397393 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.498669 kubelet[2742]: E0124 00:32:37.498415 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.599426 kubelet[2742]: E0124 00:32:37.599251 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.700100 kubelet[2742]: E0124 00:32:37.700050 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.801175 kubelet[2742]: E0124 00:32:37.801139 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:37.901813 kubelet[2742]: E0124 00:32:37.901307 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.002069 kubelet[2742]: E0124 00:32:38.002028 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.102344 kubelet[2742]: E0124 00:32:38.102295 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.203350 kubelet[2742]: E0124 00:32:38.203236 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.304436 kubelet[2742]: E0124 00:32:38.304393 2742 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.404647 kubelet[2742]: E0124 00:32:38.404592 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.506099 kubelet[2742]: E0124 00:32:38.505951 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.607106 kubelet[2742]: E0124 00:32:38.607023 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.708083 kubelet[2742]: E0124 00:32:38.708040 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.781327 systemd[1]: Reloading requested from client PID 3024 ('systemctl') (unit session-5.scope)... Jan 24 00:32:38.781350 systemd[1]: Reloading... Jan 24 00:32:38.808974 kubelet[2742]: E0124 00:32:38.808906 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:38.896863 zram_generator::config[3063]: No configuration found. Jan 24 00:32:38.910386 kubelet[2742]: E0124 00:32:38.910347 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:39.011365 kubelet[2742]: E0124 00:32:39.011279 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:39.037814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:39.112085 kubelet[2742]: E0124 00:32:39.112036 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-170\" not found" Jan 24 00:32:39.140811 systemd[1]: Reloading finished in 358 ms. Jan 24 00:32:39.177535 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:39.190620 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:32:39.191098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:39.195281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:39.462978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:39.470458 (kubelet)[3124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:32:39.552823 kubelet[3124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:32:39.552823 kubelet[3124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:32:39.552823 kubelet[3124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:32:39.553463 kubelet[3124]: I0124 00:32:39.552890 3124 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:32:39.565553 kubelet[3124]: I0124 00:32:39.564459 3124 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:32:39.565553 kubelet[3124]: I0124 00:32:39.564486 3124 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:32:39.565553 kubelet[3124]: I0124 00:32:39.564955 3124 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:32:39.566749 kubelet[3124]: I0124 00:32:39.566721 3124 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:32:39.570200 kubelet[3124]: I0124 00:32:39.569920 3124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:32:39.573950 kubelet[3124]: E0124 00:32:39.573864 3124 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:32:39.573950 kubelet[3124]: I0124 00:32:39.573897 3124 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:32:39.576114 kubelet[3124]: I0124 00:32:39.576085 3124 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:32:39.576363 kubelet[3124]: I0124 00:32:39.576314 3124 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:32:39.576553 kubelet[3124]: I0124 00:32:39.576342 3124 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-170","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:32:39.576651 kubelet[3124]: I0124 00:32:39.576555 3124 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 24 00:32:39.576651 kubelet[3124]: I0124 00:32:39.576565 3124 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:32:39.576651 kubelet[3124]: I0124 00:32:39.576608 3124 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:39.576758 kubelet[3124]: I0124 00:32:39.576740 3124 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:32:39.578795 kubelet[3124]: I0124 00:32:39.577521 3124 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:32:39.578795 kubelet[3124]: I0124 00:32:39.577557 3124 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:32:39.578795 kubelet[3124]: I0124 00:32:39.577571 3124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:32:39.588791 kubelet[3124]: I0124 00:32:39.588133 3124 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:32:39.588791 kubelet[3124]: I0124 00:32:39.588568 3124 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:32:39.589357 kubelet[3124]: I0124 00:32:39.589341 3124 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:32:39.589507 kubelet[3124]: I0124 00:32:39.589499 3124 server.go:1287] "Started kubelet" Jan 24 00:32:39.590371 kubelet[3124]: I0124 00:32:39.590333 3124 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:32:39.593589 kubelet[3124]: I0124 00:32:39.593566 3124 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:32:39.595798 kubelet[3124]: I0124 00:32:39.595689 3124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:32:39.596964 kubelet[3124]: I0124 00:32:39.596949 3124 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:32:39.597701 kubelet[3124]: I0124 00:32:39.597684 3124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:32:39.601726 kubelet[3124]: I0124 00:32:39.601694 3124 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:32:39.606815 kubelet[3124]: I0124 00:32:39.605261 3124 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:32:39.611004 kubelet[3124]: I0124 00:32:39.610981 3124 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:32:39.611103 kubelet[3124]: I0124 00:32:39.611019 3124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:32:39.612996 kubelet[3124]: I0124 00:32:39.612937 3124 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:32:39.613110 kubelet[3124]: I0124 00:32:39.613102 3124 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:32:39.613238 kubelet[3124]: I0124 00:32:39.613217 3124 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:32:39.619929 kubelet[3124]: I0124 00:32:39.619895 3124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:32:39.621323 kubelet[3124]: I0124 00:32:39.621174 3124 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:32:39.621475 kubelet[3124]: I0124 00:32:39.621464 3124 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:32:39.621538 kubelet[3124]: I0124 00:32:39.621529 3124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:32:39.621581 kubelet[3124]: I0124 00:32:39.621576 3124 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:32:39.621668 kubelet[3124]: E0124 00:32:39.621652 3124 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:32:39.674003 kubelet[3124]: I0124 00:32:39.673976 3124 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:32:39.674178 kubelet[3124]: I0124 00:32:39.674166 3124 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:32:39.674262 kubelet[3124]: I0124 00:32:39.674255 3124 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:39.674578 kubelet[3124]: I0124 00:32:39.674557 3124 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:32:39.674677 kubelet[3124]: I0124 00:32:39.674650 3124 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:32:39.674742 kubelet[3124]: I0124 00:32:39.674736 3124 policy_none.go:49] "None policy: Start" Jan 24 00:32:39.674886 kubelet[3124]: I0124 00:32:39.674877 3124 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:32:39.674952 kubelet[3124]: I0124 00:32:39.674945 3124 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:32:39.675217 kubelet[3124]: I0124 00:32:39.675206 3124 state_mem.go:75] "Updated machine memory state" Jan 24 00:32:39.683984 kubelet[3124]: I0124 00:32:39.683963 3124 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:32:39.684533 kubelet[3124]: I0124 00:32:39.684517 3124 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:32:39.684642 kubelet[3124]: I0124 00:32:39.684617 3124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:32:39.685055 kubelet[3124]: I0124 00:32:39.685040 3124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:32:39.694193 kubelet[3124]: E0124 00:32:39.694171 3124 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:32:39.724958 kubelet[3124]: I0124 00:32:39.724856 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-170" Jan 24 00:32:39.726789 kubelet[3124]: I0124 00:32:39.726158 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.728129 kubelet[3124]: I0124 00:32:39.727171 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:39.795250 kubelet[3124]: I0124 00:32:39.795217 3124 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-170" Jan 24 00:32:39.806428 kubelet[3124]: I0124 00:32:39.806174 3124 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-170" Jan 24 00:32:39.806428 kubelet[3124]: I0124 00:32:39.806252 3124 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-170" Jan 24 00:32:39.812043 kubelet[3124]: I0124 00:32:39.812010 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.812173 kubelet[3124]: I0124 00:32:39.812051 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.812173 kubelet[3124]: I0124 00:32:39.812068 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.812173 kubelet[3124]: I0124 00:32:39.812084 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.812173 kubelet[3124]: I0124 00:32:39.812103 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-ca-certs\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:39.812173 kubelet[3124]: I0124 00:32:39.812117 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:39.812305 kubelet[3124]: I0124 00:32:39.812132 
3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2552e41acec7a664b167a77d719c13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-170\" (UID: \"ab2552e41acec7a664b167a77d719c13\") " pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:39.812305 kubelet[3124]: I0124 00:32:39.812146 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91c77364a5dcc263c3ad264a3810df9b-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-170\" (UID: \"91c77364a5dcc263c3ad264a3810df9b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-170" Jan 24 00:32:39.812305 kubelet[3124]: I0124 00:32:39.812161 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b2c9c10fbeb034c4316933e19c4be35-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-170\" (UID: \"7b2c9c10fbeb034c4316933e19c4be35\") " pod="kube-system/kube-scheduler-ip-172-31-28-170" Jan 24 00:32:40.581157 kubelet[3124]: I0124 00:32:40.581116 3124 apiserver.go:52] "Watching apiserver" Jan 24 00:32:40.611816 kubelet[3124]: I0124 00:32:40.611771 3124 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:32:40.657073 kubelet[3124]: I0124 00:32:40.657043 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-170" Jan 24 00:32:40.657919 kubelet[3124]: I0124 00:32:40.657894 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:40.670579 kubelet[3124]: E0124 00:32:40.670518 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-170\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-170" Jan 24 00:32:40.670865 kubelet[3124]: E0124 00:32:40.670843 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-170\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-170" Jan 24 00:32:40.710094 kubelet[3124]: I0124 00:32:40.709995 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-170" podStartSLOduration=1.7099744000000001 podStartE2EDuration="1.7099744s" podCreationTimestamp="2026-01-24 00:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:40.709419503 +0000 UTC m=+1.229633585" watchObservedRunningTime="2026-01-24 00:32:40.7099744 +0000 UTC m=+1.230188484" Jan 24 00:32:40.710557 kubelet[3124]: I0124 00:32:40.710135 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-170" podStartSLOduration=1.710126477 podStartE2EDuration="1.710126477s" podCreationTimestamp="2026-01-24 00:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:40.696720729 +0000 UTC m=+1.216934815" watchObservedRunningTime="2026-01-24 00:32:40.710126477 +0000 UTC m=+1.230340564" Jan 24 00:32:40.722790 kubelet[3124]: I0124 00:32:40.722057 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-28-170" podStartSLOduration=1.722034416 podStartE2EDuration="1.722034416s" podCreationTimestamp="2026-01-24 00:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:40.721955713 +0000 UTC m=+1.242169824" watchObservedRunningTime="2026-01-24 00:32:40.722034416 +0000 UTC m=+1.242248505" Jan 24 00:32:40.903343 sudo[2260]: pam_unix(sudo:session): session closed for user root Jan 24 00:32:40.981056 sshd[2257]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:40.986459 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:32:40.988011 systemd[1]: sshd@4-172.31.28.170:22-4.153.228.146:36384.service: Deactivated successfully. Jan 24 00:32:40.990468 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:32:40.990730 systemd[1]: session-5.scope: Consumed 3.473s CPU time, 143.1M memory peak, 0B memory swap peak. Jan 24 00:32:40.991933 systemd-logind[1963]: Removed session 5. Jan 24 00:32:44.363960 kubelet[3124]: I0124 00:32:44.363914 3124 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:32:44.364378 containerd[1979]: time="2026-01-24T00:32:44.364340989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:32:44.364704 kubelet[3124]: I0124 00:32:44.364588 3124 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:32:45.079203 systemd[1]: Created slice kubepods-besteffort-pod5f71f588_01aa_42e5_bc6b_95abf5eb1e36.slice - libcontainer container kubepods-besteffort-pod5f71f588_01aa_42e5_bc6b_95abf5eb1e36.slice. Jan 24 00:32:45.098356 systemd[1]: Created slice kubepods-burstable-podd9cbc66c_94d9_4487_a5ec_f42c93ec2175.slice - libcontainer container kubepods-burstable-podd9cbc66c_94d9_4487_a5ec_f42c93ec2175.slice. 
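The pod CIDR 192.168.0.0/24 that the kubelet pushes to containerd above is read from this node's object in the API server; the CNI configuration itself does not exist yet at this point, which is why containerd logs that it will wait for another component to drop it. As a sketch (assuming the usual controller-manager CIDR allocation, which the log does not show directly), the relevant part of the node object would look like:

    apiVersion: v1
    kind: Node
    metadata:
      name: ip-172-31-28-170
    spec:
      podCIDR: 192.168.0.0/24   # per-node range handed to the container runtime via CRI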
Jan 24 00:32:45.144180 kubelet[3124]: I0124 00:32:45.144110 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-flannel-cfg\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144180 kubelet[3124]: I0124 00:32:45.144154 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk44t\" (UniqueName: \"kubernetes.io/projected/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-kube-api-access-pk44t\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144180 kubelet[3124]: I0124 00:32:45.144177 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f71f588-01aa-42e5-bc6b-95abf5eb1e36-lib-modules\") pod \"kube-proxy-6k7jt\" (UID: \"5f71f588-01aa-42e5-bc6b-95abf5eb1e36\") " pod="kube-system/kube-proxy-6k7jt" Jan 24 00:32:45.144180 kubelet[3124]: I0124 00:32:45.144192 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-cni-plugin\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144511 kubelet[3124]: I0124 00:32:45.144210 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f71f588-01aa-42e5-bc6b-95abf5eb1e36-kube-proxy\") pod \"kube-proxy-6k7jt\" (UID: \"5f71f588-01aa-42e5-bc6b-95abf5eb1e36\") " pod="kube-system/kube-proxy-6k7jt" Jan 24 00:32:45.144511 kubelet[3124]: I0124 00:32:45.144226 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-run\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144511 kubelet[3124]: I0124 00:32:45.144242 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-xtables-lock\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144511 kubelet[3124]: I0124 00:32:45.144260 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/d9cbc66c-94d9-4487-a5ec-f42c93ec2175-cni\") pod \"kube-flannel-ds-j5rwq\" (UID: \"d9cbc66c-94d9-4487-a5ec-f42c93ec2175\") " pod="kube-flannel/kube-flannel-ds-j5rwq" Jan 24 00:32:45.144511 kubelet[3124]: I0124 00:32:45.144276 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f71f588-01aa-42e5-bc6b-95abf5eb1e36-xtables-lock\") pod \"kube-proxy-6k7jt\" (UID: \"5f71f588-01aa-42e5-bc6b-95abf5eb1e36\") " pod="kube-system/kube-proxy-6k7jt" Jan 24 00:32:45.144637 kubelet[3124]: I0124 00:32:45.144299 3124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqch\" (UniqueName: \"kubernetes.io/projected/5f71f588-01aa-42e5-bc6b-95abf5eb1e36-kube-api-access-xfqch\") pod \"kube-proxy-6k7jt\" (UID: \"5f71f588-01aa-42e5-bc6b-95abf5eb1e36\") " pod="kube-system/kube-proxy-6k7jt" Jan 24 00:32:45.396164 containerd[1979]: time="2026-01-24T00:32:45.396006787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k7jt,Uid:5f71f588-01aa-42e5-bc6b-95abf5eb1e36,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:45.409005 containerd[1979]: time="2026-01-24T00:32:45.408124770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-j5rwq,Uid:d9cbc66c-94d9-4487-a5ec-f42c93ec2175,Namespace:kube-flannel,Attempt:0,}" Jan 24 00:32:45.452074 containerd[1979]: time="2026-01-24T00:32:45.451272214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:45.454704 containerd[1979]: time="2026-01-24T00:32:45.453892643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:45.454704 containerd[1979]: time="2026-01-24T00:32:45.453937602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:45.454704 containerd[1979]: time="2026-01-24T00:32:45.454048281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:45.510813 containerd[1979]: time="2026-01-24T00:32:45.510021778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:45.510813 containerd[1979]: time="2026-01-24T00:32:45.510155153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:45.510813 containerd[1979]: time="2026-01-24T00:32:45.510189026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:45.510343 systemd[1]: Started cri-containerd-4af65eac4f5228f92d3d35a3e8846d89ec9f176e44e301b402572d6700a095d5.scope - libcontainer container 4af65eac4f5228f92d3d35a3e8846d89ec9f176e44e301b402572d6700a095d5. Jan 24 00:32:45.512259 containerd[1979]: time="2026-01-24T00:32:45.510833479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:45.537403 systemd[1]: Started cri-containerd-365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f.scope - libcontainer container 365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f. 
Jan 24 00:32:45.566225 containerd[1979]: time="2026-01-24T00:32:45.565974908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k7jt,Uid:5f71f588-01aa-42e5-bc6b-95abf5eb1e36,Namespace:kube-system,Attempt:0,} returns sandbox id \"4af65eac4f5228f92d3d35a3e8846d89ec9f176e44e301b402572d6700a095d5\"" Jan 24 00:32:45.571231 containerd[1979]: time="2026-01-24T00:32:45.571177053Z" level=info msg="CreateContainer within sandbox \"4af65eac4f5228f92d3d35a3e8846d89ec9f176e44e301b402572d6700a095d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:32:45.609045 containerd[1979]: time="2026-01-24T00:32:45.608997355Z" level=info msg="CreateContainer within sandbox \"4af65eac4f5228f92d3d35a3e8846d89ec9f176e44e301b402572d6700a095d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fbbc87e610812c98d5a4621889f5f4bbb97464cc12b5205e2b6b893c72ea9230\"" Jan 24 00:32:45.611699 containerd[1979]: time="2026-01-24T00:32:45.611658445Z" level=info msg="StartContainer for \"fbbc87e610812c98d5a4621889f5f4bbb97464cc12b5205e2b6b893c72ea9230\"" Jan 24 00:32:45.621603 containerd[1979]: time="2026-01-24T00:32:45.621462267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-j5rwq,Uid:d9cbc66c-94d9-4487-a5ec-f42c93ec2175,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\"" Jan 24 00:32:45.627705 containerd[1979]: time="2026-01-24T00:32:45.627570568Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 24 00:32:45.657005 systemd[1]: Started cri-containerd-fbbc87e610812c98d5a4621889f5f4bbb97464cc12b5205e2b6b893c72ea9230.scope - libcontainer container fbbc87e610812c98d5a4621889f5f4bbb97464cc12b5205e2b6b893c72ea9230. Jan 24 00:32:45.700680 containerd[1979]: time="2026-01-24T00:32:45.700636165Z" level=info msg="StartContainer for \"fbbc87e610812c98d5a4621889f5f4bbb97464cc12b5205e2b6b893c72ea9230\" returns successfully" Jan 24 00:32:46.698369 kubelet[3124]: I0124 00:32:46.698270 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6k7jt" podStartSLOduration=1.6982245969999998 podStartE2EDuration="1.698224597s" podCreationTimestamp="2026-01-24 00:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:46.697747029 +0000 UTC m=+7.217961117" watchObservedRunningTime="2026-01-24 00:32:46.698224597 +0000 UTC m=+7.218438687" Jan 24 00:32:47.247466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548722955.mount: Deactivated successfully. 
Jan 24 00:32:47.302827 containerd[1979]: time="2026-01-24T00:32:47.302743932Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:47.304752 containerd[1979]: time="2026-01-24T00:32:47.304539630Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 24 00:32:47.308087 containerd[1979]: time="2026-01-24T00:32:47.306835650Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:47.310223 containerd[1979]: time="2026-01-24T00:32:47.310175758Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:47.312796 containerd[1979]: time="2026-01-24T00:32:47.311878492Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.684261561s" Jan 24 00:32:47.312796 containerd[1979]: time="2026-01-24T00:32:47.311922984Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 24 00:32:47.318496 containerd[1979]: time="2026-01-24T00:32:47.318452115Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 24 00:32:47.344412 containerd[1979]: time="2026-01-24T00:32:47.344366697Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2\"" Jan 24 00:32:47.345418 containerd[1979]: time="2026-01-24T00:32:47.345378691Z" level=info msg="StartContainer for \"2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2\"" Jan 24 00:32:47.375985 systemd[1]: Started cri-containerd-2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2.scope - libcontainer container 2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2. Jan 24 00:32:47.405998 systemd[1]: cri-containerd-2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2.scope: Deactivated successfully. Jan 24 00:32:47.407880 containerd[1979]: time="2026-01-24T00:32:47.407720037Z" level=info msg="StartContainer for \"2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2\" returns successfully" Jan 24 00:32:47.432718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2-rootfs.mount: Deactivated successfully. 
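The install-cni-plugin container that just ran and exited copies the flannel CNI binary onto the host. The container name, image tag, and the "cni-plugin" host-path volume all appear in the log above, while the command and mount path below follow the upstream kube-flannel manifest and should be read as assumptions:

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]   # assumed: upstream manifest default
      volumeMounts:
      - name: cni-plugin          # host-path volume mounted earlier in the log
        mountPath: /opt/cni/bin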
Jan 24 00:32:47.479933 containerd[1979]: time="2026-01-24T00:32:47.479874360Z" level=info msg="shim disconnected" id=2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2 namespace=k8s.io Jan 24 00:32:47.480175 containerd[1979]: time="2026-01-24T00:32:47.480135806Z" level=warning msg="cleaning up after shim disconnected" id=2adab5835d6494dd4427faec434020d637f703eacc65aaaa53cf8f9b08ff03e2 namespace=k8s.io Jan 24 00:32:47.480175 containerd[1979]: time="2026-01-24T00:32:47.480153013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:32:47.683477 containerd[1979]: time="2026-01-24T00:32:47.683369780Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 24 00:32:49.519040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557389281.mount: Deactivated successfully. Jan 24 00:32:49.576529 update_engine[1967]: I20260124 00:32:49.576462 1967 update_attempter.cc:509] Updating boot flags... Jan 24 00:32:49.661984 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3519) Jan 24 00:32:49.964912 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3518) Jan 24 00:32:50.742050 containerd[1979]: time="2026-01-24T00:32:50.741996862Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:50.744047 containerd[1979]: time="2026-01-24T00:32:50.743800442Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 24 00:32:50.746505 containerd[1979]: time="2026-01-24T00:32:50.746044636Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:50.750217 containerd[1979]: time="2026-01-24T00:32:50.750173545Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:50.751404 containerd[1979]: time="2026-01-24T00:32:50.751364504Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.067951608s" Jan 24 00:32:50.751578 containerd[1979]: time="2026-01-24T00:32:50.751554895Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 24 00:32:50.754690 containerd[1979]: time="2026-01-24T00:32:50.754649549Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:32:50.784224 containerd[1979]: time="2026-01-24T00:32:50.784173264Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91\"" Jan 24 00:32:50.784980 containerd[1979]: time="2026-01-24T00:32:50.784952794Z" level=info msg="StartContainer for 
\"9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91\"" Jan 24 00:32:50.814581 systemd[1]: run-containerd-runc-k8s.io-9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91-runc.CFszNW.mount: Deactivated successfully. Jan 24 00:32:50.826023 systemd[1]: Started cri-containerd-9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91.scope - libcontainer container 9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91. Jan 24 00:32:50.866717 systemd[1]: cri-containerd-9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91.scope: Deactivated successfully. Jan 24 00:32:50.871244 containerd[1979]: time="2026-01-24T00:32:50.871198716Z" level=info msg="StartContainer for \"9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91\" returns successfully" Jan 24 00:32:50.945683 kubelet[3124]: I0124 00:32:50.945404 3124 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:32:50.975576 kubelet[3124]: I0124 00:32:50.974812 3124 status_manager.go:890] "Failed to get status for pod" podUID="d3e5c367-d627-44d2-a25a-69f5ef368df2" pod="kube-system/coredns-668d6bf9bc-jjfzx" err="pods \"coredns-668d6bf9bc-jjfzx\" is forbidden: User \"system:node:ip-172-31-28-170\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-170' and this object" Jan 24 00:32:50.975576 kubelet[3124]: W0124 00:32:50.975494 3124 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-28-170" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-170' and this object Jan 24 00:32:50.975576 kubelet[3124]: E0124 00:32:50.975526 3124 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-28-170\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-170' and this object" logger="UnhandledError" Jan 24 00:32:50.983249 systemd[1]: Created slice kubepods-burstable-podd3e5c367_d627_44d2_a25a_69f5ef368df2.slice - libcontainer container kubepods-burstable-podd3e5c367_d627_44d2_a25a_69f5ef368df2.slice. 
Jan 24 00:32:50.986437 kubelet[3124]: I0124 00:32:50.986397 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600dd8ef-3ff7-435b-9edf-7c77fb4dee22-config-volume\") pod \"coredns-668d6bf9bc-s6zk7\" (UID: \"600dd8ef-3ff7-435b-9edf-7c77fb4dee22\") " pod="kube-system/coredns-668d6bf9bc-s6zk7" Jan 24 00:32:50.986577 kubelet[3124]: I0124 00:32:50.986445 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2xt\" (UniqueName: \"kubernetes.io/projected/600dd8ef-3ff7-435b-9edf-7c77fb4dee22-kube-api-access-9k2xt\") pod \"coredns-668d6bf9bc-s6zk7\" (UID: \"600dd8ef-3ff7-435b-9edf-7c77fb4dee22\") " pod="kube-system/coredns-668d6bf9bc-s6zk7" Jan 24 00:32:50.986577 kubelet[3124]: I0124 00:32:50.986471 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e5c367-d627-44d2-a25a-69f5ef368df2-config-volume\") pod \"coredns-668d6bf9bc-jjfzx\" (UID: \"d3e5c367-d627-44d2-a25a-69f5ef368df2\") " pod="kube-system/coredns-668d6bf9bc-jjfzx" Jan 24 00:32:50.986577 kubelet[3124]: I0124 00:32:50.986498 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4d6c\" (UniqueName: \"kubernetes.io/projected/d3e5c367-d627-44d2-a25a-69f5ef368df2-kube-api-access-l4d6c\") pod \"coredns-668d6bf9bc-jjfzx\" (UID: \"d3e5c367-d627-44d2-a25a-69f5ef368df2\") " pod="kube-system/coredns-668d6bf9bc-jjfzx" Jan 24 00:32:50.993955 systemd[1]: Created slice kubepods-burstable-pod600dd8ef_3ff7_435b_9edf_7c77fb4dee22.slice - libcontainer container kubepods-burstable-pod600dd8ef_3ff7_435b_9edf_7c77fb4dee22.slice. Jan 24 00:32:51.047479 containerd[1979]: time="2026-01-24T00:32:51.047421066Z" level=info msg="shim disconnected" id=9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91 namespace=k8s.io Jan 24 00:32:51.047479 containerd[1979]: time="2026-01-24T00:32:51.047479439Z" level=warning msg="cleaning up after shim disconnected" id=9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91 namespace=k8s.io Jan 24 00:32:51.047479 containerd[1979]: time="2026-01-24T00:32:51.047488317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:32:51.693113 containerd[1979]: time="2026-01-24T00:32:51.692949813Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 24 00:32:51.726513 containerd[1979]: time="2026-01-24T00:32:51.726447483Z" level=info msg="CreateContainer within sandbox \"365f89050065f5293d7e9a8fbd7643d051d8f7ba5f7a8408b19a5f6c2813242f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"1cb6eb713f3d4c04e2594d431a58d5e6e97461793a3c8314c8d37e6847fef1e6\"" Jan 24 00:32:51.727194 containerd[1979]: time="2026-01-24T00:32:51.727030378Z" level=info msg="StartContainer for \"1cb6eb713f3d4c04e2594d431a58d5e6e97461793a3c8314c8d37e6847fef1e6\"" Jan 24 00:32:51.757991 systemd[1]: Started cri-containerd-1cb6eb713f3d4c04e2594d431a58d5e6e97461793a3c8314c8d37e6847fef1e6.scope - libcontainer container 1cb6eb713f3d4c04e2594d431a58d5e6e97461793a3c8314c8d37e6847fef1e6. 
Jan 24 00:32:51.785574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e94e862cbe6d822ecc9acf83c3578b0fa9696177a244042438c39d310bdcc91-rootfs.mount: Deactivated successfully. Jan 24 00:32:51.802820 containerd[1979]: time="2026-01-24T00:32:51.802749813Z" level=info msg="StartContainer for \"1cb6eb713f3d4c04e2594d431a58d5e6e97461793a3c8314c8d37e6847fef1e6\" returns successfully" Jan 24 00:32:52.087847 kubelet[3124]: E0124 00:32:52.087796 3124 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:32:52.088412 kubelet[3124]: E0124 00:32:52.087902 3124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e5c367-d627-44d2-a25a-69f5ef368df2-config-volume podName:d3e5c367-d627-44d2-a25a-69f5ef368df2 nodeName:}" failed. No retries permitted until 2026-01-24 00:32:52.587881082 +0000 UTC m=+13.108095150 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e5c367-d627-44d2-a25a-69f5ef368df2-config-volume") pod "coredns-668d6bf9bc-jjfzx" (UID: "d3e5c367-d627-44d2-a25a-69f5ef368df2") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:32:52.088412 kubelet[3124]: E0124 00:32:52.087800 3124 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:32:52.088412 kubelet[3124]: E0124 00:32:52.088138 3124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/600dd8ef-3ff7-435b-9edf-7c77fb4dee22-config-volume podName:600dd8ef-3ff7-435b-9edf-7c77fb4dee22 nodeName:}" failed. No retries permitted until 2026-01-24 00:32:52.588126696 +0000 UTC m=+13.108340763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/600dd8ef-3ff7-435b-9edf-7c77fb4dee22-config-volume") pod "coredns-668d6bf9bc-s6zk7" (UID: "600dd8ef-3ff7-435b-9edf-7c77fb4dee22") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:32:52.709838 kubelet[3124]: I0124 00:32:52.709778 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-j5rwq" podStartSLOduration=2.583210089 podStartE2EDuration="7.709753101s" podCreationTimestamp="2026-01-24 00:32:45 +0000 UTC" firstStartedPulling="2026-01-24 00:32:45.626177393 +0000 UTC m=+6.146391466" lastFinishedPulling="2026-01-24 00:32:50.752720394 +0000 UTC m=+11.272934478" observedRunningTime="2026-01-24 00:32:52.707943199 +0000 UTC m=+13.228157305" watchObservedRunningTime="2026-01-24 00:32:52.709753101 +0000 UTC m=+13.229967188" Jan 24 00:32:52.790079 containerd[1979]: time="2026-01-24T00:32:52.790039797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjfzx,Uid:d3e5c367-d627-44d2-a25a-69f5ef368df2,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:52.800856 containerd[1979]: time="2026-01-24T00:32:52.800812725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zk7,Uid:600dd8ef-3ff7-435b-9edf-7c77fb4dee22,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:52.856185 (udev-worker)[3527]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:32:52.875636 systemd-networkd[1906]: flannel.1: Link UP Jan 24 00:32:52.875643 systemd-networkd[1906]: flannel.1: Gained carrier Jan 24 00:32:52.956093 systemd[1]: run-netns-cni\x2dd8cb39ab\x2dffd8\x2df053\x2d6f3d\x2dbd6cb5216920.mount: Deactivated successfully. Jan 24 00:32:52.956243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2-shm.mount: Deactivated successfully. Jan 24 00:32:52.961374 containerd[1979]: time="2026-01-24T00:32:52.961248853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjfzx,Uid:d3e5c367-d627-44d2-a25a-69f5ef368df2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:32:52.962623 kubelet[3124]: E0124 00:32:52.962568 3124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:32:52.962870 kubelet[3124]: E0124 00:32:52.962653 3124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jjfzx" Jan 24 00:32:52.962870 kubelet[3124]: E0124 00:32:52.962691 3124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jjfzx" Jan 24 00:32:52.964584 containerd[1979]: time="2026-01-24T00:32:52.964538297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zk7,Uid:600dd8ef-3ff7-435b-9edf-7c77fb4dee22,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:32:52.965291 kubelet[3124]: E0124 00:32:52.962738 3124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jjfzx_kube-system(d3e5c367-d627-44d2-a25a-69f5ef368df2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jjfzx_kube-system(d3e5c367-d627-44d2-a25a-69f5ef368df2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53209d01949431a4ffec93272e4656729d89d350117ec878bd38c7ec4a8395b2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-jjfzx" podUID="d3e5c367-d627-44d2-a25a-69f5ef368df2" Jan 24 00:32:52.965662 systemd[1]: 
run-netns-cni\x2d42550204\x2d9fa4\x2d1ea8\x2dfe2d\x2d16f5fdb1423c.mount: Deactivated successfully. Jan 24 00:32:52.966007 kubelet[3124]: E0124 00:32:52.965922 3124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:32:52.966084 kubelet[3124]: E0124 00:32:52.966029 3124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-s6zk7" Jan 24 00:32:52.966084 kubelet[3124]: E0124 00:32:52.966059 3124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-s6zk7" Jan 24 00:32:52.966176 kubelet[3124]: E0124 00:32:52.966111 3124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s6zk7_kube-system(600dd8ef-3ff7-435b-9edf-7c77fb4dee22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s6zk7_kube-system(600dd8ef-3ff7-435b-9edf-7c77fb4dee22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-s6zk7" podUID="600dd8ef-3ff7-435b-9edf-7c77fb4dee22" Jan 24 00:32:52.966220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d8d504eec59e71c7c7faa7632a3b67a52d4b2c56c696a2e964d1320299822b6-shm.mount: Deactivated successfully. Jan 24 00:32:54.158981 systemd-networkd[1906]: flannel.1: Gained IPv6LL Jan 24 00:32:56.623380 ntpd[1955]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 24 00:32:56.623791 ntpd[1955]: 24 Jan 00:32:56 ntpd[1955]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 24 00:32:56.623791 ntpd[1955]: 24 Jan 00:32:56 ntpd[1955]: Listen normally on 8 flannel.1 [fe80::2023:15ff:fe2e:4b82%4]:123 Jan 24 00:32:56.623457 ntpd[1955]: Listen normally on 8 flannel.1 [fe80::2023:15ff:fe2e:4b82%4]:123 Jan 24 00:33:05.623277 containerd[1979]: time="2026-01-24T00:33:05.622901110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zk7,Uid:600dd8ef-3ff7-435b-9edf-7c77fb4dee22,Namespace:kube-system,Attempt:0,}" Jan 24 00:33:05.681669 systemd-networkd[1906]: cni0: Link UP Jan 24 00:33:05.681678 systemd-networkd[1906]: cni0: Gained carrier Jan 24 00:33:05.685715 (udev-worker)[3993]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:05.686720 systemd-networkd[1906]: cni0: Lost carrier Jan 24 00:33:05.694433 (udev-worker)[3996]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:33:05.696938 kernel: cni0: port 1(veth3ac8e39c) entered blocking state Jan 24 00:33:05.696971 kernel: cni0: port 1(veth3ac8e39c) entered disabled state Jan 24 00:33:05.694663 systemd-networkd[1906]: veth3ac8e39c: Link UP Jan 24 00:33:05.701381 kernel: veth3ac8e39c: entered allmulticast mode Jan 24 00:33:05.701443 kernel: veth3ac8e39c: entered promiscuous mode Jan 24 00:33:05.701482 kernel: cni0: port 1(veth3ac8e39c) entered blocking state Jan 24 00:33:05.702817 kernel: cni0: port 1(veth3ac8e39c) entered forwarding state Jan 24 00:33:05.704186 kernel: cni0: port 1(veth3ac8e39c) entered disabled state Jan 24 00:33:05.711851 kernel: cni0: port 1(veth3ac8e39c) entered blocking state Jan 24 00:33:05.711933 kernel: cni0: port 1(veth3ac8e39c) entered forwarding state Jan 24 00:33:05.711920 systemd-networkd[1906]: veth3ac8e39c: Gained carrier Jan 24 00:33:05.712452 systemd-networkd[1906]: cni0: Gained carrier Jan 24 00:33:05.717187 containerd[1979]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 24 00:33:05.717187 containerd[1979]: delegateAdd: netconf sent to delegate plugin: Jan 24 00:33:05.747358 containerd[1979]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-24T00:33:05.746541432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:05.747358 containerd[1979]: time="2026-01-24T00:33:05.746973700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:05.747358 containerd[1979]: time="2026-01-24T00:33:05.747001877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:05.748155 containerd[1979]: time="2026-01-24T00:33:05.748078424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:05.776104 systemd[1]: Started cri-containerd-39a93b44e5a15a3080805ff5461cdc26d8080e5b8c136e745ce5b3cdf105f8f3.scope - libcontainer container 39a93b44e5a15a3080805ff5461cdc26d8080e5b8c136e745ce5b3cdf105f8f3. Jan 24 00:33:05.822718 containerd[1979]: time="2026-01-24T00:33:05.822532312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zk7,Uid:600dd8ef-3ff7-435b-9edf-7c77fb4dee22,Namespace:kube-system,Attempt:0,} returns sandbox id \"39a93b44e5a15a3080805ff5461cdc26d8080e5b8c136e745ce5b3cdf105f8f3\"" Jan 24 00:33:05.826515 containerd[1979]: time="2026-01-24T00:33:05.826484599Z" level=info msg="CreateContainer within sandbox \"39a93b44e5a15a3080805ff5461cdc26d8080e5b8c136e745ce5b3cdf105f8f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:05.844791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419717885.mount: Deactivated successfully. 
Jan 24 00:33:05.851421 containerd[1979]: time="2026-01-24T00:33:05.851259674Z" level=info msg="CreateContainer within sandbox \"39a93b44e5a15a3080805ff5461cdc26d8080e5b8c136e745ce5b3cdf105f8f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b76b94dc89a480b21ff2e0b3513bf1b27df98339f005a5378de033b2c9c695b\"" Jan 24 00:33:05.852350 containerd[1979]: time="2026-01-24T00:33:05.852186568Z" level=info msg="StartContainer for \"3b76b94dc89a480b21ff2e0b3513bf1b27df98339f005a5378de033b2c9c695b\"" Jan 24 00:33:05.890044 systemd[1]: Started cri-containerd-3b76b94dc89a480b21ff2e0b3513bf1b27df98339f005a5378de033b2c9c695b.scope - libcontainer container 3b76b94dc89a480b21ff2e0b3513bf1b27df98339f005a5378de033b2c9c695b. Jan 24 00:33:05.920978 containerd[1979]: time="2026-01-24T00:33:05.920940722Z" level=info msg="StartContainer for \"3b76b94dc89a480b21ff2e0b3513bf1b27df98339f005a5378de033b2c9c695b\" returns successfully" Jan 24 00:33:06.636363 containerd[1979]: time="2026-01-24T00:33:06.636325565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjfzx,Uid:d3e5c367-d627-44d2-a25a-69f5ef368df2,Namespace:kube-system,Attempt:0,}" Jan 24 00:33:06.686319 (udev-worker)[4005]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:06.686471 systemd-networkd[1906]: vethed9b3ea9: Link UP Jan 24 00:33:06.689040 kernel: cni0: port 2(vethed9b3ea9) entered blocking state Jan 24 00:33:06.689076 kernel: cni0: port 2(vethed9b3ea9) entered disabled state Jan 24 00:33:06.689094 kernel: vethed9b3ea9: entered allmulticast mode Jan 24 00:33:06.690392 kernel: vethed9b3ea9: entered promiscuous mode Jan 24 00:33:06.691022 kernel: cni0: port 2(vethed9b3ea9) entered blocking state Jan 24 00:33:06.691867 kernel: cni0: port 2(vethed9b3ea9) entered forwarding state Jan 24 00:33:06.693804 kernel: cni0: port 2(vethed9b3ea9) entered disabled state Jan 24 00:33:06.713207 kernel: cni0: port 2(vethed9b3ea9) entered blocking state Jan 24 00:33:06.713443 kernel: cni0: port 2(vethed9b3ea9) entered forwarding state Jan 24 00:33:06.713744 systemd-networkd[1906]: vethed9b3ea9: Gained carrier Jan 24 00:33:06.721514 containerd[1979]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 24 00:33:06.721514 containerd[1979]: delegateAdd: netconf sent to delegate plugin: Jan 24 00:33:06.753045 containerd[1979]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-24T00:33:06.752284677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:06.753045 containerd[1979]: time="2026-01-24T00:33:06.752358366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:06.753045 containerd[1979]: time="2026-01-24T00:33:06.752388542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.753045 containerd[1979]: time="2026-01-24T00:33:06.752507747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.787793 kubelet[3124]: I0124 00:33:06.787297 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s6zk7" podStartSLOduration=21.784711618 podStartE2EDuration="21.784711618s" podCreationTimestamp="2026-01-24 00:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:06.780657913 +0000 UTC m=+27.300872005" watchObservedRunningTime="2026-01-24 00:33:06.784711618 +0000 UTC m=+27.304925706" Jan 24 00:33:06.794615 systemd[1]: Started cri-containerd-2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95.scope - libcontainer container 2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95. Jan 24 00:33:06.847215 containerd[1979]: time="2026-01-24T00:33:06.847087281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjfzx,Uid:d3e5c367-d627-44d2-a25a-69f5ef368df2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95\"" Jan 24 00:33:06.855467 containerd[1979]: time="2026-01-24T00:33:06.855338119Z" level=info msg="CreateContainer within sandbox \"2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:06.878362 containerd[1979]: time="2026-01-24T00:33:06.878303387Z" level=info msg="CreateContainer within sandbox \"2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46c7bfe1d1fd0ad25023e45ba5ec3f9624ec20d3da9ec19930c1715830644333\"" Jan 24 00:33:06.879074 containerd[1979]: time="2026-01-24T00:33:06.879005628Z" level=info msg="StartContainer for \"46c7bfe1d1fd0ad25023e45ba5ec3f9624ec20d3da9ec19930c1715830644333\"" Jan 24 00:33:06.915995 systemd[1]: Started cri-containerd-46c7bfe1d1fd0ad25023e45ba5ec3f9624ec20d3da9ec19930c1715830644333.scope - libcontainer container 46c7bfe1d1fd0ad25023e45ba5ec3f9624ec20d3da9ec19930c1715830644333. Jan 24 00:33:06.950985 containerd[1979]: time="2026-01-24T00:33:06.950938404Z" level=info msg="StartContainer for \"46c7bfe1d1fd0ad25023e45ba5ec3f9624ec20d3da9ec19930c1715830644333\" returns successfully" Jan 24 00:33:07.278900 systemd-networkd[1906]: veth3ac8e39c: Gained IPv6LL Jan 24 00:33:07.599067 systemd-networkd[1906]: cni0: Gained IPv6LL Jan 24 00:33:07.640338 systemd[1]: run-containerd-runc-k8s.io-2c0f088777b2fefcd7a2baf2a6cb7b93d0e5850c89f9cd7101511b4807b51f95-runc.PonqW0.mount: Deactivated successfully. 
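The coredns sandboxes that failed at 00:32:52 with "open /run/flannel/subnet.env: no such file or directory" succeed on the retries above because the kube-flannel container has since written that file (and, typically, a CNI config under /etc/cni/net.d). The file contents are not printed in the log; a representative subnet.env, with values inferred from the 192.168.0.0/17 route, the node's 192.168.0.0/24 pod CIDR, and the MTU shown in the delegate configuration above (treat the exact values as assumptions), would be:

    # /run/flannel/subnet.env -- written by flanneld once it holds a subnet lease
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=8951
    FLANNEL_IPMASQ=true   # assumed; flanneld usually handles masquerade itself, matching ipMasq=false in the delegate config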
Jan 24 00:33:07.791706 kubelet[3124]: I0124 00:33:07.791645 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jjfzx" podStartSLOduration=22.791624588 podStartE2EDuration="22.791624588s" podCreationTimestamp="2026-01-24 00:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:07.772979089 +0000 UTC m=+28.293193176" watchObservedRunningTime="2026-01-24 00:33:07.791624588 +0000 UTC m=+28.311838675" Jan 24 00:33:07.983062 systemd-networkd[1906]: vethed9b3ea9: Gained IPv6LL Jan 24 00:33:10.623385 ntpd[1955]: Listen normally on 9 cni0 192.168.0.1:123 Jan 24 00:33:10.623467 ntpd[1955]: Listen normally on 10 cni0 [fe80::3455:caff:fe9d:5a51%5]:123 Jan 24 00:33:10.623825 ntpd[1955]: 24 Jan 00:33:10 ntpd[1955]: Listen normally on 9 cni0 192.168.0.1:123 Jan 24 00:33:10.623825 ntpd[1955]: 24 Jan 00:33:10 ntpd[1955]: Listen normally on 10 cni0 [fe80::3455:caff:fe9d:5a51%5]:123 Jan 24 00:33:10.623825 ntpd[1955]: 24 Jan 00:33:10 ntpd[1955]: Listen normally on 11 veth3ac8e39c [fe80::3c4d:31ff:fe11:4cf0%6]:123 Jan 24 00:33:10.623825 ntpd[1955]: 24 Jan 00:33:10 ntpd[1955]: Listen normally on 12 vethed9b3ea9 [fe80::28ef:c5ff:fe65:85d2%7]:123 Jan 24 00:33:10.623511 ntpd[1955]: Listen normally on 11 veth3ac8e39c [fe80::3c4d:31ff:fe11:4cf0%6]:123 Jan 24 00:33:10.623541 ntpd[1955]: Listen normally on 12 vethed9b3ea9 [fe80::28ef:c5ff:fe65:85d2%7]:123 Jan 24 00:33:13.121057 systemd[1]: Started sshd@5-172.31.28.170:22-4.153.228.146:45654.service - OpenSSH per-connection server daemon (4.153.228.146:45654). Jan 24 00:33:13.613034 sshd[4255]: Accepted publickey for core from 4.153.228.146 port 45654 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:13.614734 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:13.620290 systemd-logind[1963]: New session 6 of user core. Jan 24 00:33:13.624966 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:33:14.035605 sshd[4255]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:14.039794 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:33:14.040407 systemd[1]: sshd@5-172.31.28.170:22-4.153.228.146:45654.service: Deactivated successfully. Jan 24 00:33:14.042224 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:33:14.043675 systemd-logind[1963]: Removed session 6. Jan 24 00:33:19.134126 systemd[1]: Started sshd@6-172.31.28.170:22-4.153.228.146:49710.service - OpenSSH per-connection server daemon (4.153.228.146:49710). Jan 24 00:33:19.654952 sshd[4296]: Accepted publickey for core from 4.153.228.146 port 49710 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:19.656487 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:19.662920 systemd-logind[1963]: New session 7 of user core. Jan 24 00:33:19.672036 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:33:20.107688 sshd[4296]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:20.111922 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:33:20.112415 systemd[1]: sshd@6-172.31.28.170:22-4.153.228.146:49710.service: Deactivated successfully. Jan 24 00:33:20.114406 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 24 00:33:20.115665 systemd-logind[1963]: Removed session 7. Jan 24 00:33:25.204205 systemd[1]: Started sshd@7-172.31.28.170:22-4.153.228.146:32904.service - OpenSSH per-connection server daemon (4.153.228.146:32904). Jan 24 00:33:25.731423 sshd[4331]: Accepted publickey for core from 4.153.228.146 port 32904 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:25.733035 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:25.738496 systemd-logind[1963]: New session 8 of user core. Jan 24 00:33:25.746002 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:33:26.167454 sshd[4331]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:26.170953 systemd[1]: sshd@7-172.31.28.170:22-4.153.228.146:32904.service: Deactivated successfully. Jan 24 00:33:26.173233 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:33:26.174146 systemd-logind[1963]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:33:26.175048 systemd-logind[1963]: Removed session 8. Jan 24 00:33:26.263154 systemd[1]: Started sshd@8-172.31.28.170:22-4.153.228.146:32916.service - OpenSSH per-connection server daemon (4.153.228.146:32916). Jan 24 00:33:26.787343 sshd[4345]: Accepted publickey for core from 4.153.228.146 port 32916 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:26.788790 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:26.794597 systemd-logind[1963]: New session 9 of user core. Jan 24 00:33:26.798073 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:33:27.266046 sshd[4345]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:27.269879 systemd[1]: sshd@8-172.31.28.170:22-4.153.228.146:32916.service: Deactivated successfully. Jan 24 00:33:27.271695 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:33:27.272975 systemd-logind[1963]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:33:27.274262 systemd-logind[1963]: Removed session 9. Jan 24 00:33:27.350947 systemd[1]: Started sshd@9-172.31.28.170:22-4.153.228.146:32932.service - OpenSSH per-connection server daemon (4.153.228.146:32932). Jan 24 00:33:27.834840 sshd[4356]: Accepted publickey for core from 4.153.228.146 port 32932 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:27.836230 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:27.840460 systemd-logind[1963]: New session 10 of user core. Jan 24 00:33:27.844941 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:33:28.262132 sshd[4356]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:28.266897 systemd[1]: sshd@9-172.31.28.170:22-4.153.228.146:32932.service: Deactivated successfully. Jan 24 00:33:28.269164 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:33:28.270255 systemd-logind[1963]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:33:28.271451 systemd-logind[1963]: Removed session 10. Jan 24 00:33:33.360058 systemd[1]: Started sshd@10-172.31.28.170:22-4.153.228.146:32940.service - OpenSSH per-connection server daemon (4.153.228.146:32940). 
Jan 24 00:33:33.878335 sshd[4411]: Accepted publickey for core from 4.153.228.146 port 32940 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:33.879895 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:33.884898 systemd-logind[1963]: New session 11 of user core. Jan 24 00:33:33.887957 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:33:34.316589 sshd[4411]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:34.320019 systemd[1]: sshd@10-172.31.28.170:22-4.153.228.146:32940.service: Deactivated successfully. Jan 24 00:33:34.322373 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:33:34.324532 systemd-logind[1963]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:33:34.326339 systemd-logind[1963]: Removed session 11. Jan 24 00:33:34.398169 systemd[1]: Started sshd@11-172.31.28.170:22-4.153.228.146:32946.service - OpenSSH per-connection server daemon (4.153.228.146:32946). Jan 24 00:33:34.896528 sshd[4424]: Accepted publickey for core from 4.153.228.146 port 32946 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:34.898251 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:34.902794 systemd-logind[1963]: New session 12 of user core. Jan 24 00:33:34.906955 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:33:35.648141 sshd[4424]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:35.651994 systemd[1]: sshd@11-172.31.28.170:22-4.153.228.146:32946.service: Deactivated successfully. Jan 24 00:33:35.654009 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:33:35.654866 systemd-logind[1963]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:33:35.655960 systemd-logind[1963]: Removed session 12. Jan 24 00:33:35.742085 systemd[1]: Started sshd@12-172.31.28.170:22-4.153.228.146:37310.service - OpenSSH per-connection server daemon (4.153.228.146:37310). Jan 24 00:33:36.233801 sshd[4435]: Accepted publickey for core from 4.153.228.146 port 37310 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:36.235242 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:36.240060 systemd-logind[1963]: New session 13 of user core. Jan 24 00:33:36.241949 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:33:37.626369 sshd[4435]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:37.630443 systemd[1]: sshd@12-172.31.28.170:22-4.153.228.146:37310.service: Deactivated successfully. Jan 24 00:33:37.632994 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:33:37.635099 systemd-logind[1963]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:33:37.636876 systemd-logind[1963]: Removed session 13. Jan 24 00:33:37.728146 systemd[1]: Started sshd@13-172.31.28.170:22-4.153.228.146:37312.service - OpenSSH per-connection server daemon (4.153.228.146:37312). Jan 24 00:33:38.251555 sshd[4455]: Accepted publickey for core from 4.153.228.146 port 37312 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:38.253062 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:38.258445 systemd-logind[1963]: New session 14 of user core. Jan 24 00:33:38.263960 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:33:38.810230 sshd[4455]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:38.814081 systemd[1]: sshd@13-172.31.28.170:22-4.153.228.146:37312.service: Deactivated successfully. Jan 24 00:33:38.815963 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:33:38.816700 systemd-logind[1963]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:33:38.818122 systemd-logind[1963]: Removed session 14. Jan 24 00:33:38.896103 systemd[1]: Started sshd@14-172.31.28.170:22-4.153.228.146:37328.service - OpenSSH per-connection server daemon (4.153.228.146:37328). Jan 24 00:33:39.391241 sshd[4487]: Accepted publickey for core from 4.153.228.146 port 37328 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:39.392806 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:39.397427 systemd-logind[1963]: New session 15 of user core. Jan 24 00:33:39.401969 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:33:39.821200 sshd[4487]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:39.825185 systemd[1]: sshd@14-172.31.28.170:22-4.153.228.146:37328.service: Deactivated successfully. Jan 24 00:33:39.828361 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:33:39.829477 systemd-logind[1963]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:33:39.830974 systemd-logind[1963]: Removed session 15. Jan 24 00:33:44.907965 systemd[1]: Started sshd@15-172.31.28.170:22-4.153.228.146:39148.service - OpenSSH per-connection server daemon (4.153.228.146:39148). Jan 24 00:33:45.399409 sshd[4525]: Accepted publickey for core from 4.153.228.146 port 39148 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:45.400986 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:45.406712 systemd-logind[1963]: New session 16 of user core. Jan 24 00:33:45.413003 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:33:45.815817 sshd[4525]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:45.819210 systemd[1]: sshd@15-172.31.28.170:22-4.153.228.146:39148.service: Deactivated successfully. Jan 24 00:33:45.822149 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:33:45.823725 systemd-logind[1963]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:33:45.825247 systemd-logind[1963]: Removed session 16. Jan 24 00:33:50.906164 systemd[1]: Started sshd@16-172.31.28.170:22-4.153.228.146:39160.service - OpenSSH per-connection server daemon (4.153.228.146:39160). Jan 24 00:33:51.406525 sshd[4561]: Accepted publickey for core from 4.153.228.146 port 39160 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:51.408104 sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:51.412924 systemd-logind[1963]: New session 17 of user core. Jan 24 00:33:51.418020 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:33:51.815218 sshd[4561]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:51.818989 systemd[1]: sshd@16-172.31.28.170:22-4.153.228.146:39160.service: Deactivated successfully. Jan 24 00:33:51.821481 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:33:51.822323 systemd-logind[1963]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:33:51.823738 systemd-logind[1963]: Removed session 17. 
Jan 24 00:33:56.920116 systemd[1]: Started sshd@17-172.31.28.170:22-4.153.228.146:59204.service - OpenSSH per-connection server daemon (4.153.228.146:59204). Jan 24 00:33:57.449099 sshd[4595]: Accepted publickey for core from 4.153.228.146 port 59204 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:57.450710 sshd[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:57.455446 systemd-logind[1963]: New session 18 of user core. Jan 24 00:33:57.466169 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:33:57.887179 sshd[4595]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:57.890180 systemd[1]: sshd@17-172.31.28.170:22-4.153.228.146:59204.service: Deactivated successfully. Jan 24 00:33:57.892370 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:33:57.895305 systemd-logind[1963]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:33:57.896509 systemd-logind[1963]: Removed session 18.
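
[Editor's note] Every connection in this journal follows the same lifecycle: sshd accepts the public key, pam_unix opens a session for core, systemd-logind registers session N, and shortly afterwards the session is closed and the per-connection service is deactivated. As a post-processing illustration only (nothing the host itself runs), the sketch below pairs the pam_unix "session opened" and "session closed" entries by sshd PID to estimate how long each session stayed open. It assumes one journal entry per line, as journalctl normally prints them, and takes the year as an argument because syslog-style timestamps omit it.

import re
import sys
from datetime import datetime

LINE_RE = re.compile(
    r'^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) sshd\[(?P<pid>\d+)\]: '
    r'pam_unix\(sshd:session\): session (?P<event>opened|closed) for user (?P<user>\w+)'
)

def session_durations(lines, year=2026):
    """Yield (user, pid, seconds) for each opened/closed pam_unix pair, keyed by sshd PID."""
    opened = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(f'{year} {m["ts"]}', '%Y %b %d %H:%M:%S.%f')
        pid = m["pid"]
        if m["event"] == "opened":
            opened[pid] = ts
        elif pid in opened:
            yield m["user"], pid, (ts - opened.pop(pid)).total_seconds()

if __name__ == "__main__":
    # Example: journalctl output piped in on stdin, one entry per line.
    for user, pid, secs in session_durations(sys.stdin):
        print(f"sshd[{pid}] user={user} session lasted {secs:.3f}s")
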