Jan 20 06:47:52.901553 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026 Jan 20 06:47:52.901580 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:47:52.901593 kernel: BIOS-provided physical RAM map: Jan 20 06:47:52.901600 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 06:47:52.901607 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 20 06:47:52.901614 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 20 06:47:52.901622 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 20 06:47:52.901630 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 20 06:47:52.901637 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 20 06:47:52.901644 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 20 06:47:52.901654 kernel: NX (Execute Disable) protection: active Jan 20 06:47:52.901661 kernel: APIC: Static calls initialized Jan 20 06:47:52.901668 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 20 06:47:52.901676 kernel: extended physical RAM map: Jan 20 06:47:52.901685 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 06:47:52.901695 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 20 06:47:52.901704 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jan 20 06:47:52.901712 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 20 06:47:52.901720 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 20 06:47:52.901728 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 20 06:47:52.901736 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 20 06:47:52.901744 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 20 06:47:52.901752 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 20 06:47:52.901760 kernel: efi: EFI v2.7 by EDK II Jan 20 06:47:52.901768 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 20 06:47:52.901779 kernel: secureboot: Secure boot disabled Jan 20 06:47:52.901787 kernel: SMBIOS 2.7 present. 
Jan 20 06:47:52.901795 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 20 06:47:52.901803 kernel: DMI: Memory slots populated: 1/1 Jan 20 06:47:52.901811 kernel: Hypervisor detected: KVM Jan 20 06:47:52.901819 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 20 06:47:52.901827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 06:47:52.901835 kernel: kvm-clock: using sched offset of 6454068777 cycles Jan 20 06:47:52.901844 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 06:47:52.901853 kernel: tsc: Detected 2500.004 MHz processor Jan 20 06:47:52.901864 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 06:47:52.901872 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 06:47:52.901881 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 20 06:47:52.901889 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 06:47:52.901898 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 06:47:52.901910 kernel: Using GB pages for direct mapping Jan 20 06:47:52.902997 kernel: ACPI: Early table checksum verification disabled Jan 20 06:47:52.903008 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 20 06:47:52.903957 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 20 06:47:52.903969 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 20 06:47:52.903978 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 20 06:47:52.903987 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 20 06:47:52.904000 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 20 06:47:52.904010 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 20 06:47:52.904019 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 20 06:47:52.904028 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 20 06:47:52.904038 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 20 06:47:52.904047 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 20 06:47:52.904056 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 20 06:47:52.904067 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 20 06:47:52.904077 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 20 06:47:52.904086 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 20 06:47:52.904095 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 20 06:47:52.904104 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 20 06:47:52.904113 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 20 06:47:52.904122 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 20 06:47:52.904133 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 20 06:47:52.904142 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 20 06:47:52.904151 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 20 06:47:52.904160 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Jan 20 06:47:52.904169 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jan 20 06:47:52.904178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 20 06:47:52.904187 kernel: NUMA: Initialized distance table, cnt=1 Jan 20 06:47:52.904198 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Jan 20 06:47:52.904207 kernel: Zone ranges: Jan 20 06:47:52.904216 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 06:47:52.904225 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 20 06:47:52.904234 kernel: Normal empty Jan 20 06:47:52.904244 kernel: Device empty Jan 20 06:47:52.904252 kernel: Movable zone start for each node Jan 20 06:47:52.904261 kernel: Early memory node ranges Jan 20 06:47:52.904272 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 06:47:52.904281 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 20 06:47:52.904290 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 20 06:47:52.904299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 20 06:47:52.904308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 06:47:52.904317 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 06:47:52.904326 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 20 06:47:52.904337 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 20 06:47:52.904346 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 20 06:47:52.904355 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 06:47:52.904365 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 20 06:47:52.904373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 06:47:52.904383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 06:47:52.904392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 06:47:52.904400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 06:47:52.904412 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 06:47:52.904421 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 06:47:52.904430 kernel: TSC deadline timer available Jan 20 06:47:52.904439 kernel: CPU topo: Max. logical packages: 1 Jan 20 06:47:52.904448 kernel: CPU topo: Max. logical dies: 1 Jan 20 06:47:52.904457 kernel: CPU topo: Max. dies per package: 1 Jan 20 06:47:52.904466 kernel: CPU topo: Max. threads per core: 2 Jan 20 06:47:52.904477 kernel: CPU topo: Num. cores per package: 1 Jan 20 06:47:52.904486 kernel: CPU topo: Num. 
threads per package: 2 Jan 20 06:47:52.904495 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 20 06:47:52.904504 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 06:47:52.904513 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 20 06:47:52.904522 kernel: Booting paravirtualized kernel on KVM Jan 20 06:47:52.904531 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 06:47:52.904541 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 20 06:47:52.904552 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 20 06:47:52.904561 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 20 06:47:52.904570 kernel: pcpu-alloc: [0] 0 1 Jan 20 06:47:52.904579 kernel: kvm-guest: PV spinlocks enabled Jan 20 06:47:52.904588 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 06:47:52.904599 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:47:52.904611 kernel: random: crng init done Jan 20 06:47:52.904620 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 06:47:52.904629 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 20 06:47:52.904638 kernel: Fallback order for Node 0: 0 Jan 20 06:47:52.904647 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jan 20 06:47:52.904656 kernel: Policy zone: DMA32 Jan 20 06:47:52.904676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 06:47:52.904685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 06:47:52.904695 kernel: Kernel/User page tables isolation: enabled Jan 20 06:47:52.904706 kernel: ftrace: allocating 40128 entries in 157 pages Jan 20 06:47:52.904716 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 06:47:52.904726 kernel: Dynamic Preempt: voluntary Jan 20 06:47:52.904735 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 06:47:52.904746 kernel: rcu: RCU event tracing is enabled. Jan 20 06:47:52.904755 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 06:47:52.904767 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 06:47:52.904777 kernel: Rude variant of Tasks RCU enabled. Jan 20 06:47:52.904786 kernel: Tracing variant of Tasks RCU enabled. Jan 20 06:47:52.904795 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 06:47:52.904804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 06:47:52.904814 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:47:52.904826 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:47:52.904836 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 20 06:47:52.904845 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 20 06:47:52.904855 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 06:47:52.904865 kernel: Console: colour dummy device 80x25 Jan 20 06:47:52.904874 kernel: printk: legacy console [tty0] enabled Jan 20 06:47:52.904883 kernel: printk: legacy console [ttyS0] enabled Jan 20 06:47:52.904895 kernel: ACPI: Core revision 20240827 Jan 20 06:47:52.904905 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 20 06:47:52.904924 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 06:47:52.904934 kernel: x2apic enabled Jan 20 06:47:52.905967 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 06:47:52.905978 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jan 20 06:47:52.905988 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Jan 20 06:47:52.906003 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 20 06:47:52.906013 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 20 06:47:52.906022 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 06:47:52.906031 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 06:47:52.906040 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 06:47:52.906050 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 20 06:47:52.906060 kernel: RETBleed: Vulnerable Jan 20 06:47:52.906069 kernel: Speculative Store Bypass: Vulnerable Jan 20 06:47:52.906078 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 06:47:52.906090 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 06:47:52.906099 kernel: GDS: Unknown: Dependent on hypervisor status Jan 20 06:47:52.906108 kernel: active return thunk: its_return_thunk Jan 20 06:47:52.906117 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 20 06:47:52.906126 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 06:47:52.906136 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 06:47:52.906145 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 06:47:52.906154 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 20 06:47:52.906163 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 20 06:47:52.906172 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 20 06:47:52.906183 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 20 06:47:52.906193 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 20 06:47:52.906202 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 20 06:47:52.906211 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 06:47:52.906220 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 20 06:47:52.906229 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 20 06:47:52.906239 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 20 06:47:52.906248 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 20 06:47:52.906257 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 20 06:47:52.906266 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 20 
06:47:52.906276 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 20 06:47:52.906288 kernel: Freeing SMP alternatives memory: 32K Jan 20 06:47:52.906297 kernel: pid_max: default: 32768 minimum: 301 Jan 20 06:47:52.906306 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 06:47:52.906315 kernel: landlock: Up and running. Jan 20 06:47:52.906324 kernel: SELinux: Initializing. Jan 20 06:47:52.906333 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 20 06:47:52.906342 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 20 06:47:52.906352 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 20 06:47:52.906362 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 20 06:47:52.906371 kernel: signal: max sigframe size: 3632 Jan 20 06:47:52.906383 kernel: rcu: Hierarchical SRCU implementation. Jan 20 06:47:52.906394 kernel: rcu: Max phase no-delay instances is 400. Jan 20 06:47:52.906403 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 06:47:52.906420 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 06:47:52.906430 kernel: smp: Bringing up secondary CPUs ... Jan 20 06:47:52.906439 kernel: smpboot: x86: Booting SMP configuration: Jan 20 06:47:52.906449 kernel: .... node #0, CPUs: #1 Jan 20 06:47:52.906462 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 20 06:47:52.906473 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 20 06:47:52.906482 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 06:47:52.906492 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Jan 20 06:47:52.906502 kernel: Memory: 1924432K/2037804K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108808K reserved, 0K cma-reserved) Jan 20 06:47:52.906512 kernel: devtmpfs: initialized Jan 20 06:47:52.906522 kernel: x86/mm: Memory block size: 128MB Jan 20 06:47:52.906534 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 20 06:47:52.906544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 06:47:52.906554 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 06:47:52.906564 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 06:47:52.906574 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 06:47:52.906584 kernel: audit: initializing netlink subsys (disabled) Jan 20 06:47:52.906593 kernel: audit: type=2000 audit(1768891668.914:1): state=initialized audit_enabled=0 res=1 Jan 20 06:47:52.906605 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 06:47:52.906615 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 06:47:52.906625 kernel: cpuidle: using governor menu Jan 20 06:47:52.906635 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 06:47:52.906644 kernel: dca service started, version 1.12.1 Jan 20 06:47:52.906654 kernel: PCI: Using configuration type 1 for base access Jan 20 06:47:52.906664 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jan 20 06:47:52.906676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 06:47:52.906685 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 06:47:52.906695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 06:47:52.906704 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 06:47:52.906714 kernel: ACPI: Added _OSI(Module Device) Jan 20 06:47:52.906723 kernel: ACPI: Added _OSI(Processor Device) Jan 20 06:47:52.906733 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 06:47:52.906745 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 20 06:47:52.906754 kernel: ACPI: Interpreter enabled Jan 20 06:47:52.906764 kernel: ACPI: PM: (supports S0 S5) Jan 20 06:47:52.906774 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 06:47:52.906784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 06:47:52.906794 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 06:47:52.906803 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 20 06:47:52.906815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 06:47:52.911148 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 20 06:47:52.911307 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 20 06:47:52.911439 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 20 06:47:52.911452 kernel: acpiphp: Slot [3] registered Jan 20 06:47:52.911462 kernel: acpiphp: Slot [4] registered Jan 20 06:47:52.911478 kernel: acpiphp: Slot [5] registered Jan 20 06:47:52.911487 kernel: acpiphp: Slot [6] registered Jan 20 06:47:52.911497 kernel: acpiphp: Slot [7] registered Jan 20 06:47:52.911506 kernel: acpiphp: Slot [8] registered Jan 20 06:47:52.911516 kernel: acpiphp: Slot [9] registered Jan 20 06:47:52.911526 kernel: acpiphp: Slot [10] registered Jan 20 06:47:52.911536 kernel: acpiphp: Slot [11] registered Jan 20 06:47:52.911548 kernel: acpiphp: Slot [12] registered Jan 20 06:47:52.911557 kernel: acpiphp: Slot [13] registered Jan 20 06:47:52.911567 kernel: acpiphp: Slot [14] registered Jan 20 06:47:52.911577 kernel: acpiphp: Slot [15] registered Jan 20 06:47:52.911586 kernel: acpiphp: Slot [16] registered Jan 20 06:47:52.911596 kernel: acpiphp: Slot [17] registered Jan 20 06:47:52.911605 kernel: acpiphp: Slot [18] registered Jan 20 06:47:52.911615 kernel: acpiphp: Slot [19] registered Jan 20 06:47:52.911627 kernel: acpiphp: Slot [20] registered Jan 20 06:47:52.911637 kernel: acpiphp: Slot [21] registered Jan 20 06:47:52.911646 kernel: acpiphp: Slot [22] registered Jan 20 06:47:52.911656 kernel: acpiphp: Slot [23] registered Jan 20 06:47:52.911665 kernel: acpiphp: Slot [24] registered Jan 20 06:47:52.911675 kernel: acpiphp: Slot [25] registered Jan 20 06:47:52.911684 kernel: acpiphp: Slot [26] registered Jan 20 06:47:52.911696 kernel: acpiphp: Slot [27] registered Jan 20 06:47:52.911716 kernel: acpiphp: Slot [28] registered Jan 20 06:47:52.911726 kernel: acpiphp: Slot [29] registered Jan 20 06:47:52.911736 kernel: acpiphp: Slot [30] registered Jan 20 06:47:52.911746 kernel: acpiphp: Slot [31] registered Jan 20 06:47:52.911755 kernel: PCI host bridge to bus 0000:00 Jan 20 06:47:52.911897 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 
06:47:52.914111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 06:47:52.914242 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 06:47:52.914359 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 20 06:47:52.914580 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 20 06:47:52.914698 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 06:47:52.914857 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 20 06:47:52.915205 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 20 06:47:52.915354 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jan 20 06:47:52.915488 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 20 06:47:52.915614 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 20 06:47:52.915739 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 20 06:47:52.915868 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 20 06:47:52.916008 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 20 06:47:52.916135 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 20 06:47:52.916260 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 20 06:47:52.916392 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 06:47:52.916518 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jan 20 06:47:52.916647 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 20 06:47:52.916772 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 06:47:52.916905 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jan 20 06:47:52.918904 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jan 20 06:47:52.919072 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jan 20 06:47:52.919208 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jan 20 06:47:52.919222 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 06:47:52.919233 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 06:47:52.919242 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 06:47:52.919252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 06:47:52.919262 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 20 06:47:52.919272 kernel: iommu: Default domain type: Translated Jan 20 06:47:52.919285 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 06:47:52.919295 kernel: efivars: Registered efivars operations Jan 20 06:47:52.919304 kernel: PCI: Using ACPI for IRQ routing Jan 20 06:47:52.919314 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 06:47:52.919324 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jan 20 06:47:52.919333 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 20 06:47:52.919341 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 20 06:47:52.919471 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 20 06:47:52.919602 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 20 06:47:52.919730 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 06:47:52.919742 kernel: vgaarb: loaded Jan 20 06:47:52.919753 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 20 06:47:52.919762 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 20 06:47:52.919772 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 06:47:52.919782 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 06:47:52.919795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 06:47:52.919804 kernel: pnp: PnP ACPI init Jan 20 06:47:52.919814 kernel: pnp: PnP ACPI: found 5 devices Jan 20 06:47:52.919824 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 06:47:52.919834 kernel: NET: Registered PF_INET protocol family Jan 20 06:47:52.919844 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 06:47:52.919854 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 20 06:47:52.919866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 06:47:52.919877 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 20 06:47:52.919887 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 20 06:47:52.919896 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 20 06:47:52.919906 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 20 06:47:52.919931 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 20 06:47:52.919941 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 06:47:52.919954 kernel: NET: Registered PF_XDP protocol family Jan 20 06:47:52.920079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 06:47:52.920196 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 06:47:52.920312 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 06:47:52.920426 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 20 06:47:52.920540 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 20 06:47:52.920674 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 20 06:47:52.920687 kernel: PCI: CLS 0 bytes, default 64 Jan 20 06:47:52.920697 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 20 06:47:52.920707 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jan 20 06:47:52.920717 kernel: clocksource: Switched to clocksource tsc Jan 20 06:47:52.920727 kernel: Initialise system trusted keyrings Jan 20 06:47:52.920736 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 20 06:47:52.920749 kernel: Key type asymmetric registered Jan 20 06:47:52.920758 kernel: Asymmetric key parser 'x509' registered Jan 20 06:47:52.920768 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 06:47:52.920778 kernel: io scheduler mq-deadline registered Jan 20 06:47:52.920788 kernel: io scheduler kyber registered Jan 20 06:47:52.920798 kernel: io scheduler bfq registered Jan 20 06:47:52.920808 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 06:47:52.920820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 06:47:52.920830 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 06:47:52.920839 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 06:47:52.920849 kernel: i8042: Warning: Keylock active Jan 20 
06:47:52.920859 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 06:47:52.920868 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 06:47:52.921483 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 20 06:47:52.921620 kernel: rtc_cmos 00:00: registered as rtc0 Jan 20 06:47:52.921742 kernel: rtc_cmos 00:00: setting system clock to 2026-01-20T06:47:49 UTC (1768891669) Jan 20 06:47:52.921862 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 20 06:47:52.921891 kernel: intel_pstate: CPU model not supported Jan 20 06:47:52.921905 kernel: efifb: probing for efifb Jan 20 06:47:52.921933 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jan 20 06:47:52.921947 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 20 06:47:52.921957 kernel: efifb: scrolling: redraw Jan 20 06:47:52.921967 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 06:47:52.921978 kernel: Console: switching to colour frame buffer device 100x37 Jan 20 06:47:52.921988 kernel: fb0: EFI VGA frame buffer device Jan 20 06:47:52.921998 kernel: pstore: Using crash dump compression: deflate Jan 20 06:47:52.922008 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 06:47:52.922019 kernel: NET: Registered PF_INET6 protocol family Jan 20 06:47:52.922032 kernel: Segment Routing with IPv6 Jan 20 06:47:52.922042 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 06:47:52.922052 kernel: NET: Registered PF_PACKET protocol family Jan 20 06:47:52.922062 kernel: Key type dns_resolver registered Jan 20 06:47:52.922072 kernel: IPI shorthand broadcast: enabled Jan 20 06:47:52.922082 kernel: sched_clock: Marking stable (1454001966, 146475854)->(1671499388, -71021568) Jan 20 06:47:52.922093 kernel: registered taskstats version 1 Jan 20 06:47:52.922105 kernel: Loading compiled-in X.509 certificates Jan 20 06:47:52.922116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 20 06:47:52.922128 kernel: Demotion targets for Node 0: null Jan 20 06:47:52.922138 kernel: Key type .fscrypt registered Jan 20 06:47:52.922148 kernel: Key type fscrypt-provisioning registered Jan 20 06:47:52.922159 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 06:47:52.922169 kernel: ima: Allocated hash algorithm: sha1 Jan 20 06:47:52.922181 kernel: ima: No architecture policies found Jan 20 06:47:52.922192 kernel: clk: Disabling unused clocks Jan 20 06:47:52.922201 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 06:47:52.922212 kernel: Write protecting the kernel read-only data: 47104k Jan 20 06:47:52.922224 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 06:47:52.922237 kernel: Run /init as init process Jan 20 06:47:52.922247 kernel: with arguments: Jan 20 06:47:52.922257 kernel: /init Jan 20 06:47:52.922267 kernel: with environment: Jan 20 06:47:52.922277 kernel: HOME=/ Jan 20 06:47:52.922287 kernel: TERM=linux Jan 20 06:47:52.922396 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 20 06:47:52.922426 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 20 06:47:52.922541 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 20 06:47:52.922556 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 06:47:52.922566 kernel: GPT:25804799 != 33554431 Jan 20 06:47:52.922576 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 20 06:47:52.922590 kernel: GPT:25804799 != 33554431 Jan 20 06:47:52.922600 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 06:47:52.922610 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 20 06:47:52.922620 kernel: SCSI subsystem initialized Jan 20 06:47:52.922631 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 06:47:52.922641 kernel: device-mapper: uevent: version 1.0.3 Jan 20 06:47:52.922652 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 06:47:52.922665 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 06:47:52.922675 kernel: raid6: avx512x4 gen() 17872 MB/s Jan 20 06:47:52.922685 kernel: raid6: avx512x2 gen() 17881 MB/s Jan 20 06:47:52.922695 kernel: raid6: avx512x1 gen() 17935 MB/s Jan 20 06:47:52.922705 kernel: raid6: avx2x4 gen() 17594 MB/s Jan 20 06:47:52.922715 kernel: raid6: avx2x2 gen() 17821 MB/s Jan 20 06:47:52.922726 kernel: raid6: avx2x1 gen() 13812 MB/s Jan 20 06:47:52.922736 kernel: raid6: using algorithm avx512x1 gen() 17935 MB/s Jan 20 06:47:52.922749 kernel: raid6: .... xor() 21125 MB/s, rmw enabled Jan 20 06:47:52.922759 kernel: raid6: using avx512x2 recovery algorithm Jan 20 06:47:52.922770 kernel: xor: automatically using best checksumming function avx Jan 20 06:47:52.922780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 06:47:52.922790 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 06:47:52.922801 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (152) Jan 20 06:47:52.922811 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 06:47:52.922824 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:47:52.922834 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 20 06:47:52.922844 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 06:47:52.922855 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 06:47:52.922865 kernel: loop: module loaded Jan 20 06:47:52.922875 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 06:47:52.922885 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 06:47:52.922899 systemd[1]: Successfully made /usr/ read-only. Jan 20 06:47:52.922999 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:47:52.923016 systemd[1]: Detected virtualization amazon. Jan 20 06:47:52.923027 systemd[1]: Detected architecture x86-64. Jan 20 06:47:52.923037 systemd[1]: Running in initrd. Jan 20 06:47:52.923050 systemd[1]: No hostname configured, using default hostname. Jan 20 06:47:52.923065 systemd[1]: Hostname set to . Jan 20 06:47:52.923075 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:47:52.923089 systemd[1]: Queued start job for default target initrd.target. Jan 20 06:47:52.923099 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 20 06:47:52.923110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:47:52.923121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:47:52.923135 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 06:47:52.923147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:47:52.923159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 06:47:52.923170 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 06:47:52.923181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:47:52.923195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:47:52.923206 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:47:52.923217 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:47:52.923228 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:47:52.923239 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:47:52.923250 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:47:52.923261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:47:52.923274 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:47:52.923285 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:47:52.923295 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 06:47:52.923306 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 06:47:52.923317 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:47:52.923328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:47:52.923339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:47:52.923352 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:47:52.923363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 06:47:52.923374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 06:47:52.923386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:47:52.923396 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 06:47:52.923407 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 06:47:52.923419 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 06:47:52.923432 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:47:52.923443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:47:52.923455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:47:52.923468 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 06:47:52.923479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:47:52.923490 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 20 06:47:52.923501 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:47:52.923513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 06:47:52.923524 kernel: Bridge firewalling registered Jan 20 06:47:52.923534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:47:52.923548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:47:52.923583 systemd-journald[289]: Collecting audit messages is enabled. Jan 20 06:47:52.923608 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:47:52.923623 kernel: audit: type=1130 audit(1768891672.902:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.923634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:47:52.923645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:47:52.923656 kernel: audit: type=1130 audit(1768891672.921:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.923669 systemd-journald[289]: Journal started Jan 20 06:47:52.923693 systemd-journald[289]: Runtime Journal (/run/log/journal/ec29a6e6df653aa6ff4a958c09fa77f6) is 4.7M, max 38M, 33.2M free. Jan 20 06:47:52.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.889581 systemd-modules-load[291]: Inserted module 'br_netfilter' Jan 20 06:47:52.931954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 06:47:52.938976 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:47:52.939050 kernel: audit: type=1130 audit(1768891672.934:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.940254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:47:52.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.947979 kernel: audit: type=1130 audit(1768891672.942:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:47:52.949227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:47:52.958713 kernel: audit: type=1130 audit(1768891672.948:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.958754 kernel: audit: type=1334 audit(1768891672.952:7): prog-id=6 op=LOAD Jan 20 06:47:52.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.952000 audit: BPF prog-id=6 op=LOAD Jan 20 06:47:52.959361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:47:52.963105 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:47:52.982780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:47:52.992954 kernel: audit: type=1130 audit(1768891672.983:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:52.989310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 06:47:52.995474 systemd-tmpfiles[315]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 06:47:53.005033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:47:53.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.013948 kernel: audit: type=1130 audit(1768891673.006:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.028618 dracut-cmdline[328]: dracut-109 Jan 20 06:47:53.034349 dracut-cmdline[328]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:47:53.131498 systemd-resolved[314]: Positive Trust Anchors: Jan 20 06:47:53.131523 systemd-resolved[314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:47:53.131528 systemd-resolved[314]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:47:53.131593 systemd-resolved[314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:47:53.167994 systemd-resolved[314]: Defaulting to hostname 'linux'. Jan 20 06:47:53.169865 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:47:53.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.171467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:47:53.177797 kernel: audit: type=1130 audit(1768891673.170:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.262968 kernel: Loading iSCSI transport class v2.0-870. Jan 20 06:47:53.342117 kernel: iscsi: registered transport (tcp) Jan 20 06:47:53.407179 kernel: iscsi: registered transport (qla4xxx) Jan 20 06:47:53.407251 kernel: QLogic iSCSI HBA Driver Jan 20 06:47:53.435477 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:47:53.457457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:47:53.465015 kernel: audit: type=1130 audit(1768891673.457:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.460679 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:47:53.509121 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 06:47:53.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.513096 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 06:47:53.517038 kernel: audit: type=1130 audit(1768891673.508:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.518181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 06:47:53.554897 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:47:53.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:47:53.560121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:47:53.570706 kernel: audit: type=1130 audit(1768891673.555:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.570750 kernel: audit: type=1334 audit(1768891673.557:14): prog-id=7 op=LOAD Jan 20 06:47:53.570766 kernel: audit: type=1334 audit(1768891673.557:15): prog-id=8 op=LOAD Jan 20 06:47:53.557000 audit: BPF prog-id=7 op=LOAD Jan 20 06:47:53.557000 audit: BPF prog-id=8 op=LOAD Jan 20 06:47:53.601998 systemd-udevd[561]: Using default interface naming scheme 'v257'. Jan 20 06:47:53.620673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:47:53.625608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 06:47:53.635888 kernel: audit: type=1130 audit(1768891673.623:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.661892 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:47:53.669829 kernel: audit: type=1130 audit(1768891673.661:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.666102 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:47:53.663000 audit: BPF prog-id=9 op=LOAD Jan 20 06:47:53.675236 kernel: audit: type=1334 audit(1768891673.663:18): prog-id=9 op=LOAD Jan 20 06:47:53.675399 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation Jan 20 06:47:53.711710 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:47:53.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.717101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:47:53.722135 kernel: audit: type=1130 audit(1768891673.712:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.744085 systemd-networkd[670]: lo: Link UP Jan 20 06:47:53.744097 systemd-networkd[670]: lo: Gained carrier Jan 20 06:47:53.744829 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:47:53.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.748159 systemd[1]: Reached target network.target - Network. 
Jan 20 06:47:53.792969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:47:53.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.797108 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 06:47:53.926185 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 20 06:47:53.926668 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 20 06:47:53.932037 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 20 06:47:53.937013 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:77:87:a4:a3:eb Jan 20 06:47:53.938259 (udev-worker)[713]: Network interface NamePolicy= disabled on kernel command line. Jan 20 06:47:53.951177 systemd-networkd[670]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:47:53.951187 systemd-networkd[670]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:47:53.959318 systemd-networkd[670]: eth0: Link UP Jan 20 06:47:53.960519 systemd-networkd[670]: eth0: Gained carrier Jan 20 06:47:53.960536 systemd-networkd[670]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:47:53.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:53.961228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:47:53.961363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:47:53.963065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:47:53.967238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:47:53.972035 systemd-networkd[670]: eth0: DHCPv4 address 172.31.26.220/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 20 06:47:54.007973 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 06:47:54.043095 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 06:47:54.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.042593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:47:54.083303 kernel: AES CTR mode by8 optimization enabled Jan 20 06:47:54.105019 kernel: nvme nvme0: using unchecked data buffer Jan 20 06:47:54.206550 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 20 06:47:54.208351 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 06:47:54.228631 disk-uuid[825]: Primary Header is updated. Jan 20 06:47:54.228631 disk-uuid[825]: Secondary Entries is updated. Jan 20 06:47:54.228631 disk-uuid[825]: Secondary Header is updated. Jan 20 06:47:54.293156 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jan 20 06:47:54.317441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 20 06:47:54.374162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 20 06:47:54.647157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 06:47:54.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.648473 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:47:54.650037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:47:54.650538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:47:54.652532 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 06:47:54.693572 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:47:54.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:55.361397 disk-uuid[826]: Warning: The kernel is still using the old partition table. Jan 20 06:47:55.361397 disk-uuid[826]: The new table will be used at the next reboot or after you Jan 20 06:47:55.361397 disk-uuid[826]: run partprobe(8) or kpartx(8) Jan 20 06:47:55.361397 disk-uuid[826]: The operation has completed successfully. Jan 20 06:47:55.368888 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 06:47:55.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:55.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:55.369056 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 06:47:55.371138 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 06:47:55.419955 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1071) Jan 20 06:47:55.423198 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:47:55.423270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:47:55.464036 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:47:55.464114 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:47:55.472944 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:47:55.474236 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 06:47:55.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:55.475954 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
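
The disk-uuid step above rewrites the GPT and warns that the kernel keeps using the old partition table until the machine reboots or partprobe(8)/kpartx(8) is run. A hedged Python sketch of that follow-up step; the device name is an assumption taken from this log, where the root volume appears as nvme0n1:

```python
#!/usr/bin/env python3
"""Sketch only: ask the kernel to re-read a rewritten partition table, as the
disk-uuid warning above suggests (partprobe(8))."""
import subprocess
import sys

DEVICE = "/dev/nvme0n1"  # assumption: the EBS root volume seen in this log

def reread_partitions(device: str) -> None:
    # partprobe tells the kernel about partition table changes without a reboot.
    result = subprocess.run(["partprobe", device], capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"partprobe failed: {result.stderr.strip()}")
    print(f"kernel re-read the partition table on {device}")

if __name__ == "__main__":
    reread_partitions(DEVICE)
```
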
Jan 20 06:47:55.613156 systemd-networkd[670]: eth0: Gained IPv6LL Jan 20 06:47:56.660614 ignition[1090]: Ignition 2.24.0 Jan 20 06:47:56.660631 ignition[1090]: Stage: fetch-offline Jan 20 06:47:56.660708 ignition[1090]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:56.660718 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:56.660947 ignition[1090]: Ignition finished successfully Jan 20 06:47:56.664366 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:47:56.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:56.665975 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 06:47:56.694941 ignition[1096]: Ignition 2.24.0 Jan 20 06:47:56.694957 ignition[1096]: Stage: fetch Jan 20 06:47:56.695173 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:56.695181 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:56.695247 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:56.704095 ignition[1096]: PUT result: OK Jan 20 06:47:56.706131 ignition[1096]: parsed url from cmdline: "" Jan 20 06:47:56.706229 ignition[1096]: no config URL provided Jan 20 06:47:56.706240 ignition[1096]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:47:56.706256 ignition[1096]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:47:56.706274 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:56.706989 ignition[1096]: PUT result: OK Jan 20 06:47:56.707045 ignition[1096]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 20 06:47:56.707717 ignition[1096]: GET result: OK Jan 20 06:47:56.707777 ignition[1096]: parsing config with SHA512: 087d45a1aa936a708968ae08543446b4984e72652ca3f8d641e42970d80bd074909bcd7e0dda06d2dad3a684a7386cf91fd58060ea891e2887984142c0e58b41 Jan 20 06:47:56.715231 unknown[1096]: fetched base config from "system" Jan 20 06:47:56.715258 unknown[1096]: fetched base config from "system" Jan 20 06:47:56.715800 ignition[1096]: fetch: fetch complete Jan 20 06:47:56.715267 unknown[1096]: fetched user config from "aws" Jan 20 06:47:56.715807 ignition[1096]: fetch: fetch passed Jan 20 06:47:56.715869 ignition[1096]: Ignition finished successfully Jan 20 06:47:56.719082 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 06:47:56.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:56.720686 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 06:47:56.750643 ignition[1102]: Ignition 2.24.0 Jan 20 06:47:56.750660 ignition[1102]: Stage: kargs Jan 20 06:47:56.750959 ignition[1102]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:56.750972 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:56.751080 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:56.752054 ignition[1102]: PUT result: OK Jan 20 06:47:56.756247 ignition[1102]: kargs: kargs passed Jan 20 06:47:56.756352 ignition[1102]: Ignition finished successfully Jan 20 06:47:56.758078 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
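
The fetch stage above first PUTs to the instance metadata token endpoint and then GETs the user data, logging the SHA512 of the config it parsed. A minimal Python sketch of those same two requests, using the exact endpoints from the log; it only works from inside an EC2 instance, and the token TTL value is an arbitrary choice, not something taken from the log:

```python
#!/usr/bin/env python3
"""Minimal sketch of the two requests Ignition's fetch stage logs above:
PUT /latest/api/token (IMDSv2 session token), then GET the user data and
hash it with SHA512."""
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token (the "PUT .../latest/api/token" above).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},  # TTL chosen arbitrarily
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Step 2: fetch the user data with the token (the "GET .../user-data" above).
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=2).read()

# Ignition logs the SHA512 of the parsed config; the same digest is easy to reproduce.
print("user-data SHA512:", hashlib.sha512(user_data).hexdigest())
```
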
Jan 20 06:47:56.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:56.760296 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 06:47:56.791377 ignition[1108]: Ignition 2.24.0 Jan 20 06:47:56.791393 ignition[1108]: Stage: disks Jan 20 06:47:56.791679 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:56.791691 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:56.791801 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:56.793308 ignition[1108]: PUT result: OK Jan 20 06:47:56.798561 ignition[1108]: disks: disks passed Jan 20 06:47:56.799577 ignition[1108]: Ignition finished successfully Jan 20 06:47:56.801341 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 06:47:56.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:56.802494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 06:47:56.803039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 06:47:56.803559 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:47:56.804139 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:47:56.804708 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:47:56.806562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 06:47:56.900322 systemd-fsck[1116]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 06:47:56.903062 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 06:47:56.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:56.905956 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 06:47:57.146978 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. Jan 20 06:47:57.148652 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 06:47:57.149995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 06:47:57.211184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:47:57.214023 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 06:47:57.215422 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 06:47:57.216123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 06:47:57.216152 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:47:57.227606 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 06:47:57.230254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 06:47:57.243960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1135) Jan 20 06:47:57.247949 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:47:57.248014 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:47:57.253988 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:47:57.254066 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:47:57.256471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:47:59.362025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 06:47:59.370107 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 20 06:47:59.370148 kernel: audit: type=1130 audit(1768891679.361:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.366076 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 06:47:59.377139 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 06:47:59.388318 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 06:47:59.390485 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:47:59.422232 ignition[1231]: INFO : Ignition 2.24.0 Jan 20 06:47:59.422232 ignition[1231]: INFO : Stage: mount Jan 20 06:47:59.424036 ignition[1231]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:59.424036 ignition[1231]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:59.424036 ignition[1231]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:59.426566 ignition[1231]: INFO : PUT result: OK Jan 20 06:47:59.431974 ignition[1231]: INFO : mount: mount passed Jan 20 06:47:59.432546 ignition[1231]: INFO : Ignition finished successfully Jan 20 06:47:59.434650 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 06:47:59.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.438094 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 06:47:59.444601 kernel: audit: type=1130 audit(1768891679.435:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.449338 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 06:47:59.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.455962 kernel: audit: type=1130 audit(1768891679.449:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:59.460649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 06:47:59.491944 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1243) Jan 20 06:47:59.496477 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:47:59.496542 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:47:59.504240 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:47:59.504321 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:47:59.506131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:47:59.536289 ignition[1259]: INFO : Ignition 2.24.0 Jan 20 06:47:59.536289 ignition[1259]: INFO : Stage: files Jan 20 06:47:59.537543 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:47:59.537543 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:47:59.537543 ignition[1259]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:47:59.537543 ignition[1259]: INFO : PUT result: OK Jan 20 06:47:59.541239 ignition[1259]: DEBUG : files: compiled without relabeling support, skipping Jan 20 06:47:59.542874 ignition[1259]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 06:47:59.542874 ignition[1259]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 06:47:59.646606 ignition[1259]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 06:47:59.647572 ignition[1259]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 06:47:59.648279 ignition[1259]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 06:47:59.647900 unknown[1259]: wrote ssh authorized keys file for user: core Jan 20 06:47:59.650976 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:47:59.651745 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 06:47:59.738036 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 06:47:59.906899 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:47:59.906899 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 
06:47:59.909141 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:47:59.915112 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 06:48:00.384941 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 06:48:01.782628 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:48:01.782628 ignition[1259]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 06:48:01.867718 ignition[1259]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:48:01.871228 ignition[1259]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:48:01.871228 ignition[1259]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 06:48:01.871228 ignition[1259]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 06:48:01.880782 ignition[1259]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 06:48:01.880782 ignition[1259]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:48:01.880782 ignition[1259]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:48:01.880782 ignition[1259]: INFO : files: files passed Jan 20 06:48:01.880782 ignition[1259]: INFO : Ignition finished successfully Jan 20 06:48:01.930775 kernel: audit: type=1130 audit(1768891681.876:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:01.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:01.876787 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 06:48:01.893117 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
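
The files stage above ends by writing /sysroot/etc/.ignition-result.json, which becomes /etc/.ignition-result.json once the system switches to the real root. A hedged sketch that inspects that result file without assuming anything about its schema; it simply pretty-prints whatever Ignition recorded:

```python
#!/usr/bin/env python3
"""Sketch: inspect the result file the files stage writes above."""
import json
from pathlib import Path

RESULT = Path("/etc/.ignition-result.json")

if RESULT.exists():
    # No schema is assumed; just re-indent the JSON for reading.
    print(json.dumps(json.loads(RESULT.read_text()), indent=2))
else:
    print(f"{RESULT} not found (not an Ignition-provisioned boot?)")
```
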
Jan 20 06:48:01.938335 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 06:48:01.952051 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 06:48:01.952206 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 06:48:01.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:01.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.005420 kernel: audit: type=1130 audit(1768891681.967:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.005504 kernel: audit: type=1131 audit(1768891681.967:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.112783 initrd-setup-root-after-ignition[1292]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:48:02.112783 initrd-setup-root-after-ignition[1292]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:48:02.125123 initrd-setup-root-after-ignition[1296]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:48:02.125349 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:48:02.140684 kernel: audit: type=1130 audit(1768891682.130:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.130578 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 06:48:02.146888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 06:48:02.266499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 06:48:02.266680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 06:48:02.280325 kernel: audit: type=1130 audit(1768891682.268:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.280368 kernel: audit: type=1131 audit(1768891682.268:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 06:48:02.269678 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 06:48:02.280892 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 06:48:02.282123 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 06:48:02.283567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 06:48:02.318887 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:48:02.327136 kernel: audit: type=1130 audit(1768891682.318:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.322123 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 06:48:02.348539 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:48:02.348904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:48:02.350158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:48:02.351327 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 06:48:02.352251 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 06:48:02.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.352502 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:48:02.353663 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 06:48:02.355478 systemd[1]: Stopped target basic.target - Basic System. Jan 20 06:48:02.356335 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 06:48:02.357083 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:48:02.357936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 06:48:02.358792 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:48:02.359815 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 06:48:02.360584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:48:02.361479 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 06:48:02.362672 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 06:48:02.363576 systemd[1]: Stopped target swap.target - Swaps. Jan 20 06:48:02.364341 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 06:48:02.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.364590 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:48:02.365633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:48:02.367212 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 20 06:48:02.368116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 06:48:02.368279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:48:02.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.368949 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 06:48:02.369185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 06:48:02.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.370645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 06:48:02.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.370905 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:48:02.371682 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 06:48:02.371899 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 06:48:02.375020 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 06:48:02.377661 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 06:48:02.377895 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:48:02.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.389252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 06:48:02.391088 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 06:48:02.391405 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:48:02.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.393591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 06:48:02.393841 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:48:02.397330 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 06:48:02.397567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:48:02.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:48:02.411171 ignition[1316]: INFO : Ignition 2.24.0 Jan 20 06:48:02.411171 ignition[1316]: INFO : Stage: umount Jan 20 06:48:02.415751 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:48:02.415751 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 20 06:48:02.415751 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 20 06:48:02.415751 ignition[1316]: INFO : PUT result: OK Jan 20 06:48:02.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.413527 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 06:48:02.413665 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 06:48:02.426410 ignition[1316]: INFO : umount: umount passed Jan 20 06:48:02.426410 ignition[1316]: INFO : Ignition finished successfully Jan 20 06:48:02.430066 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 06:48:02.430880 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 06:48:02.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.432326 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 06:48:02.432399 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 06:48:02.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.434564 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 06:48:02.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.434648 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 06:48:02.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.435902 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 06:48:02.436003 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 06:48:02.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.436478 systemd[1]: Stopped target network.target - Network. Jan 20 06:48:02.437185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 06:48:02.437270 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:48:02.437968 systemd[1]: Stopped target paths.target - Path Units. Jan 20 06:48:02.439008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 20 06:48:02.439489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:48:02.440625 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 06:48:02.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.441802 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 06:48:02.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.442981 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 06:48:02.443027 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:48:02.443372 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 06:48:02.443403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:48:02.444083 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 06:48:02.444124 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:48:02.444534 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 06:48:02.444611 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 06:48:02.445580 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 06:48:02.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.445646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 06:48:02.446443 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 06:48:02.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.447104 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 06:48:02.449286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 06:48:02.450237 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 06:48:02.450486 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 06:48:02.453375 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 06:48:02.453505 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 06:48:02.458859 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 06:48:02.459139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 06:48:02.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.461230 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 06:48:02.461369 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 06:48:02.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:48:02.462000 audit: BPF prog-id=6 op=UNLOAD Jan 20 06:48:02.463000 audit: BPF prog-id=9 op=UNLOAD Jan 20 06:48:02.464827 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 06:48:02.465855 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 06:48:02.466034 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:48:02.467867 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 06:48:02.468407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 06:48:02.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.468486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:48:02.470681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 06:48:02.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.470764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:48:02.473805 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 06:48:02.473888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 06:48:02.474561 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:48:02.489969 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 06:48:02.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.490195 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:48:02.492850 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 06:48:02.494038 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 06:48:02.495660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 06:48:02.495730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:48:02.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.497792 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 06:48:02.497887 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:48:02.500023 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 06:48:02.500085 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 06:48:02.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.501699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 20 06:48:02.501796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:48:02.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.507877 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 06:48:02.510534 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 06:48:02.510652 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:48:02.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.514161 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 06:48:02.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.514254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:48:02.515987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:48:02.516059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:48:02.517469 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 06:48:02.520159 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 06:48:02.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.529109 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 06:48:02.529251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 06:48:02.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:02.531372 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 06:48:02.534126 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 06:48:02.560586 systemd[1]: Switching root. Jan 20 06:48:02.677291 systemd-journald[289]: Journal stopped Jan 20 06:48:06.282700 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). 
Jan 20 06:48:06.282804 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 06:48:06.282835 kernel: SELinux: policy capability open_perms=1 Jan 20 06:48:06.282857 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 06:48:06.282878 kernel: SELinux: policy capability always_check_network=0 Jan 20 06:48:06.282899 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 06:48:06.283417 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 06:48:06.283447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 06:48:06.283469 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 06:48:06.283494 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 06:48:06.283518 systemd[1]: Successfully loaded SELinux policy in 111.285ms. Jan 20 06:48:06.283561 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.055ms. Jan 20 06:48:06.283585 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:48:06.283609 systemd[1]: Detected virtualization amazon. Jan 20 06:48:06.283631 systemd[1]: Detected architecture x86-64. Jan 20 06:48:06.283654 systemd[1]: Detected first boot. Jan 20 06:48:06.283681 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:48:06.283702 kernel: kauditd_printk_skb: 39 callbacks suppressed Jan 20 06:48:06.283729 kernel: audit: type=1334 audit(1768891684.493:83): prog-id=10 op=LOAD Jan 20 06:48:06.283756 kernel: audit: type=1334 audit(1768891684.493:84): prog-id=10 op=UNLOAD Jan 20 06:48:06.283778 kernel: audit: type=1334 audit(1768891684.493:85): prog-id=11 op=LOAD Jan 20 06:48:06.283799 kernel: audit: type=1334 audit(1768891684.493:86): prog-id=11 op=UNLOAD Jan 20 06:48:06.283822 zram_generator::config[1359]: No configuration found. Jan 20 06:48:06.283847 kernel: Guest personality initialized and is inactive Jan 20 06:48:06.283868 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 06:48:06.283889 kernel: Initialized host personality Jan 20 06:48:06.283937 kernel: NET: Registered PF_VSOCK protocol family Jan 20 06:48:06.283957 systemd[1]: Populated /etc with preset unit settings. Jan 20 06:48:06.283976 kernel: audit: type=1334 audit(1768891685.923:87): prog-id=12 op=LOAD Jan 20 06:48:06.283996 kernel: audit: type=1334 audit(1768891685.923:88): prog-id=3 op=UNLOAD Jan 20 06:48:06.284022 kernel: audit: type=1334 audit(1768891685.923:89): prog-id=13 op=LOAD Jan 20 06:48:06.284040 kernel: audit: type=1334 audit(1768891685.923:90): prog-id=14 op=LOAD Jan 20 06:48:06.284059 kernel: audit: type=1334 audit(1768891685.923:91): prog-id=4 op=UNLOAD Jan 20 06:48:06.284079 kernel: audit: type=1334 audit(1768891685.923:92): prog-id=5 op=UNLOAD Jan 20 06:48:06.284104 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 06:48:06.284128 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 06:48:06.284150 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 06:48:06.284180 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 06:48:06.284201 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
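
The kernel messages at the start of this block report the SELinux policy capabilities and the time systemd took to load the policy. As a small illustration, a hedged Python sketch that reads the current enforcing state from selinuxfs; it assumes selinuxfs is mounted at the conventional /sys/fs/selinux location and makes no claim about Flatcar's default mode:

```python
#!/usr/bin/env python3
"""Sketch: report the SELinux state corresponding to the policy-load messages above."""
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

if ENFORCE.exists():
    # The file contains "1" when enforcing, "0" when permissive.
    mode = ENFORCE.read_text().strip()
    print("SELinux mode:", "enforcing" if mode == "1" else "permissive")
else:
    print("selinuxfs not mounted; SELinux may be disabled on this host")
```
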
Jan 20 06:48:06.284220 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 06:48:06.284252 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 06:48:06.284273 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 06:48:06.284297 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 06:48:06.284319 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 06:48:06.284339 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 06:48:06.284361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:48:06.284384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:48:06.284405 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 06:48:06.284426 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 06:48:06.284451 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 06:48:06.284472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:48:06.284493 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 06:48:06.284514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:48:06.284535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:48:06.284557 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 06:48:06.284579 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 06:48:06.284603 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 06:48:06.284626 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 06:48:06.284647 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:48:06.284669 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:48:06.284690 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 06:48:06.284711 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:48:06.284731 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:48:06.284753 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 06:48:06.284778 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 06:48:06.284800 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 06:48:06.284822 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:48:06.284843 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 06:48:06.284863 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:48:06.284884 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 06:48:06.284905 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 06:48:06.284997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:48:06.285019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 20 06:48:06.285040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 06:48:06.285060 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 06:48:06.285081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 06:48:06.285101 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 06:48:06.285122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:06.285148 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 06:48:06.285168 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 06:48:06.285188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 06:48:06.285211 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 06:48:06.285231 systemd[1]: Reached target machines.target - Containers. Jan 20 06:48:06.285252 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 06:48:06.285275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:48:06.285296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:48:06.285316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 06:48:06.285335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:48:06.285357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:48:06.285386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:48:06.285410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 06:48:06.285432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:48:06.285452 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 06:48:06.285475 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 06:48:06.285495 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 06:48:06.285519 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 06:48:06.285540 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 06:48:06.285560 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:48:06.285584 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:48:06.285605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:48:06.285626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:48:06.285647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 06:48:06.285668 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 06:48:06.285688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 06:48:06.285709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:06.285732 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 06:48:06.285752 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 06:48:06.285772 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 06:48:06.285793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 06:48:06.285814 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 06:48:06.285834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 06:48:06.285855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:48:06.285881 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 06:48:06.285904 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 06:48:06.285940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:48:06.285964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:48:06.285987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:48:06.286012 kernel: fuse: init (API version 7.41) Jan 20 06:48:06.286034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:48:06.286057 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 06:48:06.286083 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 06:48:06.286145 systemd-journald[1437]: Collecting audit messages is enabled. Jan 20 06:48:06.286186 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:48:06.286210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:48:06.286236 systemd-journald[1437]: Journal started Jan 20 06:48:06.286279 systemd-journald[1437]: Runtime Journal (/run/log/journal/ec29a6e6df653aa6ff4a958c09fa77f6) is 4.7M, max 38M, 33.2M free. Jan 20 06:48:05.998000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 06:48:06.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.152000 audit: BPF prog-id=14 op=UNLOAD Jan 20 06:48:06.152000 audit: BPF prog-id=13 op=UNLOAD Jan 20 06:48:06.155000 audit: BPF prog-id=15 op=LOAD Jan 20 06:48:06.157000 audit: BPF prog-id=16 op=LOAD Jan 20 06:48:06.157000 audit: BPF prog-id=17 op=LOAD Jan 20 06:48:06.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:48:06.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.277000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 06:48:06.277000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc5e7b5510 a2=4000 a3=0 items=0 ppid=1 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:06.277000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 06:48:05.904323 systemd[1]: Queued start job for default target multi-user.target. Jan 20 06:48:05.924413 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 20 06:48:06.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:05.924836 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 06:48:06.289984 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:48:06.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.293354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 20 06:48:06.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.294871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:48:06.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.297398 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 06:48:06.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.308885 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:48:06.311463 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 06:48:06.315573 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 06:48:06.321056 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 06:48:06.321896 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 06:48:06.321967 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:48:06.324898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 06:48:06.328544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:48:06.328746 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:48:06.336213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 06:48:06.339638 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 06:48:06.341548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:48:06.346366 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 06:48:06.348442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:48:06.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.350855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:48:06.357230 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 06:48:06.362487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 06:48:06.364410 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 06:48:06.373439 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 20 06:48:06.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.407690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 06:48:06.413089 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 06:48:06.435747 systemd-journald[1437]: Time spent on flushing to /var/log/journal/ec29a6e6df653aa6ff4a958c09fa77f6 is 96.844ms for 1147 entries. Jan 20 06:48:06.435747 systemd-journald[1437]: System Journal (/var/log/journal/ec29a6e6df653aa6ff4a958c09fa77f6) is 8M, max 588.1M, 580.1M free. Jan 20 06:48:06.556860 systemd-journald[1437]: Received client request to flush runtime journal. Jan 20 06:48:06.556960 kernel: ACPI: bus type drm_connector registered Jan 20 06:48:06.557005 kernel: loop1: detected capacity change from 0 to 111560 Jan 20 06:48:06.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.452766 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 06:48:06.456001 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 06:48:06.461413 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 06:48:06.513582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:48:06.520513 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:48:06.520779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:48:06.544491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:48:06.560992 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 06:48:06.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.589955 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 06:48:06.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:48:06.591000 audit: BPF prog-id=18 op=LOAD Jan 20 06:48:06.591000 audit: BPF prog-id=19 op=LOAD Jan 20 06:48:06.592000 audit: BPF prog-id=20 op=LOAD Jan 20 06:48:06.598000 audit: BPF prog-id=21 op=LOAD Jan 20 06:48:06.596204 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 06:48:06.600607 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:48:06.604086 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:48:06.629252 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 06:48:06.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.635000 audit: BPF prog-id=22 op=LOAD Jan 20 06:48:06.635000 audit: BPF prog-id=23 op=LOAD Jan 20 06:48:06.635000 audit: BPF prog-id=24 op=LOAD Jan 20 06:48:06.638615 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 06:48:06.640000 audit: BPF prog-id=25 op=LOAD Jan 20 06:48:06.641000 audit: BPF prog-id=26 op=LOAD Jan 20 06:48:06.642000 audit: BPF prog-id=27 op=LOAD Jan 20 06:48:06.644213 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 06:48:06.713824 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Jan 20 06:48:06.714310 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Jan 20 06:48:06.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.725049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:48:06.754608 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 06:48:06.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.755577 systemd-nsresourced[1513]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 06:48:06.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.757115 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 06:48:06.889030 kernel: loop2: detected capacity change from 0 to 73176 Jan 20 06:48:06.893806 systemd-oomd[1509]: No swap; memory pressure usage will be degraded Jan 20 06:48:06.894622 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 06:48:06.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:06.925316 systemd-resolved[1510]: Positive Trust Anchors: Jan 20 06:48:06.925329 systemd-resolved[1510]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:48:06.925334 systemd-resolved[1510]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:48:06.925371 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:48:06.931619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 06:48:07.005293 systemd-resolved[1510]: Defaulting to hostname 'linux'. Jan 20 06:48:07.007239 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:48:07.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.008463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:48:07.196949 kernel: loop3: detected capacity change from 0 to 224512 Jan 20 06:48:07.326220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 06:48:07.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.326000 audit: BPF prog-id=8 op=UNLOAD Jan 20 06:48:07.326000 audit: BPF prog-id=7 op=UNLOAD Jan 20 06:48:07.326000 audit: BPF prog-id=28 op=LOAD Jan 20 06:48:07.326000 audit: BPF prog-id=29 op=LOAD Jan 20 06:48:07.328495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:48:07.368106 systemd-udevd[1536]: Using default interface naming scheme 'v257'. Jan 20 06:48:07.472866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:48:07.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.474000 audit: BPF prog-id=30 op=LOAD Jan 20 06:48:07.477339 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:48:07.526951 kernel: loop4: detected capacity change from 0 to 50784 Jan 20 06:48:07.555612 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 06:48:07.557218 (udev-worker)[1554]: Network interface NamePolicy= disabled on kernel command line. Jan 20 06:48:07.596879 systemd-networkd[1540]: lo: Link UP Jan 20 06:48:07.596891 systemd-networkd[1540]: lo: Gained carrier Jan 20 06:48:07.598181 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:48:07.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.599771 systemd[1]: Reached target network.target - Network. 
Jan 20 06:48:07.603615 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 06:48:07.607460 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:48:07.607631 systemd-networkd[1540]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:48:07.608727 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 06:48:07.610627 systemd-networkd[1540]: eth0: Link UP Jan 20 06:48:07.611030 systemd-networkd[1540]: eth0: Gained carrier Jan 20 06:48:07.611147 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:48:07.625556 systemd-networkd[1540]: eth0: DHCPv4 address 172.31.26.220/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 20 06:48:07.663887 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 06:48:07.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.673188 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 06:48:07.673294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 06:48:07.677973 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 20 06:48:07.688170 kernel: ACPI: button: Power Button [PWRF] Jan 20 06:48:07.692956 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 20 06:48:07.705948 kernel: ACPI: button: Sleep Button [SLPF] Jan 20 06:48:07.825860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:48:07.867130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:48:07.868283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:48:07.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:07.872107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:48:07.877084 kernel: loop5: detected capacity change from 0 to 111560 Jan 20 06:48:07.903939 kernel: loop6: detected capacity change from 0 to 73176 Jan 20 06:48:07.925951 kernel: loop7: detected capacity change from 0 to 224512 Jan 20 06:48:07.957953 kernel: loop1: detected capacity change from 0 to 50784 Jan 20 06:48:07.977360 (sd-merge)[1586]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 20 06:48:07.981332 (sd-merge)[1586]: Merged extensions into '/usr'. Jan 20 06:48:07.992639 systemd[1]: Reload requested from client PID 1491 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 06:48:07.992661 systemd[1]: Reloading... 
Jan 20 06:48:08.108944 zram_generator::config[1660]: No configuration found. Jan 20 06:48:08.456145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 20 06:48:08.457412 systemd[1]: Reloading finished in 463 ms. Jan 20 06:48:08.478164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 06:48:08.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.479275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:48:08.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.525390 systemd[1]: Starting ensure-sysext.service... Jan 20 06:48:08.529113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 06:48:08.530826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:48:08.533000 audit: BPF prog-id=31 op=LOAD Jan 20 06:48:08.533000 audit: BPF prog-id=21 op=UNLOAD Jan 20 06:48:08.534000 audit: BPF prog-id=32 op=LOAD Jan 20 06:48:08.534000 audit: BPF prog-id=15 op=UNLOAD Jan 20 06:48:08.534000 audit: BPF prog-id=33 op=LOAD Jan 20 06:48:08.534000 audit: BPF prog-id=34 op=LOAD Jan 20 06:48:08.534000 audit: BPF prog-id=16 op=UNLOAD Jan 20 06:48:08.534000 audit: BPF prog-id=17 op=UNLOAD Jan 20 06:48:08.536000 audit: BPF prog-id=35 op=LOAD Jan 20 06:48:08.536000 audit: BPF prog-id=25 op=UNLOAD Jan 20 06:48:08.536000 audit: BPF prog-id=36 op=LOAD Jan 20 06:48:08.536000 audit: BPF prog-id=37 op=LOAD Jan 20 06:48:08.536000 audit: BPF prog-id=26 op=UNLOAD Jan 20 06:48:08.536000 audit: BPF prog-id=27 op=UNLOAD Jan 20 06:48:08.537000 audit: BPF prog-id=38 op=LOAD Jan 20 06:48:08.537000 audit: BPF prog-id=18 op=UNLOAD Jan 20 06:48:08.539000 audit: BPF prog-id=39 op=LOAD Jan 20 06:48:08.539000 audit: BPF prog-id=40 op=LOAD Jan 20 06:48:08.539000 audit: BPF prog-id=19 op=UNLOAD Jan 20 06:48:08.539000 audit: BPF prog-id=20 op=UNLOAD Jan 20 06:48:08.539000 audit: BPF prog-id=41 op=LOAD Jan 20 06:48:08.539000 audit: BPF prog-id=42 op=LOAD Jan 20 06:48:08.539000 audit: BPF prog-id=28 op=UNLOAD Jan 20 06:48:08.539000 audit: BPF prog-id=29 op=UNLOAD Jan 20 06:48:08.540000 audit: BPF prog-id=43 op=LOAD Jan 20 06:48:08.540000 audit: BPF prog-id=22 op=UNLOAD Jan 20 06:48:08.540000 audit: BPF prog-id=44 op=LOAD Jan 20 06:48:08.540000 audit: BPF prog-id=45 op=LOAD Jan 20 06:48:08.540000 audit: BPF prog-id=23 op=UNLOAD Jan 20 06:48:08.540000 audit: BPF prog-id=24 op=UNLOAD Jan 20 06:48:08.541000 audit: BPF prog-id=46 op=LOAD Jan 20 06:48:08.541000 audit: BPF prog-id=30 op=UNLOAD Jan 20 06:48:08.549379 systemd[1]: Reload requested from client PID 1759 ('systemctl') (unit ensure-sysext.service)... Jan 20 06:48:08.549519 systemd[1]: Reloading... Jan 20 06:48:08.558427 systemd-tmpfiles[1761]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 06:48:08.558480 systemd-tmpfiles[1761]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 20 06:48:08.559282 systemd-tmpfiles[1761]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 06:48:08.560544 systemd-tmpfiles[1761]: ACLs are not supported, ignoring. Jan 20 06:48:08.560610 systemd-tmpfiles[1761]: ACLs are not supported, ignoring. Jan 20 06:48:08.567429 systemd-tmpfiles[1761]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:48:08.567581 systemd-tmpfiles[1761]: Skipping /boot Jan 20 06:48:08.581511 systemd-tmpfiles[1761]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:48:08.581529 systemd-tmpfiles[1761]: Skipping /boot Jan 20 06:48:08.625943 zram_generator::config[1793]: No configuration found. Jan 20 06:48:08.869355 systemd[1]: Reloading finished in 319 ms. Jan 20 06:48:08.890000 audit: BPF prog-id=47 op=LOAD Jan 20 06:48:08.890000 audit: BPF prog-id=43 op=UNLOAD Jan 20 06:48:08.891000 audit: BPF prog-id=48 op=LOAD Jan 20 06:48:08.891000 audit: BPF prog-id=49 op=LOAD Jan 20 06:48:08.891000 audit: BPF prog-id=44 op=UNLOAD Jan 20 06:48:08.891000 audit: BPF prog-id=45 op=UNLOAD Jan 20 06:48:08.891000 audit: BPF prog-id=50 op=LOAD Jan 20 06:48:08.891000 audit: BPF prog-id=31 op=UNLOAD Jan 20 06:48:08.892000 audit: BPF prog-id=51 op=LOAD Jan 20 06:48:08.892000 audit: BPF prog-id=46 op=UNLOAD Jan 20 06:48:08.893000 audit: BPF prog-id=52 op=LOAD Jan 20 06:48:08.893000 audit: BPF prog-id=38 op=UNLOAD Jan 20 06:48:08.894000 audit: BPF prog-id=53 op=LOAD Jan 20 06:48:08.894000 audit: BPF prog-id=54 op=LOAD Jan 20 06:48:08.894000 audit: BPF prog-id=39 op=UNLOAD Jan 20 06:48:08.894000 audit: BPF prog-id=40 op=UNLOAD Jan 20 06:48:08.894000 audit: BPF prog-id=55 op=LOAD Jan 20 06:48:08.894000 audit: BPF prog-id=35 op=UNLOAD Jan 20 06:48:08.895000 audit: BPF prog-id=56 op=LOAD Jan 20 06:48:08.899000 audit: BPF prog-id=57 op=LOAD Jan 20 06:48:08.899000 audit: BPF prog-id=36 op=UNLOAD Jan 20 06:48:08.899000 audit: BPF prog-id=37 op=UNLOAD Jan 20 06:48:08.899000 audit: BPF prog-id=58 op=LOAD Jan 20 06:48:08.899000 audit: BPF prog-id=59 op=LOAD Jan 20 06:48:08.899000 audit: BPF prog-id=41 op=UNLOAD Jan 20 06:48:08.899000 audit: BPF prog-id=42 op=UNLOAD Jan 20 06:48:08.900000 audit: BPF prog-id=60 op=LOAD Jan 20 06:48:08.900000 audit: BPF prog-id=32 op=UNLOAD Jan 20 06:48:08.900000 audit: BPF prog-id=61 op=LOAD Jan 20 06:48:08.900000 audit: BPF prog-id=62 op=LOAD Jan 20 06:48:08.900000 audit: BPF prog-id=33 op=UNLOAD Jan 20 06:48:08.900000 audit: BPF prog-id=34 op=UNLOAD Jan 20 06:48:08.903839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 06:48:08.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.904773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:48:08.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.915324 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:48:08.920061 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 20 06:48:08.922169 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 06:48:08.926208 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 06:48:08.929286 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 06:48:08.933557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.933738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:48:08.938032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:48:08.941502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:48:08.943212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:48:08.943746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:48:08.943976 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:48:08.944073 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:48:08.944162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.950724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.950949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:48:08.951131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:48:08.951290 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:48:08.951377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:48:08.951461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.956576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.956900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:48:08.958000 audit[1855]: SYSTEM_BOOT pid=1855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.961863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:48:08.963125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 20 06:48:08.963324 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:48:08.963433 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:48:08.963587 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 06:48:08.964119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:48:08.965394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:48:08.966760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:48:08.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.973196 systemd[1]: Finished ensure-sysext.service. Jan 20 06:48:08.979720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:48:08.980030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:48:08.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.981084 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:48:08.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.981304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:48:08.982209 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:48:08.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:48:08.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:08.982434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:48:08.984971 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 06:48:08.992398 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:48:08.992479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:48:09.081507 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 06:48:09.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:48:09.144000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:48:09.144000 audit[1886]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea6aff8f0 a2=420 a3=0 items=0 ppid=1851 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.144000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:48:09.146189 augenrules[1886]: No rules Jan 20 06:48:09.147136 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:48:09.147448 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:48:09.284363 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 06:48:09.285059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:48:09.629200 systemd-networkd[1540]: eth0: Gained IPv6LL Jan 20 06:48:09.634065 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 06:48:09.635018 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 06:48:11.728824 ldconfig[1853]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 06:48:11.734203 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 06:48:11.736064 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 06:48:11.756465 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 06:48:11.757168 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:48:11.757709 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 06:48:11.758141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 06:48:11.758629 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jan 20 06:48:11.759149 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 06:48:11.759573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 06:48:11.759942 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 06:48:11.760330 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 06:48:11.760634 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 06:48:11.761036 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 06:48:11.761074 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:48:11.761447 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:48:11.763075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 06:48:11.764977 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 06:48:11.767611 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 06:48:11.768188 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 06:48:11.768544 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 06:48:11.771077 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 06:48:11.771817 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 06:48:11.772969 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 06:48:11.774243 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:48:11.774721 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:48:11.775131 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:48:11.775168 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:48:11.776338 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 06:48:11.779088 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 06:48:11.781129 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 06:48:11.786866 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 06:48:11.790252 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 06:48:11.793195 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 06:48:11.793601 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 06:48:11.796115 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 06:48:11.800846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:11.805146 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 06:48:11.808751 systemd[1]: Started ntpd.service - Network Time Service. Jan 20 06:48:11.812175 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 06:48:11.814098 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 06:48:11.820133 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 20 06:48:11.828680 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 06:48:11.836296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 06:48:11.846055 jq[1903]: false Jan 20 06:48:11.852532 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Refreshing passwd entry cache Jan 20 06:48:11.849463 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 06:48:11.848874 oslogin_cache_refresh[1905]: Refreshing passwd entry cache Jan 20 06:48:11.849879 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 06:48:11.850433 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 06:48:11.852175 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 06:48:11.869614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 06:48:11.877731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 06:48:11.880282 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 06:48:11.880523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 06:48:11.885336 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Failure getting users, quitting Jan 20 06:48:11.885336 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:48:11.885329 oslogin_cache_refresh[1905]: Failure getting users, quitting Jan 20 06:48:11.885499 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Refreshing group entry cache Jan 20 06:48:11.885348 oslogin_cache_refresh[1905]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:48:11.885395 oslogin_cache_refresh[1905]: Refreshing group entry cache Jan 20 06:48:11.890880 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 06:48:11.892274 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 06:48:11.898606 extend-filesystems[1904]: Found /dev/nvme0n1p6 Jan 20 06:48:11.903067 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Failure getting groups, quitting Jan 20 06:48:11.903067 google_oslogin_nss_cache[1905]: oslogin_cache_refresh[1905]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:48:11.900732 oslogin_cache_refresh[1905]: Failure getting groups, quitting Jan 20 06:48:11.900745 oslogin_cache_refresh[1905]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:48:11.908205 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 06:48:11.912199 extend-filesystems[1904]: Found /dev/nvme0n1p9 Jan 20 06:48:11.913119 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 06:48:11.917300 jq[1924]: true Jan 20 06:48:11.941516 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 06:48:11.943016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 20 06:48:11.944582 extend-filesystems[1904]: Checking size of /dev/nvme0n1p9 Jan 20 06:48:11.946139 update_engine[1919]: I20260120 06:48:11.946063 1919 main.cc:92] Flatcar Update Engine starting Jan 20 06:48:11.957633 tar[1931]: linux-amd64/LICENSE Jan 20 06:48:11.957633 tar[1931]: linux-amd64/helm Jan 20 06:48:11.961624 ntpd[1908]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 03:41:42 UTC 2026 (1): Starting Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 03:41:42 UTC 2026 (1): Starting Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: ---------------------------------------------------- Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: corporation. Support and training for ntp-4 are Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: available at https://www.nwtime.org/support Jan 20 06:48:11.963141 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: ---------------------------------------------------- Jan 20 06:48:11.961684 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 20 06:48:11.961692 ntpd[1908]: ---------------------------------------------------- Jan 20 06:48:11.961698 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Jan 20 06:48:11.961705 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 20 06:48:11.961712 ntpd[1908]: corporation. Support and training for ntp-4 are Jan 20 06:48:11.961720 ntpd[1908]: available at https://www.nwtime.org/support Jan 20 06:48:11.961726 ntpd[1908]: ---------------------------------------------------- Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: proto: precision = 0.059 usec (-24) Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: basedate set to 2026-01-08 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: gps base set to 2026-01-11 (week 2401) Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen normally on 3 eth0 172.31.26.220:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen normally on 4 lo [::1]:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listen normally on 5 eth0 [fe80::477:87ff:fea4:a3eb%2]:123 Jan 20 06:48:11.973054 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: Listening on routing socket on fd #22 for interface updates Jan 20 06:48:11.968417 ntpd[1908]: proto: precision = 0.059 usec (-24) Jan 20 06:48:11.969659 ntpd[1908]: basedate set to 2026-01-08 Jan 20 06:48:11.969677 ntpd[1908]: gps base set to 2026-01-11 (week 2401) Jan 20 06:48:11.969789 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Jan 20 06:48:11.969811 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 20 06:48:11.970001 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Jan 20 06:48:11.970022 ntpd[1908]: Listen normally on 3 eth0 172.31.26.220:123 Jan 20 06:48:11.970044 ntpd[1908]: Listen normally on 4 lo [::1]:123 Jan 20 
06:48:11.970065 ntpd[1908]: Listen normally on 5 eth0 [fe80::477:87ff:fea4:a3eb%2]:123 Jan 20 06:48:11.970084 ntpd[1908]: Listening on routing socket on fd #22 for interface updates Jan 20 06:48:11.978995 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 06:48:11.983090 jq[1952]: true Jan 20 06:48:11.993443 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 06:48:11.996125 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 06:48:11.996125 ntpd[1908]: 20 Jan 06:48:11 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 06:48:11.993481 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 20 06:48:12.023754 extend-filesystems[1904]: Resized partition /dev/nvme0n1p9 Jan 20 06:48:12.029950 extend-filesystems[1977]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 06:48:12.035545 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 20 06:48:12.052626 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 20 06:48:12.071237 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 20 06:48:12.071590 extend-filesystems[1977]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 20 06:48:12.071590 extend-filesystems[1977]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 20 06:48:12.071590 extend-filesystems[1977]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 20 06:48:12.078118 extend-filesystems[1904]: Resized filesystem in /dev/nvme0n1p9 Jan 20 06:48:12.073325 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 06:48:12.073689 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 06:48:12.083288 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 20 06:48:12.127373 systemd-logind[1916]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 06:48:12.127806 systemd-logind[1916]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 20 06:48:12.127908 systemd-logind[1916]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 06:48:12.130273 systemd-logind[1916]: New seat seat0. Jan 20 06:48:12.131685 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 06:48:12.154184 bash[1996]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:48:12.159175 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 06:48:12.164807 systemd[1]: Starting sshkeys.service... 
Jan 20 06:48:12.196270 coreos-metadata[1900]: Jan 20 06:48:12.193 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 20 06:48:12.196270 coreos-metadata[1900]: Jan 20 06:48:12.195 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 20 06:48:12.197966 coreos-metadata[1900]: Jan 20 06:48:12.197 INFO Fetch successful Jan 20 06:48:12.198060 coreos-metadata[1900]: Jan 20 06:48:12.197 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 20 06:48:12.200460 coreos-metadata[1900]: Jan 20 06:48:12.199 INFO Fetch successful Jan 20 06:48:12.200460 coreos-metadata[1900]: Jan 20 06:48:12.199 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 20 06:48:12.202388 coreos-metadata[1900]: Jan 20 06:48:12.202 INFO Fetch successful Jan 20 06:48:12.202388 coreos-metadata[1900]: Jan 20 06:48:12.202 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 20 06:48:12.206404 dbus-daemon[1901]: [system] SELinux support is enabled Jan 20 06:48:12.206742 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 06:48:12.209946 coreos-metadata[1900]: Jan 20 06:48:12.209 INFO Fetch successful Jan 20 06:48:12.209946 coreos-metadata[1900]: Jan 20 06:48:12.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 20 06:48:12.232588 coreos-metadata[1900]: Jan 20 06:48:12.232 INFO Fetch failed with 404: resource not found Jan 20 06:48:12.232588 coreos-metadata[1900]: Jan 20 06:48:12.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 20 06:48:12.233662 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 06:48:12.233699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 06:48:12.234439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 06:48:12.234462 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 20 06:48:12.238369 coreos-metadata[1900]: Jan 20 06:48:12.235 INFO Fetch successful Jan 20 06:48:12.238369 coreos-metadata[1900]: Jan 20 06:48:12.235 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.238 INFO Fetch successful Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.238 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.241 INFO Fetch successful Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.241 INFO Fetch successful Jan 20 06:48:12.242944 coreos-metadata[1900]: Jan 20 06:48:12.241 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 20 06:48:12.243237 coreos-metadata[1900]: Jan 20 06:48:12.243 INFO Fetch successful Jan 20 06:48:12.250794 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 06:48:12.270112 dbus-daemon[1901]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1540 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 06:48:12.276298 systemd[1]: Started update-engine.service - Update Engine. Jan 20 06:48:12.285949 update_engine[1919]: I20260120 06:48:12.284215 1919 update_check_scheduler.cc:74] Next update check in 7m17s Jan 20 06:48:12.290898 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 06:48:12.338489 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 06:48:12.357741 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 06:48:12.366089 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 06:48:12.443197 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 06:48:12.445668 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 06:48:12.507014 amazon-ssm-agent[1991]: Initializing new seelog logger Jan 20 06:48:12.507014 amazon-ssm-agent[1991]: New Seelog Logger Creation Complete Jan 20 06:48:12.507014 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.507014 amazon-ssm-agent[1991]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.507014 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 processing appconfig overrides Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 processing appconfig overrides Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 processing appconfig overrides Jan 20 06:48:12.515644 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5091 INFO Proxy environment variables: Jan 20 06:48:12.529317 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.529317 amazon-ssm-agent[1991]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:12.529317 amazon-ssm-agent[1991]: 2026/01/20 06:48:12 processing appconfig overrides Jan 20 06:48:12.530350 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 06:48:12.532690 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 06:48:12.533964 dbus-daemon[1901]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2015 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 06:48:12.547734 systemd[1]: Starting polkit.service - Authorization Manager... Jan 20 06:48:12.614944 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5126 INFO https_proxy: Jan 20 06:48:12.721428 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5126 INFO http_proxy: Jan 20 06:48:12.733036 coreos-metadata[2028]: Jan 20 06:48:12.729 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 20 06:48:12.745012 coreos-metadata[2028]: Jan 20 06:48:12.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 20 06:48:12.747216 coreos-metadata[2028]: Jan 20 06:48:12.747 INFO Fetch successful Jan 20 06:48:12.747216 coreos-metadata[2028]: Jan 20 06:48:12.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 06:48:12.751325 coreos-metadata[2028]: Jan 20 06:48:12.750 INFO Fetch successful Jan 20 06:48:12.754370 unknown[2028]: wrote ssh authorized keys file for user: core Jan 20 06:48:12.835330 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5126 INFO no_proxy: Jan 20 06:48:12.858008 update-ssh-keys[2086]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:48:12.860631 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 06:48:12.861720 polkitd[2043]: Started polkitd version 126 Jan 20 06:48:12.867325 systemd[1]: Finished sshkeys.service. Jan 20 06:48:12.900448 polkitd[2043]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 06:48:12.900966 polkitd[2043]: Loading rules from directory /run/polkit-1/rules.d Jan 20 06:48:12.901036 polkitd[2043]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 06:48:12.901425 polkitd[2043]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 20 06:48:12.901452 polkitd[2043]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 06:48:12.901500 polkitd[2043]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 06:48:12.906375 polkitd[2043]: Finished loading, compiling and executing 2 rules Jan 20 06:48:12.911167 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 20 06:48:12.912630 dbus-daemon[1901]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 06:48:12.917170 polkitd[2043]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 06:48:12.936694 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5128 INFO Checking if agent identity type OnPrem can be assumed Jan 20 06:48:13.016211 systemd-hostnamed[2015]: Hostname set to <ip-172-31-26-220> (transient) Jan 20 06:48:13.019107 systemd-resolved[1510]: System hostname changed to 'ip-172-31-26-220'. Jan 20 06:48:13.047946 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.5130 INFO Checking if agent identity type EC2 can be assumed Jan 20 06:48:13.128449 containerd[1959]: time="2026-01-20T06:48:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 06:48:13.130616 containerd[1959]: time="2026-01-20T06:48:13.130569963Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 06:48:13.144039 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9687 INFO Agent will take identity from EC2 Jan 20 06:48:13.158155 containerd[1959]: time="2026-01-20T06:48:13.158103992Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.238µs" Jan 20 06:48:13.158331 containerd[1959]: time="2026-01-20T06:48:13.158297281Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 06:48:13.158449 containerd[1959]: time="2026-01-20T06:48:13.158431021Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 06:48:13.158543 containerd[1959]: time="2026-01-20T06:48:13.158527696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 06:48:13.158805 containerd[1959]: time="2026-01-20T06:48:13.158781825Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.159963790Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160090267Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160107569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160366972Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160384829Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160400070Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: 
time="2026-01-20T06:48:13.160412208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160582195Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160596482Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160695146Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161014 containerd[1959]: time="2026-01-20T06:48:13.160900008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161439 containerd[1959]: time="2026-01-20T06:48:13.160953463Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:48:13.161439 containerd[1959]: time="2026-01-20T06:48:13.160969322Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 06:48:13.161645 containerd[1959]: time="2026-01-20T06:48:13.161604247Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 06:48:13.163152 containerd[1959]: time="2026-01-20T06:48:13.163121651Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 06:48:13.163634 containerd[1959]: time="2026-01-20T06:48:13.163592392Z" level=info msg="metadata content store policy set" policy=shared Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171581878Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171678550Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171814128Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171832048Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171849281Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171866725Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171882848Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171897194Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171913304Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171943713Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171961398Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171977927Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.171992854Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 06:48:13.172565 containerd[1959]: time="2026-01-20T06:48:13.172013587Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172174126Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172201425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172231140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172246696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172263511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172279584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172295948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172319912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172334409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172349114Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172363079Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172393404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172462308Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.172480596Z" level=info msg="Start snapshots syncer" Jan 20 06:48:13.175173 containerd[1959]: time="2026-01-20T06:48:13.173747920Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 06:48:13.175707 containerd[1959]: time="2026-01-20T06:48:13.174782894Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 06:48:13.175707 containerd[1959]: time="2026-01-20T06:48:13.174857440Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 06:48:13.176945 containerd[1959]: time="2026-01-20T06:48:13.176551279Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177160757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177198273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177218946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177234509Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177252737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177275873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177297020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 
06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177313157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 06:48:13.178126 containerd[1959]: time="2026-01-20T06:48:13.177330458Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178569837Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178612303Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178628328Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178645387Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178659131Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178674922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178695147Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178720758Z" level=info msg="runtime interface created" Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178729383Z" level=info msg="created NRI interface" Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178743154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178767811Z" level=info msg="Connect containerd service" Jan 20 06:48:13.180200 containerd[1959]: time="2026-01-20T06:48:13.178805250Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 06:48:13.181630 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 06:48:13.183385 containerd[1959]: time="2026-01-20T06:48:13.182569061Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 06:48:13.243996 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9703 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 20 06:48:13.346714 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9703 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 20 06:48:13.395494 sshd_keygen[1969]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 06:48:13.444000 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9703 INFO [amazon-ssm-agent] Starting Core Agent Jan 20 06:48:13.454073 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 06:48:13.460059 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 20 06:48:13.499303 amazon-ssm-agent[1991]: 2026/01/20 06:48:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:13.499303 amazon-ssm-agent[1991]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 20 06:48:13.499303 amazon-ssm-agent[1991]: 2026/01/20 06:48:13 processing appconfig overrides Jan 20 06:48:13.506063 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 06:48:13.507286 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 06:48:13.513054 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 06:48:13.522174 tar[1931]: linux-amd64/README.md Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9703 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9703 INFO [Registrar] Starting registrar module Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9845 INFO [EC2Identity] Checking disk for registration info Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9845 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:12.9845 INFO [EC2Identity] Generating registration keypair Jan 20 06:48:13.538076 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4486 INFO [EC2Identity] Checking write access before registering Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4492 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4982 INFO [EC2Identity] EC2 registration was successful. Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4982 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4983 INFO [CredentialRefresher] credentialRefresher has started Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.4983 INFO [CredentialRefresher] Starting credentials refresher loop Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.5377 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 20 06:48:13.538909 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.5380 INFO [CredentialRefresher] Credentials ready Jan 20 06:48:13.541158 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 06:48:13.544075 amazon-ssm-agent[1991]: 2026-01-20 06:48:13.5382 INFO [CredentialRefresher] Next credential rotation will be in 29.999992302066666 minutes Jan 20 06:48:13.545567 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 06:48:13.549309 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 06:48:13.553168 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 06:48:13.554451 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 06:48:13.627021 containerd[1959]: time="2026-01-20T06:48:13.626974569Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 06:48:13.627214 containerd[1959]: time="2026-01-20T06:48:13.626992129Z" level=info msg="Start subscribing containerd event" Jan 20 06:48:13.627253 containerd[1959]: time="2026-01-20T06:48:13.627230935Z" level=info msg="Start recovering state" Jan 20 06:48:13.627330 containerd[1959]: time="2026-01-20T06:48:13.627195690Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 20 06:48:13.627399 containerd[1959]: time="2026-01-20T06:48:13.627343598Z" level=info msg="Start event monitor" Jan 20 06:48:13.627481 containerd[1959]: time="2026-01-20T06:48:13.627471392Z" level=info msg="Start cni network conf syncer for default" Jan 20 06:48:13.627523 containerd[1959]: time="2026-01-20T06:48:13.627515333Z" level=info msg="Start streaming server" Jan 20 06:48:13.627576 containerd[1959]: time="2026-01-20T06:48:13.627568163Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 06:48:13.627616 containerd[1959]: time="2026-01-20T06:48:13.627609157Z" level=info msg="runtime interface starting up..." Jan 20 06:48:13.627666 containerd[1959]: time="2026-01-20T06:48:13.627658115Z" level=info msg="starting plugins..." Jan 20 06:48:13.627724 containerd[1959]: time="2026-01-20T06:48:13.627709080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 06:48:13.628541 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 06:48:13.629450 containerd[1959]: time="2026-01-20T06:48:13.628679379Z" level=info msg="containerd successfully booted in 0.500809s" Jan 20 06:48:14.551313 amazon-ssm-agent[1991]: 2026-01-20 06:48:14.5511 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 20 06:48:14.651942 amazon-ssm-agent[1991]: 2026-01-20 06:48:14.5541 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2184) started Jan 20 06:48:14.753231 amazon-ssm-agent[1991]: 2026-01-20 06:48:14.5542 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 20 06:48:16.935158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:16.936756 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 06:48:16.937772 systemd[1]: Startup finished in 3.801s (kernel) + 11.988s (initrd) + 13.085s (userspace) = 28.874s. Jan 20 06:48:16.942999 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:48:18.404525 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 06:48:18.405779 systemd[1]: Started sshd@0-172.31.26.220:22-68.220.241.50:38804.service - OpenSSH per-connection server daemon (68.220.241.50:38804). Jan 20 06:48:18.949372 kubelet[2200]: E0120 06:48:18.949309 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:48:18.952129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:48:18.952318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:48:18.952835 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 263.4M memory peak. Jan 20 06:48:19.981338 sshd[2210]: Accepted publickey for core from 68.220.241.50 port 38804 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:48:19.981431 systemd-resolved[1510]: Clock change detected. Flushing caches. 
Jan 20 06:48:19.985225 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:19.993207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 06:48:19.994576 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 06:48:20.003568 systemd-logind[1916]: New session 1 of user core. Jan 20 06:48:20.018811 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 06:48:20.022301 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 06:48:20.040551 (systemd)[2218]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:20.043970 systemd-logind[1916]: New session 2 of user core. Jan 20 06:48:20.203960 systemd[2218]: Queued start job for default target default.target. Jan 20 06:48:20.214657 systemd[2218]: Created slice app.slice - User Application Slice. Jan 20 06:48:20.214695 systemd[2218]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 06:48:20.214711 systemd[2218]: Reached target paths.target - Paths. Jan 20 06:48:20.214774 systemd[2218]: Reached target timers.target - Timers. Jan 20 06:48:20.216347 systemd[2218]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 06:48:20.217466 systemd[2218]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 06:48:20.229737 systemd[2218]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 06:48:20.231310 systemd[2218]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 06:48:20.231552 systemd[2218]: Reached target sockets.target - Sockets. Jan 20 06:48:20.231707 systemd[2218]: Reached target basic.target - Basic System. Jan 20 06:48:20.231869 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 06:48:20.232918 systemd[2218]: Reached target default.target - Main User Target. Jan 20 06:48:20.233029 systemd[2218]: Startup finished in 182ms. Jan 20 06:48:20.237878 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 06:48:20.485095 systemd[1]: Started sshd@1-172.31.26.220:22-68.220.241.50:38818.service - OpenSSH per-connection server daemon (68.220.241.50:38818). Jan 20 06:48:20.923404 sshd[2232]: Accepted publickey for core from 68.220.241.50 port 38818 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:48:20.924861 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:20.930255 systemd-logind[1916]: New session 3 of user core. Jan 20 06:48:20.938721 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 06:48:21.161259 sshd[2236]: Connection closed by 68.220.241.50 port 38818 Jan 20 06:48:21.161785 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Jan 20 06:48:21.166423 systemd-logind[1916]: Session 3 logged out. Waiting for processes to exit. Jan 20 06:48:21.166594 systemd[1]: sshd@1-172.31.26.220:22-68.220.241.50:38818.service: Deactivated successfully. Jan 20 06:48:21.168261 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 06:48:21.170100 systemd-logind[1916]: Removed session 3. Jan 20 06:48:21.247404 systemd[1]: Started sshd@2-172.31.26.220:22-68.220.241.50:38834.service - OpenSSH per-connection server daemon (68.220.241.50:38834). 
Jan 20 06:48:21.669475 sshd[2242]: Accepted publickey for core from 68.220.241.50 port 38834 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:48:21.670936 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:21.677415 systemd-logind[1916]: New session 4 of user core. Jan 20 06:48:21.686765 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 06:48:21.897212 sshd[2246]: Connection closed by 68.220.241.50 port 38834 Jan 20 06:48:21.897770 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Jan 20 06:48:21.902629 systemd-logind[1916]: Session 4 logged out. Waiting for processes to exit. Jan 20 06:48:21.902756 systemd[1]: sshd@2-172.31.26.220:22-68.220.241.50:38834.service: Deactivated successfully. Jan 20 06:48:21.904472 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 06:48:21.907006 systemd-logind[1916]: Removed session 4. Jan 20 06:48:21.984375 systemd[1]: Started sshd@3-172.31.26.220:22-68.220.241.50:38840.service - OpenSSH per-connection server daemon (68.220.241.50:38840). Jan 20 06:48:22.416714 sshd[2252]: Accepted publickey for core from 68.220.241.50 port 38840 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:48:22.418163 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:22.424414 systemd-logind[1916]: New session 5 of user core. Jan 20 06:48:22.430711 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 06:48:22.649735 sshd[2256]: Connection closed by 68.220.241.50 port 38840 Jan 20 06:48:22.651610 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Jan 20 06:48:22.656709 systemd[1]: sshd@3-172.31.26.220:22-68.220.241.50:38840.service: Deactivated successfully. Jan 20 06:48:22.658712 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 06:48:22.660164 systemd-logind[1916]: Session 5 logged out. Waiting for processes to exit. Jan 20 06:48:22.661412 systemd-logind[1916]: Removed session 5. Jan 20 06:48:22.762130 systemd[1]: Started sshd@4-172.31.26.220:22-68.220.241.50:38164.service - OpenSSH per-connection server daemon (68.220.241.50:38164). Jan 20 06:48:23.241508 sshd[2262]: Accepted publickey for core from 68.220.241.50 port 38164 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:48:23.242836 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:48:23.247771 systemd-logind[1916]: New session 6 of user core. Jan 20 06:48:23.254682 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 06:48:23.536466 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 06:48:23.536932 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:48:24.878962 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 06:48:24.893135 (dockerd)[2285]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 06:48:26.021122 dockerd[2285]: time="2026-01-20T06:48:26.020383996Z" level=info msg="Starting up" Jan 20 06:48:26.023578 dockerd[2285]: time="2026-01-20T06:48:26.023529364Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 06:48:26.037063 dockerd[2285]: time="2026-01-20T06:48:26.037013882Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 06:48:26.053355 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport954341067-merged.mount: Deactivated successfully. Jan 20 06:48:26.087531 dockerd[2285]: time="2026-01-20T06:48:26.087486814Z" level=info msg="Loading containers: start." Jan 20 06:48:26.099473 kernel: Initializing XFRM netlink socket Jan 20 06:48:26.443645 (udev-worker)[2306]: Network interface NamePolicy= disabled on kernel command line. Jan 20 06:48:26.493353 systemd-networkd[1540]: docker0: Link UP Jan 20 06:48:26.498551 dockerd[2285]: time="2026-01-20T06:48:26.498511652Z" level=info msg="Loading containers: done." Jan 20 06:48:26.516604 dockerd[2285]: time="2026-01-20T06:48:26.516536990Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 06:48:26.516890 dockerd[2285]: time="2026-01-20T06:48:26.516627338Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 06:48:26.516890 dockerd[2285]: time="2026-01-20T06:48:26.516717233Z" level=info msg="Initializing buildkit" Jan 20 06:48:26.544648 dockerd[2285]: time="2026-01-20T06:48:26.544597391Z" level=info msg="Completed buildkit initialization" Jan 20 06:48:26.554391 dockerd[2285]: time="2026-01-20T06:48:26.554330911Z" level=info msg="Daemon has completed initialization" Jan 20 06:48:26.554391 dockerd[2285]: time="2026-01-20T06:48:26.554386387Z" level=info msg="API listen on /run/docker.sock" Jan 20 06:48:26.554817 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 06:48:27.050351 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck626718625-merged.mount: Deactivated successfully. Jan 20 06:48:28.356407 containerd[1959]: time="2026-01-20T06:48:28.356350411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 06:48:28.922354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705952311.mount: Deactivated successfully. Jan 20 06:48:30.120418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 06:48:30.123923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 06:48:30.142629 containerd[1959]: time="2026-01-20T06:48:30.142590827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:30.146049 containerd[1959]: time="2026-01-20T06:48:30.145873577Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 20 06:48:30.147871 containerd[1959]: time="2026-01-20T06:48:30.147814904Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:30.152474 containerd[1959]: time="2026-01-20T06:48:30.152395963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:30.154983 containerd[1959]: time="2026-01-20T06:48:30.154588679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.798194171s" Jan 20 06:48:30.154983 containerd[1959]: time="2026-01-20T06:48:30.154630999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 06:48:30.155112 containerd[1959]: time="2026-01-20T06:48:30.155086910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 06:48:30.389209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:30.399891 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:48:30.452373 kubelet[2559]: E0120 06:48:30.452300 2559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:48:30.456035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:48:30.456182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:48:30.456640 systemd[1]: kubelet.service: Consumed 182ms CPU time, 108.8M memory peak. 
Jan 20 06:48:31.774902 containerd[1959]: time="2026-01-20T06:48:31.774848648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:31.776988 containerd[1959]: time="2026-01-20T06:48:31.776940945Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 06:48:31.778801 containerd[1959]: time="2026-01-20T06:48:31.778734700Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:31.782459 containerd[1959]: time="2026-01-20T06:48:31.781419012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:31.782810 containerd[1959]: time="2026-01-20T06:48:31.782777244Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.627661242s" Jan 20 06:48:31.782919 containerd[1959]: time="2026-01-20T06:48:31.782902433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 06:48:31.783620 containerd[1959]: time="2026-01-20T06:48:31.783588232Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 06:48:33.219577 containerd[1959]: time="2026-01-20T06:48:33.218499762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:33.219577 containerd[1959]: time="2026-01-20T06:48:33.219542013Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 06:48:33.220494 containerd[1959]: time="2026-01-20T06:48:33.220461728Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:33.223171 containerd[1959]: time="2026-01-20T06:48:33.223124496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:33.224154 containerd[1959]: time="2026-01-20T06:48:33.224126571Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.440509683s" Jan 20 06:48:33.224257 containerd[1959]: time="2026-01-20T06:48:33.224244710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 06:48:33.225029 
containerd[1959]: time="2026-01-20T06:48:33.224956895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 06:48:34.231953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451197219.mount: Deactivated successfully. Jan 20 06:48:34.781308 containerd[1959]: time="2026-01-20T06:48:34.781243497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:34.782306 containerd[1959]: time="2026-01-20T06:48:34.782251779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 06:48:34.784760 containerd[1959]: time="2026-01-20T06:48:34.783838956Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:34.785832 containerd[1959]: time="2026-01-20T06:48:34.785798450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:34.786357 containerd[1959]: time="2026-01-20T06:48:34.786331257Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.561185378s" Jan 20 06:48:34.786468 containerd[1959]: time="2026-01-20T06:48:34.786451476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 06:48:34.786970 containerd[1959]: time="2026-01-20T06:48:34.786903336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 06:48:35.247569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146210449.mount: Deactivated successfully. 
Jan 20 06:48:36.322878 containerd[1959]: time="2026-01-20T06:48:36.322815567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:36.324086 containerd[1959]: time="2026-01-20T06:48:36.323910938Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17815171" Jan 20 06:48:36.325311 containerd[1959]: time="2026-01-20T06:48:36.325273346Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:36.328046 containerd[1959]: time="2026-01-20T06:48:36.327997970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:36.329254 containerd[1959]: time="2026-01-20T06:48:36.329098723Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.542010111s" Jan 20 06:48:36.329254 containerd[1959]: time="2026-01-20T06:48:36.329140701Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 06:48:36.329633 containerd[1959]: time="2026-01-20T06:48:36.329616091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 06:48:36.753604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897373167.mount: Deactivated successfully. 
Jan 20 06:48:36.759680 containerd[1959]: time="2026-01-20T06:48:36.759627199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:48:36.760780 containerd[1959]: time="2026-01-20T06:48:36.760470264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 20 06:48:36.761795 containerd[1959]: time="2026-01-20T06:48:36.761749025Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:48:36.764195 containerd[1959]: time="2026-01-20T06:48:36.764159482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:48:36.765299 containerd[1959]: time="2026-01-20T06:48:36.765102572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.405545ms" Jan 20 06:48:36.765299 containerd[1959]: time="2026-01-20T06:48:36.765142981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 06:48:36.765709 containerd[1959]: time="2026-01-20T06:48:36.765672531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 06:48:37.243134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178505832.mount: Deactivated successfully. 
Jan 20 06:48:39.597402 containerd[1959]: time="2026-01-20T06:48:39.597306674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:39.600169 containerd[1959]: time="2026-01-20T06:48:39.599414145Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 20 06:48:39.602132 containerd[1959]: time="2026-01-20T06:48:39.602058551Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:39.606834 containerd[1959]: time="2026-01-20T06:48:39.606781916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:39.607791 containerd[1959]: time="2026-01-20T06:48:39.607671201Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.841966428s" Jan 20 06:48:39.607791 containerd[1959]: time="2026-01-20T06:48:39.607704166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 06:48:40.620184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 06:48:40.624737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:40.942599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:40.955272 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:48:41.036593 kubelet[2723]: E0120 06:48:41.036518 2723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:48:41.041917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:48:41.042284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:48:41.043164 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.5M memory peak. Jan 20 06:48:42.424600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:42.424875 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.5M memory peak. Jan 20 06:48:42.427770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:42.465726 systemd[1]: Reload requested from client PID 2736 ('systemctl') (unit session-6.scope)... Jan 20 06:48:42.465745 systemd[1]: Reloading... Jan 20 06:48:42.616542 zram_generator::config[2784]: No configuration found. Jan 20 06:48:42.903609 systemd[1]: Reloading finished in 437 ms. Jan 20 06:48:42.978211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 06:48:42.978308 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 20 06:48:42.978929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:42.978982 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.2M memory peak. Jan 20 06:48:42.981420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:43.244068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:43.254911 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:48:43.307036 kubelet[2847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:48:43.307702 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:48:43.307702 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:48:43.310819 kubelet[2847]: I0120 06:48:43.310738 2847 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:48:43.595699 kubelet[2847]: I0120 06:48:43.595577 2847 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:48:43.595699 kubelet[2847]: I0120 06:48:43.595610 2847 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:48:43.596121 kubelet[2847]: I0120 06:48:43.595903 2847 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:48:43.649377 kubelet[2847]: E0120 06:48:43.649326 2847 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:43.653231 kubelet[2847]: I0120 06:48:43.653194 2847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:48:43.674527 kubelet[2847]: I0120 06:48:43.674502 2847 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:48:43.683458 kubelet[2847]: I0120 06:48:43.683221 2847 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 06:48:43.687494 kubelet[2847]: I0120 06:48:43.687253 2847 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:48:43.687833 kubelet[2847]: I0120 06:48:43.687638 2847 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:48:43.690383 kubelet[2847]: I0120 06:48:43.690338 2847 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:48:43.690383 kubelet[2847]: I0120 06:48:43.690375 2847 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:48:43.691948 kubelet[2847]: I0120 06:48:43.691908 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:48:43.700007 kubelet[2847]: I0120 06:48:43.699698 2847 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:48:43.700007 kubelet[2847]: I0120 06:48:43.699750 2847 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:48:43.702088 kubelet[2847]: I0120 06:48:43.702054 2847 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:48:43.702088 kubelet[2847]: I0120 06:48:43.702085 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:48:43.705458 kubelet[2847]: W0120 06:48:43.705384 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:43.705618 kubelet[2847]: E0120 06:48:43.705582 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:43.707695 kubelet[2847]: I0120 
06:48:43.707658 2847 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:48:43.713707 kubelet[2847]: I0120 06:48:43.713554 2847 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:48:43.714463 kubelet[2847]: W0120 06:48:43.714321 2847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 06:48:43.722746 kubelet[2847]: I0120 06:48:43.722484 2847 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:48:43.722746 kubelet[2847]: I0120 06:48:43.722543 2847 server.go:1287] "Started kubelet" Jan 20 06:48:43.726805 kubelet[2847]: W0120 06:48:43.726259 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:43.726805 kubelet[2847]: E0120 06:48:43.726332 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:43.726805 kubelet[2847]: I0120 06:48:43.726468 2847 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:48:43.731455 kubelet[2847]: I0120 06:48:43.731373 2847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:48:43.731964 kubelet[2847]: I0120 06:48:43.731798 2847 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:48:43.737682 kubelet[2847]: E0120 06:48:43.733541 2847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-220.188c5da6969bdfe6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-220,UID:ip-172-31-26-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-220,},FirstTimestamp:2026-01-20 06:48:43.722514406 +0000 UTC m=+0.462946044,LastTimestamp:2026-01-20 06:48:43.722514406 +0000 UTC m=+0.462946044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-220,}" Jan 20 06:48:43.741404 kubelet[2847]: I0120 06:48:43.740537 2847 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:48:43.742399 kubelet[2847]: I0120 06:48:43.742218 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:48:43.749992 kubelet[2847]: I0120 06:48:43.749951 2847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:48:43.754401 kubelet[2847]: I0120 06:48:43.753369 2847 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:48:43.754401 kubelet[2847]: E0120 06:48:43.753693 2847 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:43.754401 kubelet[2847]: I0120 06:48:43.754039 2847 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:48:43.754401 kubelet[2847]: I0120 06:48:43.754096 2847 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:48:43.760925 kubelet[2847]: I0120 06:48:43.760893 2847 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:48:43.761071 kubelet[2847]: I0120 06:48:43.761021 2847 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:48:43.764580 kubelet[2847]: W0120 06:48:43.764521 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:43.764766 kubelet[2847]: E0120 06:48:43.764593 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:43.764766 kubelet[2847]: E0120 06:48:43.764713 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": dial tcp 172.31.26.220:6443: connect: connection refused" interval="200ms" Jan 20 06:48:43.769228 kubelet[2847]: E0120 06:48:43.769114 2847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-220.188c5da6969bdfe6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-220,UID:ip-172-31-26-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-220,},FirstTimestamp:2026-01-20 06:48:43.722514406 +0000 UTC m=+0.462946044,LastTimestamp:2026-01-20 06:48:43.722514406 +0000 UTC m=+0.462946044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-220,}" Jan 20 06:48:43.772873 kubelet[2847]: I0120 06:48:43.772493 2847 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:48:43.779215 kubelet[2847]: I0120 06:48:43.779177 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:48:43.781894 kubelet[2847]: I0120 06:48:43.781861 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:48:43.782033 kubelet[2847]: I0120 06:48:43.782024 2847 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:48:43.782092 kubelet[2847]: I0120 06:48:43.782086 2847 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
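The nodeConfig value dumped by container_manager_linux.go a few lines above is a single JSON object, which makes the hard-eviction thresholds easy to miss inside the long line. A minimal standard-library sketch that pretty-prints them (the threshold values are copied verbatim from that line; the script itself is illustrative and not part of the kubelet):

```python
import json

# Trimmed excerpt of the "HardEvictionThresholds" field from the nodeConfig
# line above; the GracePeriod/MinReclaim keys are omitted for brevity.
node_config = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
]}
""")

for t in node_config["HardEvictionThresholds"]:
    # A threshold is either an absolute quantity (e.g. 100Mi) or a percentage.
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(f'{t["Signal"]:20} {t["Operator"]} {value}')
```

These are the signal/threshold pairs the eviction manager will enforce on this node.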
Jan 20 06:48:43.782133 kubelet[2847]: I0120 06:48:43.782128 2847 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:48:43.782481 kubelet[2847]: E0120 06:48:43.782219 2847 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:48:43.791131 kubelet[2847]: W0120 06:48:43.791087 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:43.791340 kubelet[2847]: E0120 06:48:43.791259 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:43.797799 kubelet[2847]: E0120 06:48:43.797766 2847 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:48:43.805488 kubelet[2847]: I0120 06:48:43.805351 2847 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:48:43.805488 kubelet[2847]: I0120 06:48:43.805369 2847 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:48:43.805488 kubelet[2847]: I0120 06:48:43.805389 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:48:43.824897 kubelet[2847]: I0120 06:48:43.824840 2847 policy_none.go:49] "None policy: Start" Jan 20 06:48:43.824897 kubelet[2847]: I0120 06:48:43.824897 2847 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:48:43.825067 kubelet[2847]: I0120 06:48:43.824919 2847 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:48:43.838834 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 06:48:43.850032 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 06:48:43.854086 kubelet[2847]: E0120 06:48:43.854053 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:43.864062 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 06:48:43.866532 kubelet[2847]: I0120 06:48:43.866432 2847 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:48:43.866709 kubelet[2847]: I0120 06:48:43.866633 2847 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:48:43.866709 kubelet[2847]: I0120 06:48:43.866646 2847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:48:43.867366 kubelet[2847]: I0120 06:48:43.867055 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:48:43.868011 kubelet[2847]: E0120 06:48:43.867997 2847 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 06:48:43.868150 kubelet[2847]: E0120 06:48:43.868140 2847 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-220\" not found" Jan 20 06:48:43.901029 systemd[1]: Created slice kubepods-burstable-pod83614d0a862c112565d5f8c3528dd43c.slice - libcontainer container kubepods-burstable-pod83614d0a862c112565d5f8c3528dd43c.slice. Jan 20 06:48:43.918796 kubelet[2847]: E0120 06:48:43.918578 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:43.922881 systemd[1]: Created slice kubepods-burstable-podeb7ef865947b9ed81e443ac59ec37d5e.slice - libcontainer container kubepods-burstable-podeb7ef865947b9ed81e443ac59ec37d5e.slice. Jan 20 06:48:43.933345 kubelet[2847]: E0120 06:48:43.933022 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:43.935969 systemd[1]: Created slice kubepods-burstable-pod43a86a769e1e7202935a7f036247c4e6.slice - libcontainer container kubepods-burstable-pod43a86a769e1e7202935a7f036247c4e6.slice. Jan 20 06:48:43.938333 kubelet[2847]: E0120 06:48:43.938287 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:43.965515 kubelet[2847]: E0120 06:48:43.965431 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": dial tcp 172.31.26.220:6443: connect: connection refused" interval="400ms" Jan 20 06:48:43.969359 kubelet[2847]: I0120 06:48:43.969333 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:43.969694 kubelet[2847]: E0120 06:48:43.969668 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.220:6443/api/v1/nodes\": dial tcp 172.31.26.220:6443: connect: connection refused" node="ip-172-31-26-220" Jan 20 06:48:44.056429 kubelet[2847]: I0120 06:48:44.056118 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:44.056429 kubelet[2847]: I0120 06:48:44.056173 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:44.056429 kubelet[2847]: I0120 06:48:44.056196 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-ca-certs\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:44.056429 
kubelet[2847]: I0120 06:48:44.056212 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:44.056429 kubelet[2847]: I0120 06:48:44.056228 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:44.056994 kubelet[2847]: I0120 06:48:44.056242 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:44.056994 kubelet[2847]: I0120 06:48:44.056257 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:44.056994 kubelet[2847]: I0120 06:48:44.056274 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:44.056994 kubelet[2847]: I0120 06:48:44.056290 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43a86a769e1e7202935a7f036247c4e6-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-220\" (UID: \"43a86a769e1e7202935a7f036247c4e6\") " pod="kube-system/kube-scheduler-ip-172-31-26-220" Jan 20 06:48:44.059945 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
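Everything failing around this point fails the same way: dial tcp 172.31.26.220:6443: connect: connection refused, i.e. nothing is listening on the API server port yet because this same kubelet is only now creating the kube-apiserver static pod from /etc/kubernetes/manifests. A probe as small as the following would confirm that from outside the kubelet (the address is taken from the log; the probe itself is an illustrative sketch, not something the kubelet runs):

```python
import socket

API_HOST, API_PORT = "172.31.26.220", 6443  # endpoint from the log above

def api_server_reachable(timeout: float = 2.0) -> bool:
    """Return True once something accepts TCP connections on the apiserver port."""
    try:
        with socket.create_connection((API_HOST, API_PORT), timeout=timeout):
            return True
    except OSError:
        # The "connect: connection refused" errors in the log correspond to
        # ConnectionRefusedError (a subclass of OSError) landing here.
        return False

if __name__ == "__main__":
    print(api_server_reachable())
```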
Jan 20 06:48:44.172472 kubelet[2847]: I0120 06:48:44.172237 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:44.172940 kubelet[2847]: E0120 06:48:44.172908 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.220:6443/api/v1/nodes\": dial tcp 172.31.26.220:6443: connect: connection refused" node="ip-172-31-26-220" Jan 20 06:48:44.221198 containerd[1959]: time="2026-01-20T06:48:44.221148798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-220,Uid:83614d0a862c112565d5f8c3528dd43c,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:44.233911 containerd[1959]: time="2026-01-20T06:48:44.233870691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-220,Uid:eb7ef865947b9ed81e443ac59ec37d5e,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:44.240084 containerd[1959]: time="2026-01-20T06:48:44.240013228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-220,Uid:43a86a769e1e7202935a7f036247c4e6,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:44.370785 kubelet[2847]: E0120 06:48:44.370745 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": dial tcp 172.31.26.220:6443: connect: connection refused" interval="800ms" Jan 20 06:48:44.407470 containerd[1959]: time="2026-01-20T06:48:44.406663670Z" level=info msg="connecting to shim 8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d" address="unix:///run/containerd/s/079f8a1b885462de044ebed469e75d0895d07225db142d0c1db8049091e6301a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:44.408028 containerd[1959]: time="2026-01-20T06:48:44.407980926Z" level=info msg="connecting to shim 07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575" address="unix:///run/containerd/s/e1291a832b9efd1ca694e2af27e6b2e6190f96f9a97e3b45cc2e94539e1bf6bf" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:44.417903 containerd[1959]: time="2026-01-20T06:48:44.417855882Z" level=info msg="connecting to shim 7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074" address="unix:///run/containerd/s/e8a16de0c54042b3d2a411aa220af59ba33de8190973e5661fcf0a8ed2b2f024" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:44.533982 systemd[1]: Started cri-containerd-07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575.scope - libcontainer container 07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575. Jan 20 06:48:44.536788 systemd[1]: Started cri-containerd-7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074.scope - libcontainer container 7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074. Jan 20 06:48:44.539627 systemd[1]: Started cri-containerd-8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d.scope - libcontainer container 8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d. 
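The containerd entries in this stretch are logfmt: space-separated key="value" pairs. When a sandbox or shim ID needs to be pulled out of them, a parser this small is enough (the sample line is copied from the log above; the parsing approach is just one reasonable option):

```python
import shlex

# One containerd logfmt line, copied from the log above.
line = ('time="2026-01-20T06:48:44.407980926Z" level=info '
        'msg="connecting to shim 07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575" '
        'address="unix:///run/containerd/s/e1291a832b9efd1ca694e2af27e6b2e6190f96f9a97e3b45cc2e94539e1bf6bf" '
        'namespace=k8s.io protocol=ttrpc version=3')

# shlex.split honours the double quotes, so every token is a key=value pair.
fields = dict(token.split("=", 1) for token in shlex.split(line))
print(fields["level"], "|", fields["msg"])
print("shim socket:", fields["address"])
```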
Jan 20 06:48:44.580567 kubelet[2847]: I0120 06:48:44.580537 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:44.582818 kubelet[2847]: E0120 06:48:44.582782 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.220:6443/api/v1/nodes\": dial tcp 172.31.26.220:6443: connect: connection refused" node="ip-172-31-26-220" Jan 20 06:48:44.654784 containerd[1959]: time="2026-01-20T06:48:44.654696264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-220,Uid:eb7ef865947b9ed81e443ac59ec37d5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d\"" Jan 20 06:48:44.669237 containerd[1959]: time="2026-01-20T06:48:44.668627894Z" level=info msg="CreateContainer within sandbox \"8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 06:48:44.704397 containerd[1959]: time="2026-01-20T06:48:44.704355290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-220,Uid:43a86a769e1e7202935a7f036247c4e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074\"" Jan 20 06:48:44.706419 containerd[1959]: time="2026-01-20T06:48:44.706286021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-220,Uid:83614d0a862c112565d5f8c3528dd43c,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575\"" Jan 20 06:48:44.709410 containerd[1959]: time="2026-01-20T06:48:44.709376116Z" level=info msg="CreateContainer within sandbox \"07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 06:48:44.711466 containerd[1959]: time="2026-01-20T06:48:44.711405608Z" level=info msg="CreateContainer within sandbox \"7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 06:48:44.726704 containerd[1959]: time="2026-01-20T06:48:44.726649671Z" level=info msg="Container c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:44.727296 containerd[1959]: time="2026-01-20T06:48:44.727260764Z" level=info msg="Container cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:44.734807 containerd[1959]: time="2026-01-20T06:48:44.734757455Z" level=info msg="Container e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:44.745798 containerd[1959]: time="2026-01-20T06:48:44.745746797Z" level=info msg="CreateContainer within sandbox \"8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6\"" Jan 20 06:48:44.746566 containerd[1959]: time="2026-01-20T06:48:44.746537120Z" level=info msg="StartContainer for \"c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6\"" Jan 20 06:48:44.749765 containerd[1959]: time="2026-01-20T06:48:44.749699445Z" level=info msg="connecting to shim 
c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6" address="unix:///run/containerd/s/079f8a1b885462de044ebed469e75d0895d07225db142d0c1db8049091e6301a" protocol=ttrpc version=3 Jan 20 06:48:44.758260 containerd[1959]: time="2026-01-20T06:48:44.758217832Z" level=info msg="CreateContainer within sandbox \"7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a\"" Jan 20 06:48:44.759459 containerd[1959]: time="2026-01-20T06:48:44.758861488Z" level=info msg="StartContainer for \"e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a\"" Jan 20 06:48:44.762591 containerd[1959]: time="2026-01-20T06:48:44.762560035Z" level=info msg="connecting to shim e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a" address="unix:///run/containerd/s/e8a16de0c54042b3d2a411aa220af59ba33de8190973e5661fcf0a8ed2b2f024" protocol=ttrpc version=3 Jan 20 06:48:44.763486 containerd[1959]: time="2026-01-20T06:48:44.763461276Z" level=info msg="CreateContainer within sandbox \"07a68297eb4c92e3f134791f084b1b4bf1b1fd0ac32fc5364747e676ac936575\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1\"" Jan 20 06:48:44.764351 containerd[1959]: time="2026-01-20T06:48:44.764273118Z" level=info msg="StartContainer for \"cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1\"" Jan 20 06:48:44.765647 containerd[1959]: time="2026-01-20T06:48:44.765601165Z" level=info msg="connecting to shim cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1" address="unix:///run/containerd/s/e1291a832b9efd1ca694e2af27e6b2e6190f96f9a97e3b45cc2e94539e1bf6bf" protocol=ttrpc version=3 Jan 20 06:48:44.773782 systemd[1]: Started cri-containerd-c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6.scope - libcontainer container c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6. Jan 20 06:48:44.793705 systemd[1]: Started cri-containerd-e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a.scope - libcontainer container e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a. Jan 20 06:48:44.804887 systemd[1]: Started cri-containerd-cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1.scope - libcontainer container cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1. 
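The lease controller's "Failed to ensure lease exists, will retry" entries double their interval on every failure: 200ms, 400ms and 800ms above, then 1.6s and 3.2s further down. A minimal sketch of that doubling; only the observed sequence comes from the log, while the base, factor, cap and attempt count are illustrative assumptions:

```python
def lease_retry_intervals(base: float = 0.2, factor: float = 2.0,
                          cap: float = 7.0, attempts: int = 6):
    """Yield a doubling retry interval; the 7s cap is an assumption, not from the log."""
    interval = base
    for _ in range(attempts):
        yield interval
        interval = min(interval * factor, cap)

# Reproduces the sequence seen in the log: 0.2s, 0.4s, 0.8s, 1.6s, 3.2s, ...
print([f"{i:g}s" for i in lease_retry_intervals()])
```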
Jan 20 06:48:44.887512 containerd[1959]: time="2026-01-20T06:48:44.887359670Z" level=info msg="StartContainer for \"c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6\" returns successfully" Jan 20 06:48:44.908109 containerd[1959]: time="2026-01-20T06:48:44.907984132Z" level=info msg="StartContainer for \"cd67054dd64b38df5951d1246a887e9f10fb6d297bb6252e774d444a43a182c1\" returns successfully" Jan 20 06:48:44.931944 containerd[1959]: time="2026-01-20T06:48:44.931898150Z" level=info msg="StartContainer for \"e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a\" returns successfully" Jan 20 06:48:44.990770 kubelet[2847]: W0120 06:48:44.990696 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:44.990939 kubelet[2847]: E0120 06:48:44.990782 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:45.087028 kubelet[2847]: W0120 06:48:45.086876 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:45.087028 kubelet[2847]: E0120 06:48:45.086961 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:45.171611 kubelet[2847]: E0120 06:48:45.171424 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": dial tcp 172.31.26.220:6443: connect: connection refused" interval="1.6s" Jan 20 06:48:45.257899 kubelet[2847]: W0120 06:48:45.257774 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:45.257899 kubelet[2847]: E0120 06:48:45.257868 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:45.265023 kubelet[2847]: W0120 06:48:45.264957 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:45.265163 kubelet[2847]: E0120 06:48:45.265041 2847 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:45.387904 kubelet[2847]: I0120 06:48:45.387631 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:45.388773 kubelet[2847]: E0120 06:48:45.388737 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.220:6443/api/v1/nodes\": dial tcp 172.31.26.220:6443: connect: connection refused" node="ip-172-31-26-220" Jan 20 06:48:45.732524 kubelet[2847]: E0120 06:48:45.732297 2847 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:45.828530 kubelet[2847]: E0120 06:48:45.827893 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:45.831657 kubelet[2847]: E0120 06:48:45.831634 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:45.835252 kubelet[2847]: E0120 06:48:45.835225 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:46.711269 kubelet[2847]: W0120 06:48:46.711206 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:46.711743 kubelet[2847]: E0120 06:48:46.711278 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:46.772863 kubelet[2847]: E0120 06:48:46.772821 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": dial tcp 172.31.26.220:6443: connect: connection refused" interval="3.2s" Jan 20 06:48:46.835781 kubelet[2847]: E0120 06:48:46.835752 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:46.836324 kubelet[2847]: E0120 06:48:46.836297 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:46.836798 kubelet[2847]: E0120 06:48:46.836754 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:46.990977 kubelet[2847]: I0120 06:48:46.990880 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:46.991658 kubelet[2847]: E0120 06:48:46.991618 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.220:6443/api/v1/nodes\": dial tcp 172.31.26.220:6443: connect: connection refused" node="ip-172-31-26-220" Jan 20 06:48:47.370336 kubelet[2847]: W0120 06:48:47.370280 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:47.370474 kubelet[2847]: E0120 06:48:47.370343 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:47.838159 kubelet[2847]: E0120 06:48:47.837946 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:47.838159 kubelet[2847]: E0120 06:48:47.838028 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:47.914239 kubelet[2847]: W0120 06:48:47.914138 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:47.914239 kubelet[2847]: E0120 06:48:47.914205 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-220&limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:48.197049 kubelet[2847]: W0120 06:48:48.196985 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.220:6443: connect: connection refused Jan 20 06:48:48.197195 kubelet[2847]: E0120 06:48:48.197057 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.220:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:48:48.839832 kubelet[2847]: E0120 06:48:48.839769 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:50.194303 kubelet[2847]: I0120 06:48:50.193506 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:51.096722 kubelet[2847]: E0120 06:48:51.096642 2847 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:51.644955 kubelet[2847]: E0120 06:48:51.644912 2847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-220\" not found" node="ip-172-31-26-220" Jan 20 06:48:51.741970 kubelet[2847]: I0120 06:48:51.741931 2847 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-220" Jan 20 06:48:51.741970 kubelet[2847]: E0120 06:48:51.741969 2847 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-220\": node \"ip-172-31-26-220\" not found" Jan 20 06:48:51.757622 kubelet[2847]: E0120 06:48:51.757580 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:51.858007 kubelet[2847]: E0120 06:48:51.857960 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:51.959090 kubelet[2847]: E0120 06:48:51.958974 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.059878 kubelet[2847]: E0120 06:48:52.059808 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.159999 kubelet[2847]: E0120 06:48:52.159953 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.261169 kubelet[2847]: E0120 06:48:52.261046 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.362086 kubelet[2847]: E0120 06:48:52.362036 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.462768 kubelet[2847]: E0120 06:48:52.462730 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.563675 kubelet[2847]: E0120 06:48:52.563572 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.664419 kubelet[2847]: E0120 06:48:52.664380 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.765142 kubelet[2847]: E0120 06:48:52.765095 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.866172 kubelet[2847]: E0120 06:48:52.866050 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:52.966869 kubelet[2847]: E0120 06:48:52.966835 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.067574 kubelet[2847]: E0120 06:48:53.067534 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.168847 kubelet[2847]: E0120 06:48:53.168792 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.269264 kubelet[2847]: E0120 06:48:53.269196 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ip-172-31-26-220\" not found" Jan 20 06:48:53.369974 kubelet[2847]: E0120 06:48:53.369927 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.470688 kubelet[2847]: E0120 06:48:53.470580 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.571353 kubelet[2847]: E0120 06:48:53.571308 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.671528 kubelet[2847]: E0120 06:48:53.671458 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:53.730496 kubelet[2847]: I0120 06:48:53.728880 2847 apiserver.go:52] "Watching apiserver" Jan 20 06:48:53.745172 systemd[1]: Reload requested from client PID 3113 ('systemctl') (unit session-6.scope)... Jan 20 06:48:53.745191 systemd[1]: Reloading... Jan 20 06:48:53.754268 kubelet[2847]: I0120 06:48:53.754234 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:53.754742 kubelet[2847]: I0120 06:48:53.754721 2847 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:48:53.771007 kubelet[2847]: I0120 06:48:53.770738 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:53.780727 kubelet[2847]: I0120 06:48:53.780701 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-220" Jan 20 06:48:53.850690 kubelet[2847]: I0120 06:48:53.850629 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-220" podStartSLOduration=0.850613057 podStartE2EDuration="850.613057ms" podCreationTimestamp="2026-01-20 06:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:48:53.823373064 +0000 UTC m=+10.563804707" watchObservedRunningTime="2026-01-20 06:48:53.850613057 +0000 UTC m=+10.591044695" Jan 20 06:48:53.900484 zram_generator::config[3158]: No configuration found. Jan 20 06:48:54.202489 systemd[1]: Reloading finished in 456 ms. Jan 20 06:48:54.223982 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:54.239371 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:48:54.239751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:54.239845 systemd[1]: kubelet.service: Consumed 917ms CPU time, 129.4M memory peak. Jan 20 06:48:54.243781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:48:54.575619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:48:54.586879 (kubelet)[3222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:48:54.657391 kubelet[3222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:48:54.657391 kubelet[3222]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 20 06:48:54.657391 kubelet[3222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:48:54.661183 kubelet[3222]: I0120 06:48:54.661097 3222 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:48:54.678205 kubelet[3222]: I0120 06:48:54.678155 3222 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:48:54.678205 kubelet[3222]: I0120 06:48:54.678190 3222 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:48:54.679250 kubelet[3222]: I0120 06:48:54.678907 3222 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:48:54.681055 kubelet[3222]: I0120 06:48:54.681025 3222 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 06:48:54.685983 kubelet[3222]: I0120 06:48:54.685794 3222 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:48:54.703942 kubelet[3222]: I0120 06:48:54.703898 3222 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:48:54.710501 kubelet[3222]: I0120 06:48:54.710406 3222 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 06:48:54.714172 kubelet[3222]: I0120 06:48:54.713624 3222 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:48:54.714172 kubelet[3222]: I0120 06:48:54.713689 3222 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:48:54.714172 kubelet[3222]: I0120 06:48:54.713886 3222 
topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:48:54.714172 kubelet[3222]: I0120 06:48:54.713896 3222 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:48:54.714454 kubelet[3222]: I0120 06:48:54.713953 3222 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:48:54.714454 kubelet[3222]: I0120 06:48:54.714102 3222 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:48:54.714941 kubelet[3222]: I0120 06:48:54.714925 3222 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:48:54.715020 kubelet[3222]: I0120 06:48:54.715014 3222 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:48:54.715089 kubelet[3222]: I0120 06:48:54.715080 3222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:48:54.719609 kubelet[3222]: I0120 06:48:54.718627 3222 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:48:54.719609 kubelet[3222]: I0120 06:48:54.718999 3222 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:48:54.744385 kubelet[3222]: I0120 06:48:54.744325 3222 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:48:54.744725 kubelet[3222]: I0120 06:48:54.744697 3222 server.go:1287] "Started kubelet" Jan 20 06:48:54.757860 kubelet[3222]: I0120 06:48:54.754530 3222 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:48:54.764472 kubelet[3222]: I0120 06:48:54.763162 3222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:48:54.769088 kubelet[3222]: I0120 06:48:54.769056 3222 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:48:54.772464 kubelet[3222]: I0120 06:48:54.771494 3222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:48:54.772464 kubelet[3222]: I0120 06:48:54.771909 3222 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:48:54.773586 kubelet[3222]: I0120 06:48:54.773557 3222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:48:54.782027 kubelet[3222]: I0120 06:48:54.781914 3222 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:48:54.782259 kubelet[3222]: E0120 06:48:54.782225 3222 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-220\" not found" Jan 20 06:48:54.785468 kubelet[3222]: I0120 06:48:54.784959 3222 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:48:54.796524 kubelet[3222]: I0120 06:48:54.796417 3222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:48:54.799863 kubelet[3222]: I0120 06:48:54.799824 3222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:48:54.799863 kubelet[3222]: I0120 06:48:54.799863 3222 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:48:54.799863 kubelet[3222]: I0120 06:48:54.799890 3222 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
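After the systemd reload, the restarted kubelet (PID 3222) no longer has to bootstrap a client certificate; a few lines above it loads the rotated pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A small sketch for checking that file's expiry, assuming the third-party cryptography package is installed; the path comes from the log, everything else is illustrative:

```python
from pathlib import Path
from cryptography import x509  # third-party: pip install cryptography

PEM_PATH = Path("/var/lib/kubelet/pki/kubelet-client-current.pem")  # path from the log

pem = PEM_PATH.read_bytes()
# The file typically holds both the certificate and the private key; keep
# only the first CERTIFICATE block before parsing.
start = pem.index(b"-----BEGIN CERTIFICATE-----")
end = pem.index(b"-----END CERTIFICATE-----") + len(b"-----END CERTIFICATE-----")
cert = x509.load_pem_x509_certificate(pem[start:end] + b"\n")

print("subject:", cert.subject.rfc4514_string())
print("expires:", cert.not_valid_after)
```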
Jan 20 06:48:54.799863 kubelet[3222]: I0120 06:48:54.799901 3222 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:48:54.799863 kubelet[3222]: E0120 06:48:54.799987 3222 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:48:54.815495 kubelet[3222]: I0120 06:48:54.813534 3222 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:48:54.815495 kubelet[3222]: I0120 06:48:54.813818 3222 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:48:54.815495 kubelet[3222]: I0120 06:48:54.813924 3222 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:48:54.818665 kubelet[3222]: I0120 06:48:54.818637 3222 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:48:54.820507 kubelet[3222]: E0120 06:48:54.819967 3222 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:48:54.864952 kubelet[3222]: I0120 06:48:54.864837 3222 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:48:54.864952 kubelet[3222]: I0120 06:48:54.864857 3222 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:48:54.864952 kubelet[3222]: I0120 06:48:54.864883 3222 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:48:54.865155 kubelet[3222]: I0120 06:48:54.865091 3222 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 06:48:54.865155 kubelet[3222]: I0120 06:48:54.865107 3222 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 06:48:54.865155 kubelet[3222]: I0120 06:48:54.865132 3222 policy_none.go:49] "None policy: Start" Jan 20 06:48:54.865155 kubelet[3222]: I0120 06:48:54.865145 3222 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:48:54.865337 kubelet[3222]: I0120 06:48:54.865158 3222 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:48:54.865337 kubelet[3222]: I0120 06:48:54.865298 3222 state_mem.go:75] "Updated machine memory state" Jan 20 06:48:54.870894 kubelet[3222]: I0120 06:48:54.870487 3222 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:48:54.870894 kubelet[3222]: I0120 06:48:54.870682 3222 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:48:54.870894 kubelet[3222]: I0120 06:48:54.870695 3222 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:48:54.871111 kubelet[3222]: I0120 06:48:54.870948 3222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:48:54.874495 kubelet[3222]: E0120 06:48:54.874378 3222 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 06:48:54.904472 kubelet[3222]: I0120 06:48:54.904418 3222 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:54.904812 kubelet[3222]: I0120 06:48:54.904790 3222 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:54.905209 kubelet[3222]: I0120 06:48:54.905093 3222 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-220" Jan 20 06:48:54.913633 kubelet[3222]: E0120 06:48:54.913516 3222 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:54.914692 kubelet[3222]: E0120 06:48:54.914654 3222 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-220\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:54.914846 kubelet[3222]: E0120 06:48:54.914699 3222 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-220\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-220" Jan 20 06:48:54.973300 kubelet[3222]: I0120 06:48:54.973272 3222 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-220" Jan 20 06:48:54.981416 kubelet[3222]: I0120 06:48:54.981368 3222 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-220" Jan 20 06:48:54.981617 kubelet[3222]: I0120 06:48:54.981461 3222 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-220" Jan 20 06:48:55.014897 kubelet[3222]: I0120 06:48:55.014678 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-ca-certs\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:55.014897 kubelet[3222]: I0120 06:48:55.014723 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:55.014897 kubelet[3222]: I0120 06:48:55.014743 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83614d0a862c112565d5f8c3528dd43c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-220\" (UID: \"83614d0a862c112565d5f8c3528dd43c\") " pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:55.014897 kubelet[3222]: I0120 06:48:55.014767 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.014897 kubelet[3222]: I0120 06:48:55.014793 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.015144 kubelet[3222]: I0120 06:48:55.014811 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43a86a769e1e7202935a7f036247c4e6-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-220\" (UID: \"43a86a769e1e7202935a7f036247c4e6\") " pod="kube-system/kube-scheduler-ip-172-31-26-220" Jan 20 06:48:55.015144 kubelet[3222]: I0120 06:48:55.014827 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.015144 kubelet[3222]: I0120 06:48:55.014845 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.015144 kubelet[3222]: I0120 06:48:55.014860 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb7ef865947b9ed81e443ac59ec37d5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-220\" (UID: \"eb7ef865947b9ed81e443ac59ec37d5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.716734 kubelet[3222]: I0120 06:48:55.716690 3222 apiserver.go:52] "Watching apiserver" Jan 20 06:48:55.786138 kubelet[3222]: I0120 06:48:55.786096 3222 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:48:55.841164 kubelet[3222]: I0120 06:48:55.840760 3222 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.841164 kubelet[3222]: I0120 06:48:55.841046 3222 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:55.849996 kubelet[3222]: E0120 06:48:55.849921 3222 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-220\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-220" Jan 20 06:48:55.852154 kubelet[3222]: E0120 06:48:55.852118 3222 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-220" Jan 20 06:48:55.894532 kubelet[3222]: I0120 06:48:55.894471 3222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-220" podStartSLOduration=2.8944520689999997 podStartE2EDuration="2.894452069s" podCreationTimestamp="2026-01-20 06:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:48:55.88549703 +0000 UTC m=+1.291394051" watchObservedRunningTime="2026-01-20 06:48:55.894452069 +0000 UTC 
m=+1.300349069" Jan 20 06:48:56.902308 sudo[2267]: pam_unix(sudo:session): session closed for user root Jan 20 06:48:56.985803 sshd[2266]: Connection closed by 68.220.241.50 port 38164 Jan 20 06:48:56.987614 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Jan 20 06:48:56.993288 systemd-logind[1916]: Session 6 logged out. Waiting for processes to exit. Jan 20 06:48:56.993411 systemd[1]: sshd@4-172.31.26.220:22-68.220.241.50:38164.service: Deactivated successfully. Jan 20 06:48:56.995424 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 06:48:56.995755 systemd[1]: session-6.scope: Consumed 3.948s CPU time, 151M memory peak. Jan 20 06:48:56.998156 systemd-logind[1916]: Removed session 6. Jan 20 06:48:58.082710 update_engine[1919]: I20260120 06:48:58.082608 1919 update_attempter.cc:509] Updating boot flags... Jan 20 06:48:59.968239 kubelet[3222]: I0120 06:48:59.968208 3222 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 06:48:59.968873 containerd[1959]: time="2026-01-20T06:48:59.968783828Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 06:48:59.969603 kubelet[3222]: I0120 06:48:59.969585 3222 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 06:49:00.910164 systemd[1]: Created slice kubepods-besteffort-pod6427073d_d9c8_4cdc_b900_7c23bb8e7dac.slice - libcontainer container kubepods-besteffort-pod6427073d_d9c8_4cdc_b900_7c23bb8e7dac.slice. Jan 20 06:49:00.911671 kubelet[3222]: W0120 06:49:00.911553 3222 reflector.go:569] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-220" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-26-220' and this object Jan 20 06:49:00.911671 kubelet[3222]: E0120 06:49:00.911876 3222 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-26-220\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ip-172-31-26-220' and this object" logger="UnhandledError" Jan 20 06:49:00.912480 kubelet[3222]: W0120 06:49:00.912295 3222 reflector.go:569] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-26-220" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-26-220' and this object Jan 20 06:49:00.912480 kubelet[3222]: E0120 06:49:00.912325 3222 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ip-172-31-26-220\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ip-172-31-26-220' and this object" logger="UnhandledError" Jan 20 06:49:00.929335 systemd[1]: Created slice kubepods-burstable-pod6561465f_5574_4f4a_9fdf_61df6943f16e.slice - libcontainer container kubepods-burstable-pod6561465f_5574_4f4a_9fdf_61df6943f16e.slice. 
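At 06:48:59 the kubelet picks up podCIDR 192.168.0.0/24 from its Node object and pushes it to containerd through the CRI runtime config, while containerd keeps waiting for a CNI config to be dropped in. The 192.168.0.0/17 route that appears later in the CNI delegate dumps suggests this /24 is the node's slice of a wider flannel network; that relationship is an assumption read off the log, and the containment check below is only an illustration using the standard library:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Node-local pod range handed to the kubelet/CRI at 06:48:59 above.
        _, nodeCIDR, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        // Wider network implied by the 192.168.0.0/17 route seen further
        // down in the CNI delegate config (assumption, not stated in the log).
        _, clusterNet, err := net.ParseCIDR("192.168.0.0/17")
        if err != nil {
            panic(err)
        }
        fmt.Println("node /24 inside the /17:", clusterNet.Contains(nodeCIDR.IP)) // true
    }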
Jan 20 06:49:00.955647 kubelet[3222]: I0120 06:49:00.955587 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6561465f-5574-4f4a-9fdf-61df6943f16e-xtables-lock\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:00.955647 kubelet[3222]: I0120 06:49:00.955642 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqqp\" (UniqueName: \"kubernetes.io/projected/6561465f-5574-4f4a-9fdf-61df6943f16e-kube-api-access-wkqqp\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:00.955869 kubelet[3222]: I0120 06:49:00.955668 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6561465f-5574-4f4a-9fdf-61df6943f16e-flannel-cfg\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:00.955869 kubelet[3222]: I0120 06:49:00.955692 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6427073d-d9c8-4cdc-b900-7c23bb8e7dac-lib-modules\") pod \"kube-proxy-xdxqn\" (UID: \"6427073d-d9c8-4cdc-b900-7c23bb8e7dac\") " pod="kube-system/kube-proxy-xdxqn" Jan 20 06:49:00.955869 kubelet[3222]: I0120 06:49:00.955712 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dznzp\" (UniqueName: \"kubernetes.io/projected/6427073d-d9c8-4cdc-b900-7c23bb8e7dac-kube-api-access-dznzp\") pod \"kube-proxy-xdxqn\" (UID: \"6427073d-d9c8-4cdc-b900-7c23bb8e7dac\") " pod="kube-system/kube-proxy-xdxqn" Jan 20 06:49:00.955869 kubelet[3222]: I0120 06:49:00.955738 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6561465f-5574-4f4a-9fdf-61df6943f16e-run\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:00.955869 kubelet[3222]: I0120 06:49:00.955773 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6561465f-5574-4f4a-9fdf-61df6943f16e-cni-plugin\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:00.956057 kubelet[3222]: I0120 06:49:00.955796 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6427073d-d9c8-4cdc-b900-7c23bb8e7dac-kube-proxy\") pod \"kube-proxy-xdxqn\" (UID: \"6427073d-d9c8-4cdc-b900-7c23bb8e7dac\") " pod="kube-system/kube-proxy-xdxqn" Jan 20 06:49:00.956057 kubelet[3222]: I0120 06:49:00.955816 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6427073d-d9c8-4cdc-b900-7c23bb8e7dac-xtables-lock\") pod \"kube-proxy-xdxqn\" (UID: \"6427073d-d9c8-4cdc-b900-7c23bb8e7dac\") " pod="kube-system/kube-proxy-xdxqn" Jan 20 06:49:00.956057 kubelet[3222]: I0120 06:49:00.955838 3222 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6561465f-5574-4f4a-9fdf-61df6943f16e-cni\") pod \"kube-flannel-ds-rdlh5\" (UID: \"6561465f-5574-4f4a-9fdf-61df6943f16e\") " pod="kube-flannel/kube-flannel-ds-rdlh5" Jan 20 06:49:01.226652 containerd[1959]: time="2026-01-20T06:49:01.226382853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdxqn,Uid:6427073d-d9c8-4cdc-b900-7c23bb8e7dac,Namespace:kube-system,Attempt:0,}" Jan 20 06:49:01.312684 containerd[1959]: time="2026-01-20T06:49:01.312595664Z" level=info msg="connecting to shim fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf" address="unix:///run/containerd/s/3d046f53e2b957b2cd645b124ea46967efa2ca58083f6b7b884d4a08b640691a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:49:01.356182 systemd[1]: Started cri-containerd-fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf.scope - libcontainer container fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf. Jan 20 06:49:01.625587 containerd[1959]: time="2026-01-20T06:49:01.625524811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdxqn,Uid:6427073d-d9c8-4cdc-b900-7c23bb8e7dac,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf\"" Jan 20 06:49:01.662580 containerd[1959]: time="2026-01-20T06:49:01.662415791Z" level=info msg="CreateContainer within sandbox \"fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 06:49:01.750842 containerd[1959]: time="2026-01-20T06:49:01.750781120Z" level=info msg="Container db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:01.752305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780900720.mount: Deactivated successfully. Jan 20 06:49:01.791626 containerd[1959]: time="2026-01-20T06:49:01.790950469Z" level=info msg="CreateContainer within sandbox \"fd2d7ff30e841e75e9327849508897ede9af6eb1d73ae1d36c1578fca84427bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc\"" Jan 20 06:49:01.801252 containerd[1959]: time="2026-01-20T06:49:01.801118658Z" level=info msg="StartContainer for \"db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc\"" Jan 20 06:49:01.831107 containerd[1959]: time="2026-01-20T06:49:01.831046025Z" level=info msg="connecting to shim db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc" address="unix:///run/containerd/s/3d046f53e2b957b2cd645b124ea46967efa2ca58083f6b7b884d4a08b640691a" protocol=ttrpc version=3 Jan 20 06:49:01.968214 systemd[1]: Started cri-containerd-db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc.scope - libcontainer container db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc. Jan 20 06:49:02.058699 kubelet[3222]: E0120 06:49:02.058329 3222 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Jan 20 06:49:02.061498 kubelet[3222]: E0120 06:49:02.058835 3222 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6561465f-5574-4f4a-9fdf-61df6943f16e-flannel-cfg podName:6561465f-5574-4f4a-9fdf-61df6943f16e nodeName:}" failed. 
No retries permitted until 2026-01-20 06:49:02.558783414 +0000 UTC m=+7.964680421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/6561465f-5574-4f4a-9fdf-61df6943f16e-flannel-cfg") pod "kube-flannel-ds-rdlh5" (UID: "6561465f-5574-4f4a-9fdf-61df6943f16e") : failed to sync configmap cache: timed out waiting for the condition Jan 20 06:49:02.079098 kubelet[3222]: E0120 06:49:02.076056 3222 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 06:49:02.079098 kubelet[3222]: E0120 06:49:02.076143 3222 projected.go:194] Error preparing data for projected volume kube-api-access-wkqqp for pod kube-flannel/kube-flannel-ds-rdlh5: failed to sync configmap cache: timed out waiting for the condition Jan 20 06:49:02.079098 kubelet[3222]: E0120 06:49:02.076311 3222 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6561465f-5574-4f4a-9fdf-61df6943f16e-kube-api-access-wkqqp podName:6561465f-5574-4f4a-9fdf-61df6943f16e nodeName:}" failed. No retries permitted until 2026-01-20 06:49:02.57628613 +0000 UTC m=+7.982183135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wkqqp" (UniqueName: "kubernetes.io/projected/6561465f-5574-4f4a-9fdf-61df6943f16e-kube-api-access-wkqqp") pod "kube-flannel-ds-rdlh5" (UID: "6561465f-5574-4f4a-9fdf-61df6943f16e") : failed to sync configmap cache: timed out waiting for the condition Jan 20 06:49:02.215198 containerd[1959]: time="2026-01-20T06:49:02.215145743Z" level=info msg="StartContainer for \"db127753204d3efd0c01a6bb6b28595348dab957470da67f45a775f1236f4afc\" returns successfully" Jan 20 06:49:02.736351 containerd[1959]: time="2026-01-20T06:49:02.736275535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rdlh5,Uid:6561465f-5574-4f4a-9fdf-61df6943f16e,Namespace:kube-flannel,Attempt:0,}" Jan 20 06:49:02.772230 containerd[1959]: time="2026-01-20T06:49:02.772182265Z" level=info msg="connecting to shim c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570" address="unix:///run/containerd/s/2555ebd46920348988ad1c8719ef992dbebaee74a9b40eea9b506cb3160b2a2c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:49:02.813834 systemd[1]: Started cri-containerd-c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570.scope - libcontainer container c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570. 
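The MountVolume failures just above are a knock-on effect of the reflector errors at 06:49:00: the node authorizer only lets a kubelet read ConfigMaps referenced by pods already bound to that node, so the configmap cache cannot sync until the scheduler's binding has propagated, and the volume manager simply retries, starting from the durationBeforeRetry=500ms shown here. A generic sketch of that backoff shape; the doubling factor and the 2m2s cap are assumptions, not values taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // First retry interval matches the durationBeforeRetry above.
        delay := 500 * time.Millisecond
        const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %s before retrying\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

In this log the first retry is enough: the flannel pod's sandbox is created at 06:49:02 right after.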
Jan 20 06:49:02.887224 containerd[1959]: time="2026-01-20T06:49:02.887179606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rdlh5,Uid:6561465f-5574-4f4a-9fdf-61df6943f16e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\"" Jan 20 06:49:02.890258 containerd[1959]: time="2026-01-20T06:49:02.890217024Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 20 06:49:04.105346 kubelet[3222]: I0120 06:49:04.105077 3222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xdxqn" podStartSLOduration=4.10498344 podStartE2EDuration="4.10498344s" podCreationTimestamp="2026-01-20 06:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:49:02.984227214 +0000 UTC m=+8.390124223" watchObservedRunningTime="2026-01-20 06:49:04.10498344 +0000 UTC m=+9.510880449" Jan 20 06:49:04.559747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224580390.mount: Deactivated successfully. Jan 20 06:49:05.303241 containerd[1959]: time="2026-01-20T06:49:05.303192530Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:05.305594 containerd[1959]: time="2026-01-20T06:49:05.305533289Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0" Jan 20 06:49:05.306509 containerd[1959]: time="2026-01-20T06:49:05.306278444Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:05.310393 containerd[1959]: time="2026-01-20T06:49:05.310246175Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:05.311369 containerd[1959]: time="2026-01-20T06:49:05.311180922Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.420913119s" Jan 20 06:49:05.311369 containerd[1959]: time="2026-01-20T06:49:05.311228364Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 20 06:49:05.314110 containerd[1959]: time="2026-01-20T06:49:05.313863473Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 06:49:05.394956 containerd[1959]: time="2026-01-20T06:49:05.394072063Z" level=info msg="Container bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:05.411090 containerd[1959]: time="2026-01-20T06:49:05.411042459Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e\"" Jan 20 06:49:05.411926 containerd[1959]: time="2026-01-20T06:49:05.411890832Z" level=info msg="StartContainer for \"bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e\"" Jan 20 06:49:05.414018 containerd[1959]: time="2026-01-20T06:49:05.413965249Z" level=info msg="connecting to shim bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e" address="unix:///run/containerd/s/2555ebd46920348988ad1c8719ef992dbebaee74a9b40eea9b506cb3160b2a2c" protocol=ttrpc version=3 Jan 20 06:49:05.450124 systemd[1]: Started cri-containerd-bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e.scope - libcontainer container bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e. Jan 20 06:49:05.497259 containerd[1959]: time="2026-01-20T06:49:05.497216107Z" level=info msg="StartContainer for \"bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e\" returns successfully" Jan 20 06:49:05.537273 systemd[1]: cri-containerd-bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e.scope: Deactivated successfully. Jan 20 06:49:05.589142 containerd[1959]: time="2026-01-20T06:49:05.589025882Z" level=info msg="received container exit event container_id:\"bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e\" id:\"bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e\" pid:3698 exited_at:{seconds:1768891745 nanos:552404860}" Jan 20 06:49:05.617938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5d2965ec1c6711a8a934f84445b6943bebf268f814f3ef934b90a798a6a45e-rootfs.mount: Deactivated successfully. Jan 20 06:49:05.983232 containerd[1959]: time="2026-01-20T06:49:05.983188950Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 20 06:49:07.833041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509092770.mount: Deactivated successfully. 
Jan 20 06:49:08.675912 containerd[1959]: time="2026-01-20T06:49:08.675857084Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:08.676909 containerd[1959]: time="2026-01-20T06:49:08.676873888Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=12395800" Jan 20 06:49:08.678127 containerd[1959]: time="2026-01-20T06:49:08.677846207Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:08.680178 containerd[1959]: time="2026-01-20T06:49:08.680147860Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:49:08.680950 containerd[1959]: time="2026-01-20T06:49:08.680918628Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.697691419s" Jan 20 06:49:08.681038 containerd[1959]: time="2026-01-20T06:49:08.680952760Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 20 06:49:08.683711 containerd[1959]: time="2026-01-20T06:49:08.683671393Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 06:49:08.699525 containerd[1959]: time="2026-01-20T06:49:08.699482977Z" level=info msg="Container f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:08.699732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681084958.mount: Deactivated successfully. Jan 20 06:49:08.709108 containerd[1959]: time="2026-01-20T06:49:08.709062906Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc\"" Jan 20 06:49:08.709808 containerd[1959]: time="2026-01-20T06:49:08.709661658Z" level=info msg="StartContainer for \"f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc\"" Jan 20 06:49:08.711078 containerd[1959]: time="2026-01-20T06:49:08.710650558Z" level=info msg="connecting to shim f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc" address="unix:///run/containerd/s/2555ebd46920348988ad1c8719ef992dbebaee74a9b40eea9b506cb3160b2a2c" protocol=ttrpc version=3 Jan 20 06:49:08.735686 systemd[1]: Started cri-containerd-f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc.scope - libcontainer container f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc. Jan 20 06:49:08.769417 systemd[1]: cri-containerd-f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc.scope: Deactivated successfully. 
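The install-cni container created above is the second init step of the flannel DaemonSet: in the stock kube-flannel manifest (an assumption here, the manifest itself is not in this log) install-cni-plugin first drops the flannel CNI binary into /opt/cni/bin (the cni-plugin hostPath mounted at 06:49:00), and install-cni then copies the CNI config from the flannel-cfg ConfigMap into /etc/cni/net.d so containerd finally has a network config. A rough Go sketch of that copy step; the paths and file name follow the stock manifest and are assumptions, and the write-then-rename is just the usual way to keep the runtime from seeing a half-written conflist:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Assumed paths from the stock kube-flannel manifest.
        src := "/etc/kube-flannel/cni-conf.json" // flannel-cfg ConfigMap mount
        dst := filepath.Join("/etc/cni/net.d", "10-flannel.conflist")

        data, err := os.ReadFile(src)
        if err != nil {
            log.Fatal(err)
        }
        tmp := dst + ".tmp"
        if err := os.WriteFile(tmp, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // Rename is atomic on the same filesystem, so containerd only ever
        // observes a complete config file.
        if err := os.Rename(tmp, dst); err != nil {
            log.Fatal(err)
        }
    }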
Jan 20 06:49:08.774995 containerd[1959]: time="2026-01-20T06:49:08.774259845Z" level=info msg="received container exit event container_id:\"f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc\" id:\"f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc\" pid:3897 exited_at:{seconds:1768891748 nanos:771401102}" Jan 20 06:49:08.777841 containerd[1959]: time="2026-01-20T06:49:08.777801122Z" level=info msg="StartContainer for \"f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc\" returns successfully" Jan 20 06:49:08.810205 kubelet[3222]: I0120 06:49:08.810174 3222 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 06:49:08.823724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f051700cc91c5b228ed58dd981c5ca13890fbfe0c3026c0106f77942a1a4ccfc-rootfs.mount: Deactivated successfully. Jan 20 06:49:08.871894 systemd[1]: Created slice kubepods-burstable-podfc46eef4_230b_4731_90a1_2d59cb57efde.slice - libcontainer container kubepods-burstable-podfc46eef4_230b_4731_90a1_2d59cb57efde.slice. Jan 20 06:49:08.887529 systemd[1]: Created slice kubepods-burstable-pod0352cc48_969b_4108_88a6_c435cc11942c.slice - libcontainer container kubepods-burstable-pod0352cc48_969b_4108_88a6_c435cc11942c.slice. Jan 20 06:49:08.952410 kubelet[3222]: I0120 06:49:08.951641 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghcpm\" (UniqueName: \"kubernetes.io/projected/fc46eef4-230b-4731-90a1-2d59cb57efde-kube-api-access-ghcpm\") pod \"coredns-668d6bf9bc-frcmg\" (UID: \"fc46eef4-230b-4731-90a1-2d59cb57efde\") " pod="kube-system/coredns-668d6bf9bc-frcmg" Jan 20 06:49:08.952410 kubelet[3222]: I0120 06:49:08.951700 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0352cc48-969b-4108-88a6-c435cc11942c-config-volume\") pod \"coredns-668d6bf9bc-bkdt5\" (UID: \"0352cc48-969b-4108-88a6-c435cc11942c\") " pod="kube-system/coredns-668d6bf9bc-bkdt5" Jan 20 06:49:08.952410 kubelet[3222]: I0120 06:49:08.951732 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn96p\" (UniqueName: \"kubernetes.io/projected/0352cc48-969b-4108-88a6-c435cc11942c-kube-api-access-mn96p\") pod \"coredns-668d6bf9bc-bkdt5\" (UID: \"0352cc48-969b-4108-88a6-c435cc11942c\") " pod="kube-system/coredns-668d6bf9bc-bkdt5" Jan 20 06:49:08.952410 kubelet[3222]: I0120 06:49:08.951770 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc46eef4-230b-4731-90a1-2d59cb57efde-config-volume\") pod \"coredns-668d6bf9bc-frcmg\" (UID: \"fc46eef4-230b-4731-90a1-2d59cb57efde\") " pod="kube-system/coredns-668d6bf9bc-frcmg" Jan 20 06:49:09.184302 containerd[1959]: time="2026-01-20T06:49:09.184253947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frcmg,Uid:fc46eef4-230b-4731-90a1-2d59cb57efde,Namespace:kube-system,Attempt:0,}" Jan 20 06:49:09.195016 containerd[1959]: time="2026-01-20T06:49:09.194235372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkdt5,Uid:0352cc48-969b-4108-88a6-c435cc11942c,Namespace:kube-system,Attempt:0,}" Jan 20 06:49:09.411136 containerd[1959]: time="2026-01-20T06:49:09.411073227Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-frcmg,Uid:fc46eef4-230b-4731-90a1-2d59cb57efde,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ff3aec28661841f4cf45227049e60a45e0c11f9c814516df6bf4686667a3f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:49:09.412110 kubelet[3222]: E0120 06:49:09.411337 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ff3aec28661841f4cf45227049e60a45e0c11f9c814516df6bf4686667a3f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:49:09.412110 kubelet[3222]: E0120 06:49:09.411430 3222 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ff3aec28661841f4cf45227049e60a45e0c11f9c814516df6bf4686667a3f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-frcmg" Jan 20 06:49:09.412110 kubelet[3222]: E0120 06:49:09.411467 3222 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ff3aec28661841f4cf45227049e60a45e0c11f9c814516df6bf4686667a3f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-frcmg" Jan 20 06:49:09.412110 kubelet[3222]: E0120 06:49:09.411507 3222 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-frcmg_kube-system(fc46eef4-230b-4731-90a1-2d59cb57efde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-frcmg_kube-system(fc46eef4-230b-4731-90a1-2d59cb57efde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68ff3aec28661841f4cf45227049e60a45e0c11f9c814516df6bf4686667a3f7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-frcmg" podUID="fc46eef4-230b-4731-90a1-2d59cb57efde" Jan 20 06:49:09.457304 containerd[1959]: time="2026-01-20T06:49:09.457248213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkdt5,Uid:0352cc48-969b-4108-88a6-c435cc11942c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b0425e9413f63782e9edf8df9c3da56bde7e97f4dbdf5158da1f4b909748099\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:49:09.457570 kubelet[3222]: E0120 06:49:09.457522 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b0425e9413f63782e9edf8df9c3da56bde7e97f4dbdf5158da1f4b909748099\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:49:09.457636 kubelet[3222]: E0120 06:49:09.457591 3222 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b0425e9413f63782e9edf8df9c3da56bde7e97f4dbdf5158da1f4b909748099\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-bkdt5" Jan 20 06:49:09.457636 kubelet[3222]: E0120 06:49:09.457610 3222 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b0425e9413f63782e9edf8df9c3da56bde7e97f4dbdf5158da1f4b909748099\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-bkdt5" Jan 20 06:49:09.457702 kubelet[3222]: E0120 06:49:09.457646 3222 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bkdt5_kube-system(0352cc48-969b-4108-88a6-c435cc11942c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bkdt5_kube-system(0352cc48-969b-4108-88a6-c435cc11942c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b0425e9413f63782e9edf8df9c3da56bde7e97f4dbdf5158da1f4b909748099\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-bkdt5" podUID="0352cc48-969b-4108-88a6-c435cc11942c" Jan 20 06:49:09.727938 systemd[1]: run-netns-cni\x2d254fe3cd\x2d7767\x2d6fcf\x2d1b57\x2d112435223593.mount: Deactivated successfully. Jan 20 06:49:09.728052 systemd[1]: run-netns-cni\x2dc22e1c4f\x2d91af\x2d9863\x2d3309\x2d359f3160ed9d.mount: Deactivated successfully. Jan 20 06:49:10.000057 containerd[1959]: time="2026-01-20T06:49:09.999947505Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 06:49:10.018523 containerd[1959]: time="2026-01-20T06:49:10.015690063Z" level=info msg="Container 859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:10.021407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680494064.mount: Deactivated successfully. Jan 20 06:49:10.031963 containerd[1959]: time="2026-01-20T06:49:10.031909742Z" level=info msg="CreateContainer within sandbox \"c963b96e7e0fcf260f9598cb716795fa16f2c66503b764ad4d296a9d404ae570\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d\"" Jan 20 06:49:10.033987 containerd[1959]: time="2026-01-20T06:49:10.033948730Z" level=info msg="StartContainer for \"859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d\"" Jan 20 06:49:10.035166 containerd[1959]: time="2026-01-20T06:49:10.035126130Z" level=info msg="connecting to shim 859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d" address="unix:///run/containerd/s/2555ebd46920348988ad1c8719ef992dbebaee74a9b40eea9b506cb3160b2a2c" protocol=ttrpc version=3 Jan 20 06:49:10.061715 systemd[1]: Started cri-containerd-859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d.scope - libcontainer container 859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d. 
Jan 20 06:49:10.100021 containerd[1959]: time="2026-01-20T06:49:10.099969698Z" level=info msg="StartContainer for \"859af7a6ccdf0e35c5cf3954b33490fda95fc515c7f6b48c6d9bfeef2de9605d\" returns successfully" Jan 20 06:49:11.179408 (udev-worker)[4006]: Network interface NamePolicy= disabled on kernel command line. Jan 20 06:49:11.194843 systemd-networkd[1540]: flannel.1: Link UP Jan 20 06:49:11.196417 systemd-networkd[1540]: flannel.1: Gained carrier Jan 20 06:49:12.407715 systemd-networkd[1540]: flannel.1: Gained IPv6LL Jan 20 06:49:14.980743 ntpd[1908]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 20 06:49:14.981252 ntpd[1908]: 20 Jan 06:49:14 ntpd[1908]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 20 06:49:14.981252 ntpd[1908]: 20 Jan 06:49:14 ntpd[1908]: Listen normally on 7 flannel.1 [fe80::f88e:81ff:fedd:b3ea%4]:123 Jan 20 06:49:14.980802 ntpd[1908]: Listen normally on 7 flannel.1 [fe80::f88e:81ff:fedd:b3ea%4]:123 Jan 20 06:49:21.801801 containerd[1959]: time="2026-01-20T06:49:21.801743639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkdt5,Uid:0352cc48-969b-4108-88a6-c435cc11942c,Namespace:kube-system,Attempt:0,}" Jan 20 06:49:22.047929 systemd-networkd[1540]: cni0: Link UP Jan 20 06:49:22.047937 systemd-networkd[1540]: cni0: Gained carrier Jan 20 06:49:22.053344 (udev-worker)[4147]: Network interface NamePolicy= disabled on kernel command line. Jan 20 06:49:22.053475 systemd-networkd[1540]: cni0: Lost carrier Jan 20 06:49:22.089789 kernel: cni0: port 1(veth08f01dbf) entered blocking state Jan 20 06:49:22.089979 kernel: cni0: port 1(veth08f01dbf) entered disabled state Jan 20 06:49:22.089034 systemd-networkd[1540]: veth08f01dbf: Link UP Jan 20 06:49:22.091539 kernel: veth08f01dbf: entered allmulticast mode Jan 20 06:49:22.093466 kernel: veth08f01dbf: entered promiscuous mode Jan 20 06:49:22.106791 kernel: cni0: port 1(veth08f01dbf) entered blocking state Jan 20 06:49:22.106901 kernel: cni0: port 1(veth08f01dbf) entered forwarding state Jan 20 06:49:22.107163 systemd-networkd[1540]: veth08f01dbf: Gained carrier Jan 20 06:49:22.107624 systemd-networkd[1540]: cni0: Gained carrier Jan 20 06:49:22.111860 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Jan 20 06:49:22.111860 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 20 06:49:22.150025 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-20T06:49:22.149966878Z" level=info msg="connecting to shim 64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec" address="unix:///run/containerd/s/1ab534ed32adff0cc1218d847e411e668cd0e8e65bbf3ac495d7b3bcc60dc3c6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:49:22.186802 systemd[1]: Started cri-containerd-64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec.scope - libcontainer container 
64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec. Jan 20 06:49:22.241739 containerd[1959]: time="2026-01-20T06:49:22.241691908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkdt5,Uid:0352cc48-969b-4108-88a6-c435cc11942c,Namespace:kube-system,Attempt:0,} returns sandbox id \"64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec\"" Jan 20 06:49:22.244689 containerd[1959]: time="2026-01-20T06:49:22.244647988Z" level=info msg="CreateContainer within sandbox \"64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:49:22.279945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778573475.mount: Deactivated successfully. Jan 20 06:49:22.280701 containerd[1959]: time="2026-01-20T06:49:22.280666565Z" level=info msg="Container 1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:22.299519 containerd[1959]: time="2026-01-20T06:49:22.299475164Z" level=info msg="CreateContainer within sandbox \"64bc68253ea824db208dafe63474b3677b7bcdf2f4c789cfd33160e3f21de4ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee\"" Jan 20 06:49:22.300114 containerd[1959]: time="2026-01-20T06:49:22.300077406Z" level=info msg="StartContainer for \"1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee\"" Jan 20 06:49:22.302232 containerd[1959]: time="2026-01-20T06:49:22.302204482Z" level=info msg="connecting to shim 1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee" address="unix:///run/containerd/s/1ab534ed32adff0cc1218d847e411e668cd0e8e65bbf3ac495d7b3bcc60dc3c6" protocol=ttrpc version=3 Jan 20 06:49:22.327705 systemd[1]: Started cri-containerd-1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee.scope - libcontainer container 1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee. Jan 20 06:49:22.362650 containerd[1959]: time="2026-01-20T06:49:22.362605619Z" level=info msg="StartContainer for \"1b1d51a94ee76a22cf925413c71e5d78a4bf46a1e30c285a5458613233385fee\" returns successfully" Jan 20 06:49:23.037789 kubelet[3222]: I0120 06:49:23.037053 3222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rdlh5" podStartSLOduration=17.24395974 podStartE2EDuration="23.037034486s" podCreationTimestamp="2026-01-20 06:49:00 +0000 UTC" firstStartedPulling="2026-01-20 06:49:02.888865305 +0000 UTC m=+8.294762299" lastFinishedPulling="2026-01-20 06:49:08.681940058 +0000 UTC m=+14.087837045" observedRunningTime="2026-01-20 06:49:11.01151745 +0000 UTC m=+16.417414449" watchObservedRunningTime="2026-01-20 06:49:23.037034486 +0000 UTC m=+28.442931495" Jan 20 06:49:23.353553 systemd-networkd[1540]: cni0: Gained IPv6LL Jan 20 06:49:23.543767 systemd-networkd[1540]: veth08f01dbf: Gained IPv6LL Jan 20 06:49:23.801468 containerd[1959]: time="2026-01-20T06:49:23.801392477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frcmg,Uid:fc46eef4-230b-4731-90a1-2d59cb57efde,Namespace:kube-system,Attempt:0,}" Jan 20 06:49:23.828628 (udev-worker)[4152]: Network interface NamePolicy= disabled on kernel command line. 
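The RunPodSandbox failures for both coredns pods at 06:49:09, and their success on retry once the kube-flannel container is running (06:49:22 above and 06:49:24 below), come down to /run/flannel/subnet.env: the flannel CNI plugin reads that file, flanneld only writes it after it has acquired the node's subnet, and the "run" hostPath mounted at 06:49:00 is what exposes it to the pod. The file is a plain KEY=VALUE list (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ). A reading sketch, not the plugin's actual loadFlannelSubnetEnv:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // loadSubnetEnv reads the KEY=VALUE pairs flanneld writes for its CNI plugin.
    // When the file does not exist yet, the error is the same
    // "no such file or directory" reported in the sandbox failures above.
    func loadSubnetEnv(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        env := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                env[k] = v
            }
        }
        return env, sc.Err()
    }

    func main() {
        env, err := loadSubnetEnv("/run/flannel/subnet.env")
        if err != nil {
            fmt.Println("loadSubnetEnv failed:", err)
            return
        }
        fmt.Println("FLANNEL_SUBNET =", env["FLANNEL_SUBNET"])
    }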
Jan 20 06:49:23.833468 kernel: cni0: port 2(veth20a4acc6) entered blocking state Jan 20 06:49:23.833564 kernel: cni0: port 2(veth20a4acc6) entered disabled state Jan 20 06:49:23.833158 systemd-networkd[1540]: veth20a4acc6: Link UP Jan 20 06:49:23.835562 kernel: veth20a4acc6: entered allmulticast mode Jan 20 06:49:23.835647 kernel: veth20a4acc6: entered promiscuous mode Jan 20 06:49:23.865613 kernel: cni0: port 2(veth20a4acc6) entered blocking state Jan 20 06:49:23.865713 kernel: cni0: port 2(veth20a4acc6) entered forwarding state Jan 20 06:49:23.866579 systemd-networkd[1540]: veth20a4acc6: Gained carrier Jan 20 06:49:23.869638 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c678), "name":"cbr0", "type":"bridge"} Jan 20 06:49:23.869638 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 20 06:49:23.911688 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-20T06:49:23.911613299Z" level=info msg="connecting to shim 992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e" address="unix:///run/containerd/s/583ca5837d5514f4ee895364c8405cf93902b889e81c7cbd177d5c4ed104ab6f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:49:23.954000 systemd[1]: Started cri-containerd-992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e.scope - libcontainer container 992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e. 
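The two delegateAdd dumps (06:49:22 and 06:49:23) are the netconf the flannel plugin hands to the bridge plugin: a bridge named cbr0 with host-local IPAM over the node's 192.168.0.0/24, isDefaultGateway set, an MTU of 8951, and a route whose Go-rendered mask net.IPMask{0xff, 0xff, 0x80, 0x0} is simply 192.168.0.0/17 in byte form. The sketch below reproduces the route string and the shape of that JSON with values copied from the log; it is an illustration, not flannel's own code:

    package main

    import (
        "encoding/json"
        "fmt"
        "net"
        "os"
    )

    func main() {
        // net.IPMask{0xff, 0xff, 0x80, 0x00} from the dumps above is a /17.
        route := net.IPNet{
            IP:   net.IPv4(192, 168, 0, 0),
            Mask: net.IPMask{0xff, 0xff, 0x80, 0x00},
        }
        fmt.Println("route dst:", route.String()) // 192.168.0.0/17

        // Shape of the delegate netconf sent to the bridge plugin,
        // with the field values taken from the log lines above.
        delegate := map[string]any{
            "cniVersion":       "0.3.1",
            "name":             "cbr0",
            "type":             "bridge",
            "isGateway":        true,
            "isDefaultGateway": true,
            "hairpinMode":      true,
            "ipMasq":           false,
            "mtu":              8951,
            "ipam": map[string]any{
                "type":   "host-local",
                "ranges": [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
                "routes": []map[string]string{{"dst": route.String()}},
            },
        }
        if err := json.NewEncoder(os.Stdout).Encode(delegate); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }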
Jan 20 06:49:24.006749 containerd[1959]: time="2026-01-20T06:49:24.006704427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frcmg,Uid:fc46eef4-230b-4731-90a1-2d59cb57efde,Namespace:kube-system,Attempt:0,} returns sandbox id \"992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e\"" Jan 20 06:49:24.010258 containerd[1959]: time="2026-01-20T06:49:24.010192072Z" level=info msg="CreateContainer within sandbox \"992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:49:24.022333 containerd[1959]: time="2026-01-20T06:49:24.021942471Z" level=info msg="Container 8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:49:24.046314 containerd[1959]: time="2026-01-20T06:49:24.046276038Z" level=info msg="CreateContainer within sandbox \"992a65570c5ccda3fca0275b911e74cdc59d1d3970dd0daaf1a16ee81600c38e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52\"" Jan 20 06:49:24.047232 containerd[1959]: time="2026-01-20T06:49:24.047203169Z" level=info msg="StartContainer for \"8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52\"" Jan 20 06:49:24.048659 containerd[1959]: time="2026-01-20T06:49:24.048604882Z" level=info msg="connecting to shim 8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52" address="unix:///run/containerd/s/583ca5837d5514f4ee895364c8405cf93902b889e81c7cbd177d5c4ed104ab6f" protocol=ttrpc version=3 Jan 20 06:49:24.080814 systemd[1]: Started cri-containerd-8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52.scope - libcontainer container 8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52. 
Jan 20 06:49:24.122461 containerd[1959]: time="2026-01-20T06:49:24.122098925Z" level=info msg="StartContainer for \"8b4b1cd02c34796c9e63204bde047197f3f78a14dc849e866a6909a816cbcc52\" returns successfully" Jan 20 06:49:25.050845 kubelet[3222]: I0120 06:49:25.050401 3222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-frcmg" podStartSLOduration=25.050380349 podStartE2EDuration="25.050380349s" podCreationTimestamp="2026-01-20 06:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:49:25.049850458 +0000 UTC m=+30.455747482" watchObservedRunningTime="2026-01-20 06:49:25.050380349 +0000 UTC m=+30.456277360" Jan 20 06:49:25.051996 kubelet[3222]: I0120 06:49:25.050985 3222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bkdt5" podStartSLOduration=25.050969094 podStartE2EDuration="25.050969094s" podCreationTimestamp="2026-01-20 06:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:49:23.038978274 +0000 UTC m=+28.444875284" watchObservedRunningTime="2026-01-20 06:49:25.050969094 +0000 UTC m=+30.456866103" Jan 20 06:49:25.399734 systemd-networkd[1540]: veth20a4acc6: Gained IPv6LL Jan 20 06:49:27.980757 ntpd[1908]: Listen normally on 8 cni0 192.168.0.1:123 Jan 20 06:49:27.981131 ntpd[1908]: 20 Jan 06:49:27 ntpd[1908]: Listen normally on 8 cni0 192.168.0.1:123 Jan 20 06:49:27.981131 ntpd[1908]: 20 Jan 06:49:27 ntpd[1908]: Listen normally on 9 cni0 [fe80::a06c:40ff:fee4:e012%5]:123 Jan 20 06:49:27.981131 ntpd[1908]: 20 Jan 06:49:27 ntpd[1908]: Listen normally on 10 veth08f01dbf [fe80::9cdb:c1ff:fe26:cc4a%6]:123 Jan 20 06:49:27.981131 ntpd[1908]: 20 Jan 06:49:27 ntpd[1908]: Listen normally on 11 veth20a4acc6 [fe80::542a:98ff:fe05:39cc%7]:123 Jan 20 06:49:27.980815 ntpd[1908]: Listen normally on 9 cni0 [fe80::a06c:40ff:fee4:e012%5]:123 Jan 20 06:49:27.980843 ntpd[1908]: Listen normally on 10 veth08f01dbf [fe80::9cdb:c1ff:fe26:cc4a%6]:123 Jan 20 06:49:27.980864 ntpd[1908]: Listen normally on 11 veth20a4acc6 [fe80::542a:98ff:fe05:39cc%7]:123 Jan 20 06:49:44.777360 systemd[1]: Started sshd@5-172.31.26.220:22-68.220.241.50:43882.service - OpenSSH per-connection server daemon (68.220.241.50:43882). Jan 20 06:49:45.233019 sshd[4462]: Accepted publickey for core from 68.220.241.50 port 43882 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:49:45.234629 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:45.240571 systemd-logind[1916]: New session 7 of user core. Jan 20 06:49:45.248723 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 06:49:45.626711 sshd[4466]: Connection closed by 68.220.241.50 port 43882 Jan 20 06:49:45.628606 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:45.632701 systemd[1]: sshd@5-172.31.26.220:22-68.220.241.50:43882.service: Deactivated successfully. Jan 20 06:49:45.635340 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 06:49:45.636610 systemd-logind[1916]: Session 7 logged out. Waiting for processes to exit. Jan 20 06:49:45.639727 systemd-logind[1916]: Removed session 7. Jan 20 06:49:50.716732 systemd[1]: Started sshd@6-172.31.26.220:22-68.220.241.50:43888.service - OpenSSH per-connection server daemon (68.220.241.50:43888). 
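The podStartSLOduration / podStartE2EDuration figures above are essentially the time a pod was observed running minus its podCreationTimestamp (both coredns pods were created at 06:49:00 and observed running around 06:49:23-25), and the m=+30.45... suffixes are Go's monotonic-clock reading, roughly the seconds since the kubelet process started. A quick check of the coredns-frcmg number:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Date(2026, time.January, 20, 6, 49, 0, 0, time.UTC)
        observed := time.Date(2026, time.January, 20, 6, 49, 25, 50380349, time.UTC)
        // Matches the 25.050380349s podStartE2EDuration logged above.
        fmt.Println(observed.Sub(created)) // 25.050380349s
    }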
Jan 20 06:49:51.150245 sshd[4500]: Accepted publickey for core from 68.220.241.50 port 43888 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:49:51.151845 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:51.157196 systemd-logind[1916]: New session 8 of user core. Jan 20 06:49:51.162658 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 06:49:51.459235 sshd[4504]: Connection closed by 68.220.241.50 port 43888 Jan 20 06:49:51.460719 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:51.464924 systemd-logind[1916]: Session 8 logged out. Waiting for processes to exit. Jan 20 06:49:51.465005 systemd[1]: sshd@6-172.31.26.220:22-68.220.241.50:43888.service: Deactivated successfully. Jan 20 06:49:51.467580 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 06:49:51.470229 systemd-logind[1916]: Removed session 8. Jan 20 06:49:56.550558 systemd[1]: Started sshd@7-172.31.26.220:22-68.220.241.50:54630.service - OpenSSH per-connection server daemon (68.220.241.50:54630). Jan 20 06:49:56.981635 sshd[4563]: Accepted publickey for core from 68.220.241.50 port 54630 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:49:56.983102 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:56.989346 systemd-logind[1916]: New session 9 of user core. Jan 20 06:49:56.994682 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 06:49:57.284101 sshd[4567]: Connection closed by 68.220.241.50 port 54630 Jan 20 06:49:57.285614 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:57.289982 systemd[1]: sshd@7-172.31.26.220:22-68.220.241.50:54630.service: Deactivated successfully. Jan 20 06:49:57.291936 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 06:49:57.293518 systemd-logind[1916]: Session 9 logged out. Waiting for processes to exit. Jan 20 06:49:57.294934 systemd-logind[1916]: Removed session 9. Jan 20 06:49:57.372984 systemd[1]: Started sshd@8-172.31.26.220:22-68.220.241.50:54636.service - OpenSSH per-connection server daemon (68.220.241.50:54636). Jan 20 06:49:57.810327 sshd[4580]: Accepted publickey for core from 68.220.241.50 port 54636 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:49:57.811904 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:57.817391 systemd-logind[1916]: New session 10 of user core. Jan 20 06:49:57.829824 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 06:49:58.197312 sshd[4584]: Connection closed by 68.220.241.50 port 54636 Jan 20 06:49:58.198689 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:58.203964 systemd-logind[1916]: Session 10 logged out. Waiting for processes to exit. Jan 20 06:49:58.205190 systemd[1]: sshd@8-172.31.26.220:22-68.220.241.50:54636.service: Deactivated successfully. Jan 20 06:49:58.207653 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 06:49:58.210415 systemd-logind[1916]: Removed session 10. Jan 20 06:49:58.302503 systemd[1]: Started sshd@9-172.31.26.220:22-68.220.241.50:54640.service - OpenSSH per-connection server daemon (68.220.241.50:54640). 
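The SHA256:2uqNLnq/... in each "Accepted publickey" line is the standard OpenSSH-style fingerprint: the unpadded base64 of a SHA-256 hash over the client's public-key blob. A small sketch that produces one in the same format; it generates a throwaway ed25519 key rather than reusing the RSA key from this log, and golang.org/x/crypto/ssh is an external module:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // Prints "SHA256:<unpadded base64>", the same format sshd logs above.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }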
Jan 20 06:49:58.797498 sshd[4597]: Accepted publickey for core from 68.220.241.50 port 54640 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:49:58.799211 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:58.806512 systemd-logind[1916]: New session 11 of user core. Jan 20 06:49:58.811700 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 06:49:59.139406 sshd[4601]: Connection closed by 68.220.241.50 port 54640 Jan 20 06:49:59.140620 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:59.150683 systemd[1]: sshd@9-172.31.26.220:22-68.220.241.50:54640.service: Deactivated successfully. Jan 20 06:49:59.165150 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 06:49:59.177068 systemd-logind[1916]: Session 11 logged out. Waiting for processes to exit. Jan 20 06:49:59.185941 systemd-logind[1916]: Removed session 11. Jan 20 06:50:04.221918 systemd[1]: Started sshd@10-172.31.26.220:22-68.220.241.50:51700.service - OpenSSH per-connection server daemon (68.220.241.50:51700). Jan 20 06:50:04.661469 sshd[4635]: Accepted publickey for core from 68.220.241.50 port 51700 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:04.663098 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:04.670153 systemd-logind[1916]: New session 12 of user core. Jan 20 06:50:04.678713 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 06:50:04.991991 sshd[4639]: Connection closed by 68.220.241.50 port 51700 Jan 20 06:50:04.992676 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:04.998117 systemd[1]: sshd@10-172.31.26.220:22-68.220.241.50:51700.service: Deactivated successfully. Jan 20 06:50:05.002020 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 06:50:05.006729 systemd-logind[1916]: Session 12 logged out. Waiting for processes to exit. Jan 20 06:50:05.010522 systemd-logind[1916]: Removed session 12. Jan 20 06:50:05.096151 systemd[1]: Started sshd@11-172.31.26.220:22-68.220.241.50:51714.service - OpenSSH per-connection server daemon (68.220.241.50:51714). Jan 20 06:50:05.570020 sshd[4651]: Accepted publickey for core from 68.220.241.50 port 51714 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:05.571862 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:05.588324 systemd-logind[1916]: New session 13 of user core. Jan 20 06:50:05.592869 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 06:50:07.142570 sshd[4655]: Connection closed by 68.220.241.50 port 51714 Jan 20 06:50:07.143231 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:07.149213 systemd[1]: sshd@11-172.31.26.220:22-68.220.241.50:51714.service: Deactivated successfully. Jan 20 06:50:07.149540 systemd-logind[1916]: Session 13 logged out. Waiting for processes to exit. Jan 20 06:50:07.152567 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 06:50:07.156503 systemd-logind[1916]: Removed session 13. Jan 20 06:50:07.243804 systemd[1]: Started sshd@12-172.31.26.220:22-68.220.241.50:51722.service - OpenSSH per-connection server daemon (68.220.241.50:51722). 
Jan 20 06:50:07.709235 sshd[4688]: Accepted publickey for core from 68.220.241.50 port 51722 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:07.710537 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:07.715733 systemd-logind[1916]: New session 14 of user core. Jan 20 06:50:07.720720 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 06:50:08.683852 sshd[4692]: Connection closed by 68.220.241.50 port 51722 Jan 20 06:50:08.684817 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:08.691352 systemd[1]: sshd@12-172.31.26.220:22-68.220.241.50:51722.service: Deactivated successfully. Jan 20 06:50:08.694382 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 06:50:08.697316 systemd-logind[1916]: Session 14 logged out. Waiting for processes to exit. Jan 20 06:50:08.699028 systemd-logind[1916]: Removed session 14. Jan 20 06:50:08.773036 systemd[1]: Started sshd@13-172.31.26.220:22-68.220.241.50:51728.service - OpenSSH per-connection server daemon (68.220.241.50:51728). Jan 20 06:50:09.207389 sshd[4709]: Accepted publickey for core from 68.220.241.50 port 51728 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:09.209605 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:09.216276 systemd-logind[1916]: New session 15 of user core. Jan 20 06:50:09.221710 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 06:50:09.661139 sshd[4713]: Connection closed by 68.220.241.50 port 51728 Jan 20 06:50:09.662679 sshd-session[4709]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:09.667605 systemd[1]: sshd@13-172.31.26.220:22-68.220.241.50:51728.service: Deactivated successfully. Jan 20 06:50:09.669954 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 06:50:09.671421 systemd-logind[1916]: Session 15 logged out. Waiting for processes to exit. Jan 20 06:50:09.672916 systemd-logind[1916]: Removed session 15. Jan 20 06:50:09.751877 systemd[1]: Started sshd@14-172.31.26.220:22-68.220.241.50:51740.service - OpenSSH per-connection server daemon (68.220.241.50:51740). Jan 20 06:50:10.182950 sshd[4722]: Accepted publickey for core from 68.220.241.50 port 51740 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:10.184712 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:10.189837 systemd-logind[1916]: New session 16 of user core. Jan 20 06:50:10.199729 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 06:50:10.476549 sshd[4726]: Connection closed by 68.220.241.50 port 51740 Jan 20 06:50:10.478074 sshd-session[4722]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:10.483245 systemd[1]: sshd@14-172.31.26.220:22-68.220.241.50:51740.service: Deactivated successfully. Jan 20 06:50:10.486783 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 06:50:10.488538 systemd-logind[1916]: Session 16 logged out. Waiting for processes to exit. Jan 20 06:50:10.491031 systemd-logind[1916]: Removed session 16. Jan 20 06:50:15.567898 systemd[1]: Started sshd@15-172.31.26.220:22-68.220.241.50:44068.service - OpenSSH per-connection server daemon (68.220.241.50:44068). 
Jan 20 06:50:16.002751 sshd[4762]: Accepted publickey for core from 68.220.241.50 port 44068 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:16.004705 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:16.010966 systemd-logind[1916]: New session 17 of user core. Jan 20 06:50:16.019757 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 06:50:16.295670 sshd[4766]: Connection closed by 68.220.241.50 port 44068 Jan 20 06:50:16.297638 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:16.301313 systemd[1]: sshd@15-172.31.26.220:22-68.220.241.50:44068.service: Deactivated successfully. Jan 20 06:50:16.303085 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 06:50:16.305691 systemd-logind[1916]: Session 17 logged out. Waiting for processes to exit. Jan 20 06:50:16.306722 systemd-logind[1916]: Removed session 17. Jan 20 06:50:21.390007 systemd[1]: Started sshd@16-172.31.26.220:22-68.220.241.50:44070.service - OpenSSH per-connection server daemon (68.220.241.50:44070). Jan 20 06:50:21.842002 sshd[4800]: Accepted publickey for core from 68.220.241.50 port 44070 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:21.842970 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:21.848617 systemd-logind[1916]: New session 18 of user core. Jan 20 06:50:21.854777 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 06:50:22.142270 sshd[4825]: Connection closed by 68.220.241.50 port 44070 Jan 20 06:50:22.143617 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:22.148270 systemd[1]: sshd@16-172.31.26.220:22-68.220.241.50:44070.service: Deactivated successfully. Jan 20 06:50:22.150315 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 06:50:22.151640 systemd-logind[1916]: Session 18 logged out. Waiting for processes to exit. Jan 20 06:50:22.153951 systemd-logind[1916]: Removed session 18. Jan 20 06:50:27.230314 systemd[1]: Started sshd@17-172.31.26.220:22-68.220.241.50:54616.service - OpenSSH per-connection server daemon (68.220.241.50:54616). Jan 20 06:50:27.669226 sshd[4858]: Accepted publickey for core from 68.220.241.50 port 54616 ssh2: RSA SHA256:2uqNLnq/JjyoPmWZkUGklWzLvCUPr/MsA/2B6wP9M+o Jan 20 06:50:27.670821 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:27.678823 systemd-logind[1916]: New session 19 of user core. Jan 20 06:50:27.683701 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 06:50:27.972620 sshd[4862]: Connection closed by 68.220.241.50 port 54616 Jan 20 06:50:27.974580 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:27.978133 systemd[1]: sshd@17-172.31.26.220:22-68.220.241.50:54616.service: Deactivated successfully. Jan 20 06:50:27.980457 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 06:50:27.982080 systemd-logind[1916]: Session 19 logged out. Waiting for processes to exit. Jan 20 06:50:27.983856 systemd-logind[1916]: Removed session 19. Jan 20 06:50:41.606869 systemd[1]: cri-containerd-c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6.scope: Deactivated successfully. 
Jan 20 06:50:41.608364 systemd[1]: cri-containerd-c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6.scope: Consumed 1.996s CPU time, 102.8M memory peak, 49.5M read from disk.
Jan 20 06:50:41.611748 containerd[1959]: time="2026-01-20T06:50:41.611706895Z" level=info msg="received container exit event container_id:\"c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6\" id:\"c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6\" pid:3047 exit_status:1 exited_at:{seconds:1768891841 nanos:609119158}"
Jan 20 06:50:41.641072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6-rootfs.mount: Deactivated successfully.
Jan 20 06:50:42.254816 kubelet[3222]: I0120 06:50:42.254753 3222 scope.go:117] "RemoveContainer" containerID="c0c63a504cbb5542ae7e2a31167fe84dda37101ed0433eb2a0fd7e6ea0527eb6"
Jan 20 06:50:42.257701 containerd[1959]: time="2026-01-20T06:50:42.257659407Z" level=info msg="CreateContainer within sandbox \"8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 20 06:50:42.275742 containerd[1959]: time="2026-01-20T06:50:42.274969504Z" level=info msg="Container 5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40: CDI devices from CRI Config.CDIDevices: []"
Jan 20 06:50:42.298532 containerd[1959]: time="2026-01-20T06:50:42.298490924Z" level=info msg="CreateContainer within sandbox \"8b80b6d302d173f1e18bd9ab39a371e423890e697efe3d62faed87cd28a78c1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40\""
Jan 20 06:50:42.300469 containerd[1959]: time="2026-01-20T06:50:42.298958679Z" level=info msg="StartContainer for \"5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40\""
Jan 20 06:50:42.300469 containerd[1959]: time="2026-01-20T06:50:42.300107030Z" level=info msg="connecting to shim 5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40" address="unix:///run/containerd/s/079f8a1b885462de044ebed469e75d0895d07225db142d0c1db8049091e6301a" protocol=ttrpc version=3
Jan 20 06:50:42.319672 systemd[1]: Started cri-containerd-5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40.scope - libcontainer container 5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40.
Jan 20 06:50:42.395300 containerd[1959]: time="2026-01-20T06:50:42.395257038Z" level=info msg="StartContainer for \"5a3abd4fd34a4b742311b0c88816512272ce639340ed0ca6b937451df4a22f40\" returns successfully"
Jan 20 06:50:46.596587 kubelet[3222]: E0120 06:50:46.596522 3222 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-26-220)"
Jan 20 06:50:46.625169 systemd[1]: cri-containerd-e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a.scope: Deactivated successfully.
Jan 20 06:50:46.625520 systemd[1]: cri-containerd-e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a.scope: Consumed 1.425s CPU time, 25.1M memory peak, 4.9M read from disk.
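Editor's note: the entries above show the kubelet handling a control-plane container crash on this node: the kube-controller-manager container exited with status 1, the kubelet removed it and asked containerd to create a replacement inside the same pod sandbox (note "Attempt:1"), and the kube-scheduler container has just exited the same way. As a minimal, hedged sketch (not part of the log), the restart count can be confirmed over the CRI the same way the kubelet sees it; the socket path /run/containerd/containerd.sock and the use of the k8s.io/cri-api Go client are assumptions here, not taken from the journal.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint; containerd typically listens on this unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Attempt > 0 means the kubelet has recreated this container at least once,
		// matching the "Attempt:1" seen in the CreateContainer entries above.
		fmt.Printf("%-32s attempt=%d state=%s\n",
			c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}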
Jan 20 06:50:46.627731 containerd[1959]: time="2026-01-20T06:50:46.627663576Z" level=info msg="received container exit event container_id:\"e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a\" id:\"e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a\" pid:3061 exit_status:1 exited_at:{seconds:1768891846 nanos:627293404}"
Jan 20 06:50:46.656579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a-rootfs.mount: Deactivated successfully.
Jan 20 06:50:47.268867 kubelet[3222]: I0120 06:50:47.268840 3222 scope.go:117] "RemoveContainer" containerID="e168ad227643f299340902b7267d27b0c4b5f8757d5c8ef7dd056a8073c08d2a"
Jan 20 06:50:47.270314 containerd[1959]: time="2026-01-20T06:50:47.270258785Z" level=info msg="CreateContainer within sandbox \"7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 06:50:47.312469 containerd[1959]: time="2026-01-20T06:50:47.311137011Z" level=info msg="Container e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37: CDI devices from CRI Config.CDIDevices: []"
Jan 20 06:50:47.314488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986086603.mount: Deactivated successfully.
Jan 20 06:50:47.328010 containerd[1959]: time="2026-01-20T06:50:47.327922454Z" level=info msg="CreateContainer within sandbox \"7341b46686ea4470323755b1a67848a75f66a16d830adcd3f2fce07399de1074\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37\""
Jan 20 06:50:47.328700 containerd[1959]: time="2026-01-20T06:50:47.328646757Z" level=info msg="StartContainer for \"e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37\""
Jan 20 06:50:47.329911 containerd[1959]: time="2026-01-20T06:50:47.329872068Z" level=info msg="connecting to shim e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37" address="unix:///run/containerd/s/e8a16de0c54042b3d2a411aa220af59ba33de8190973e5661fcf0a8ed2b2f024" protocol=ttrpc version=3
Jan 20 06:50:47.356725 systemd[1]: Started cri-containerd-e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37.scope - libcontainer container e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37.
Jan 20 06:50:47.427272 containerd[1959]: time="2026-01-20T06:50:47.427233576Z" level=info msg="StartContainer for \"e48a61c95b4a5a92654159eb1648dca9baa6a1cc42193378acbaf95f024b7c37\" returns successfully"
Jan 20 06:50:56.597516 kubelet[3222]: E0120 06:50:56.597150 3222 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-220?timeout=10s\": context deadline exceeded"
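Editor's note: the closing "Failed to update lease" entries show the kubelet's node heartbeat failing. The kubelet renews a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace roughly every 10 seconds (the default lease duration is 40s, renewed at a quarter of that), and here the PUT to https://172.31.26.220:6443 times out while the control-plane containers are restarting. A minimal, hedged client-go sketch for checking how stale that lease is follows; the kubeconfig path is an assumption, while the node name ip-172-31-26-220 is taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; any config with read access to kube-node-lease works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Node name taken from the journal above.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "ip-172-31-26-220", metav1.GetOptions{})
	if err != nil {
		// A timeout here mirrors the "Failed to update lease" errors in the log.
		panic(err)
	}
	if lease.Spec.RenewTime == nil {
		fmt.Println("lease has no renewTime yet")
		return
	}
	age := time.Since(lease.Spec.RenewTime.Time)
	fmt.Printf("lease %s last renewed %s ago (a healthy kubelet renews about every 10s)\n",
		lease.Name, age.Round(time.Second))
}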