Jan 21 01:00:18.169353 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 22:19:08 -00 2026 Jan 21 01:00:18.169378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 01:00:18.169391 kernel: BIOS-provided physical RAM map: Jan 21 01:00:18.169398 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 21 01:00:18.169404 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 21 01:00:18.169411 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 21 01:00:18.169419 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 21 01:00:18.169427 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 21 01:00:18.169434 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 21 01:00:18.169441 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 21 01:00:18.169450 kernel: NX (Execute Disable) protection: active Jan 21 01:00:18.169457 kernel: APIC: Static calls initialized Jan 21 01:00:18.169464 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 21 01:00:18.169472 kernel: extended physical RAM map: Jan 21 01:00:18.169481 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 21 01:00:18.169491 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 21 01:00:18.169499 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jan 21 01:00:18.169507 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 21 01:00:18.169515 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 21 01:00:18.169523 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 21 01:00:18.169531 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 21 01:00:18.169539 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 21 01:00:18.169547 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 21 01:00:18.169554 kernel: efi: EFI v2.7 by EDK II Jan 21 01:00:18.169562 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 21 01:00:18.169573 kernel: secureboot: Secure boot disabled Jan 21 01:00:18.169580 kernel: SMBIOS 2.7 present. 
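Note: the command line recorded above carries the parameters that steer the rest of this boot (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, the dm-verity usrhash, net.ifnames=0, and so on). As a minimal illustration, not something taken from this log, the Python sketch below splits a kernel command line as read from /proc/cmdline into key/value pairs so individual parameters can be inspected; quoting of values that contain spaces is ignored for simplicity.

    # Minimal sketch: parse a kernel command line like the one logged above.
    # Later duplicates of a key simply overwrite earlier ones here.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                params[key] = value
            else:
                params[token] = True  # bare flags without a value
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            params = parse_cmdline(f.read().strip())
        print(params.get("root"), params.get("verity.usrhash"))

Run on the machine that produced this log, it would report root=LABEL=ROOT and the verity.usrhash value shown above.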
Jan 21 01:00:18.169588 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 21 01:00:18.169596 kernel: DMI: Memory slots populated: 1/1 Jan 21 01:00:18.169604 kernel: Hypervisor detected: KVM Jan 21 01:00:18.169612 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 21 01:00:18.169620 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 21 01:00:18.169627 kernel: kvm-clock: using sched offset of 6483540373 cycles Jan 21 01:00:18.169636 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 21 01:00:18.169645 kernel: tsc: Detected 2499.996 MHz processor Jan 21 01:00:18.169655 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 21 01:00:18.169664 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 21 01:00:18.169672 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 21 01:00:18.169680 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 21 01:00:18.169689 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 21 01:00:18.169700 kernel: Using GB pages for direct mapping Jan 21 01:00:18.169711 kernel: ACPI: Early table checksum verification disabled Jan 21 01:00:18.169720 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 21 01:00:18.169729 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 21 01:00:18.169737 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 21 01:00:18.169746 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 21 01:00:18.169755 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 21 01:00:18.169765 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 21 01:00:18.169774 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 21 01:00:18.169783 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 21 01:00:18.169791 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 21 01:00:18.169815 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 21 01:00:18.169824 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 21 01:00:18.169833 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 21 01:00:18.169844 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 21 01:00:18.169852 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 21 01:00:18.169861 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 21 01:00:18.169870 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 21 01:00:18.169878 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 21 01:00:18.169887 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 21 01:00:18.169931 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 21 01:00:18.169943 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 21 01:00:18.169952 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 21 01:00:18.169960 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 21 01:00:18.169969 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Jan 21 01:00:18.169977 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jan 21 01:00:18.170006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 21 01:00:18.170033 kernel: NUMA: Initialized distance table, cnt=1 Jan 21 01:00:18.170044 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Jan 21 01:00:18.170073 kernel: Zone ranges: Jan 21 01:00:18.170082 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 21 01:00:18.170090 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 21 01:00:18.170099 kernel: Normal empty Jan 21 01:00:18.170108 kernel: Device empty Jan 21 01:00:18.170116 kernel: Movable zone start for each node Jan 21 01:00:18.170125 kernel: Early memory node ranges Jan 21 01:00:18.170136 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 21 01:00:18.170145 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 21 01:00:18.170153 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 21 01:00:18.170162 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 21 01:00:18.170171 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 21 01:00:18.170179 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 21 01:00:18.170188 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 21 01:00:18.170197 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 21 01:00:18.170228 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 21 01:00:18.170237 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 21 01:00:18.170246 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 21 01:00:18.170255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 21 01:00:18.170264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 21 01:00:18.170272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 21 01:00:18.170281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 21 01:00:18.170292 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 21 01:00:18.170301 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 21 01:00:18.170310 kernel: TSC deadline timer available Jan 21 01:00:18.170319 kernel: CPU topo: Max. logical packages: 1 Jan 21 01:00:18.170327 kernel: CPU topo: Max. logical dies: 1 Jan 21 01:00:18.170336 kernel: CPU topo: Max. dies per package: 1 Jan 21 01:00:18.170344 kernel: CPU topo: Max. threads per core: 2 Jan 21 01:00:18.170355 kernel: CPU topo: Num. cores per package: 1 Jan 21 01:00:18.170364 kernel: CPU topo: Num. 
threads per package: 2 Jan 21 01:00:18.170373 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 21 01:00:18.170381 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 21 01:00:18.170390 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 21 01:00:18.170399 kernel: Booting paravirtualized kernel on KVM Jan 21 01:00:18.170408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 21 01:00:18.170416 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 21 01:00:18.170428 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 21 01:00:18.170436 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 21 01:00:18.170445 kernel: pcpu-alloc: [0] 0 1 Jan 21 01:00:18.170454 kernel: kvm-guest: PV spinlocks enabled Jan 21 01:00:18.170463 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 21 01:00:18.170473 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 01:00:18.170485 kernel: random: crng init done Jan 21 01:00:18.170494 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 21 01:00:18.170503 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 21 01:00:18.170512 kernel: Fallback order for Node 0: 0 Jan 21 01:00:18.170520 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jan 21 01:00:18.170529 kernel: Policy zone: DMA32 Jan 21 01:00:18.170549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 21 01:00:18.170558 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 21 01:00:18.170567 kernel: Kernel/User page tables isolation: enabled Jan 21 01:00:18.170576 kernel: ftrace: allocating 40097 entries in 157 pages Jan 21 01:00:18.170588 kernel: ftrace: allocated 157 pages with 5 groups Jan 21 01:00:18.170598 kernel: Dynamic Preempt: voluntary Jan 21 01:00:18.170607 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 21 01:00:18.170617 kernel: rcu: RCU event tracing is enabled. Jan 21 01:00:18.170626 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 21 01:00:18.170636 kernel: Trampoline variant of Tasks RCU enabled. Jan 21 01:00:18.170648 kernel: Rude variant of Tasks RCU enabled. Jan 21 01:00:18.170657 kernel: Tracing variant of Tasks RCU enabled. Jan 21 01:00:18.170666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 21 01:00:18.170675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 21 01:00:18.170684 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 01:00:18.170696 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 01:00:18.170706 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 21 01:00:18.170715 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 21 01:00:18.170725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 21 01:00:18.170734 kernel: Console: colour dummy device 80x25 Jan 21 01:00:18.170743 kernel: printk: legacy console [tty0] enabled Jan 21 01:00:18.170752 kernel: printk: legacy console [ttyS0] enabled Jan 21 01:00:18.170764 kernel: ACPI: Core revision 20240827 Jan 21 01:00:18.170774 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 21 01:00:18.170783 kernel: APIC: Switch to symmetric I/O mode setup Jan 21 01:00:18.170792 kernel: x2apic enabled Jan 21 01:00:18.171021 kernel: APIC: Switched APIC routing to: physical x2apic Jan 21 01:00:18.171033 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 21 01:00:18.171046 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 21 01:00:18.171062 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 21 01:00:18.171071 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 21 01:00:18.171080 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 21 01:00:18.171089 kernel: Spectre V2 : Mitigation: Retpolines Jan 21 01:00:18.171098 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 21 01:00:18.171108 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 21 01:00:18.171117 kernel: RETBleed: Vulnerable Jan 21 01:00:18.171126 kernel: Speculative Store Bypass: Vulnerable Jan 21 01:00:18.171135 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 21 01:00:18.171144 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 21 01:00:18.171155 kernel: GDS: Unknown: Dependent on hypervisor status Jan 21 01:00:18.171164 kernel: active return thunk: its_return_thunk Jan 21 01:00:18.171173 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 21 01:00:18.171182 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 21 01:00:18.171191 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 21 01:00:18.171200 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 21 01:00:18.171209 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 21 01:00:18.171218 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 21 01:00:18.171227 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 21 01:00:18.171236 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 21 01:00:18.171248 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 21 01:00:18.171257 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 21 01:00:18.171266 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 21 01:00:18.171274 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 21 01:00:18.171310 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 21 01:00:18.171319 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 21 01:00:18.171327 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 21 01:00:18.171336 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 21 01:00:18.171345 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 21 
01:00:18.171354 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 21 01:00:18.171363 kernel: Freeing SMP alternatives memory: 32K Jan 21 01:00:18.171375 kernel: pid_max: default: 32768 minimum: 301 Jan 21 01:00:18.171384 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 21 01:00:18.171393 kernel: landlock: Up and running. Jan 21 01:00:18.171462 kernel: SELinux: Initializing. Jan 21 01:00:18.171471 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 21 01:00:18.171480 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 21 01:00:18.171490 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 21 01:00:18.171499 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 21 01:00:18.171508 kernel: signal: max sigframe size: 3632 Jan 21 01:00:18.171518 kernel: rcu: Hierarchical SRCU implementation. Jan 21 01:00:18.171539 kernel: rcu: Max phase no-delay instances is 400. Jan 21 01:00:18.171549 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 21 01:00:18.171559 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 21 01:00:18.171568 kernel: smp: Bringing up secondary CPUs ... Jan 21 01:00:18.171578 kernel: smpboot: x86: Booting SMP configuration: Jan 21 01:00:18.171587 kernel: .... node #0, CPUs: #1 Jan 21 01:00:18.171597 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 21 01:00:18.171610 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 21 01:00:18.171620 kernel: smp: Brought up 1 node, 2 CPUs Jan 21 01:00:18.171629 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 21 01:00:18.171639 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 108804K reserved, 0K cma-reserved) Jan 21 01:00:18.171648 kernel: devtmpfs: initialized Jan 21 01:00:18.171657 kernel: x86/mm: Memory block size: 128MB Jan 21 01:00:18.171669 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 21 01:00:18.171679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 21 01:00:18.171688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 21 01:00:18.171698 kernel: pinctrl core: initialized pinctrl subsystem Jan 21 01:00:18.171707 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 21 01:00:18.171716 kernel: audit: initializing netlink subsys (disabled) Jan 21 01:00:18.171726 kernel: audit: type=2000 audit(1768957214.899:1): state=initialized audit_enabled=0 res=1 Jan 21 01:00:18.171738 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 21 01:00:18.171747 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 21 01:00:18.171757 kernel: cpuidle: using governor menu Jan 21 01:00:18.171766 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 21 01:00:18.171775 kernel: dca service started, version 1.12.1 Jan 21 01:00:18.171785 kernel: PCI: Using configuration type 1 for base access Jan 21 01:00:18.171794 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jan 21 01:00:18.171835 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 21 01:00:18.171844 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 21 01:00:18.171853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 21 01:00:18.171863 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 21 01:00:18.171872 kernel: ACPI: Added _OSI(Module Device) Jan 21 01:00:18.171882 kernel: ACPI: Added _OSI(Processor Device) Jan 21 01:00:18.171891 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 21 01:00:18.171903 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 21 01:00:18.171912 kernel: ACPI: Interpreter enabled Jan 21 01:00:18.171921 kernel: ACPI: PM: (supports S0 S5) Jan 21 01:00:18.171931 kernel: ACPI: Using IOAPIC for interrupt routing Jan 21 01:00:18.171940 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 21 01:00:18.171950 kernel: PCI: Using E820 reservations for host bridge windows Jan 21 01:00:18.171964 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 21 01:00:18.171973 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 21 01:00:18.172181 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 21 01:00:18.172313 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 21 01:00:18.172450 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 21 01:00:18.172462 kernel: acpiphp: Slot [3] registered Jan 21 01:00:18.172472 kernel: acpiphp: Slot [4] registered Jan 21 01:00:18.172486 kernel: acpiphp: Slot [5] registered Jan 21 01:00:18.172495 kernel: acpiphp: Slot [6] registered Jan 21 01:00:18.172505 kernel: acpiphp: Slot [7] registered Jan 21 01:00:18.172514 kernel: acpiphp: Slot [8] registered Jan 21 01:00:18.172523 kernel: acpiphp: Slot [9] registered Jan 21 01:00:18.172532 kernel: acpiphp: Slot [10] registered Jan 21 01:00:18.172541 kernel: acpiphp: Slot [11] registered Jan 21 01:00:18.172551 kernel: acpiphp: Slot [12] registered Jan 21 01:00:18.172562 kernel: acpiphp: Slot [13] registered Jan 21 01:00:18.172571 kernel: acpiphp: Slot [14] registered Jan 21 01:00:18.172581 kernel: acpiphp: Slot [15] registered Jan 21 01:00:18.172590 kernel: acpiphp: Slot [16] registered Jan 21 01:00:18.172599 kernel: acpiphp: Slot [17] registered Jan 21 01:00:18.172608 kernel: acpiphp: Slot [18] registered Jan 21 01:00:18.172618 kernel: acpiphp: Slot [19] registered Jan 21 01:00:18.172630 kernel: acpiphp: Slot [20] registered Jan 21 01:00:18.172639 kernel: acpiphp: Slot [21] registered Jan 21 01:00:18.172649 kernel: acpiphp: Slot [22] registered Jan 21 01:00:18.172658 kernel: acpiphp: Slot [23] registered Jan 21 01:00:18.172667 kernel: acpiphp: Slot [24] registered Jan 21 01:00:18.172676 kernel: acpiphp: Slot [25] registered Jan 21 01:00:18.172685 kernel: acpiphp: Slot [26] registered Jan 21 01:00:18.172695 kernel: acpiphp: Slot [27] registered Jan 21 01:00:18.172706 kernel: acpiphp: Slot [28] registered Jan 21 01:00:18.172716 kernel: acpiphp: Slot [29] registered Jan 21 01:00:18.172725 kernel: acpiphp: Slot [30] registered Jan 21 01:00:18.172735 kernel: acpiphp: Slot [31] registered Jan 21 01:00:18.172744 kernel: PCI host bridge to bus 0000:00 Jan 21 01:00:18.172888 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 21 
01:00:18.173008 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 21 01:00:18.173121 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 21 01:00:18.173239 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 21 01:00:18.173351 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 21 01:00:18.173462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 21 01:00:18.173603 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 21 01:00:18.173741 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 21 01:00:18.173890 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jan 21 01:00:18.174018 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 21 01:00:18.174140 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 21 01:00:18.174261 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 21 01:00:18.174391 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 21 01:00:18.174512 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 21 01:00:18.174634 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 21 01:00:18.174755 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 21 01:00:18.174897 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jan 21 01:00:18.175022 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jan 21 01:00:18.175151 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 21 01:00:18.175274 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 21 01:00:18.175403 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jan 21 01:00:18.175541 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jan 21 01:00:18.175697 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jan 21 01:00:18.175848 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jan 21 01:00:18.175861 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 21 01:00:18.175871 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 21 01:00:18.175881 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 21 01:00:18.175891 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 21 01:00:18.175900 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 21 01:00:18.175910 kernel: iommu: Default domain type: Translated Jan 21 01:00:18.175924 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 21 01:00:18.175933 kernel: efivars: Registered efivars operations Jan 21 01:00:18.175943 kernel: PCI: Using ACPI for IRQ routing Jan 21 01:00:18.175952 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 21 01:00:18.175962 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jan 21 01:00:18.175970 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 21 01:00:18.175979 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 21 01:00:18.176103 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 21 01:00:18.176227 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 21 01:00:18.176350 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 21 01:00:18.176362 kernel: vgaarb: loaded Jan 21 01:00:18.176372 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 21 01:00:18.176381 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 21 01:00:18.176391 kernel: clocksource: Switched to clocksource kvm-clock Jan 21 01:00:18.176401 kernel: VFS: Disk quotas dquot_6.6.0 Jan 21 01:00:18.176413 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 21 01:00:18.176423 kernel: pnp: PnP ACPI init Jan 21 01:00:18.176432 kernel: pnp: PnP ACPI: found 5 devices Jan 21 01:00:18.176442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 21 01:00:18.176451 kernel: NET: Registered PF_INET protocol family Jan 21 01:00:18.176461 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 21 01:00:18.176470 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 21 01:00:18.176482 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 21 01:00:18.176492 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 21 01:00:18.176501 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 21 01:00:18.176511 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 21 01:00:18.176520 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 21 01:00:18.176529 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 21 01:00:18.176539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 21 01:00:18.176551 kernel: NET: Registered PF_XDP protocol family Jan 21 01:00:18.176671 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 21 01:00:18.176786 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 21 01:00:18.176914 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 21 01:00:18.177026 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 21 01:00:18.177136 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 21 01:00:18.177261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 21 01:00:18.177277 kernel: PCI: CLS 0 bytes, default 64 Jan 21 01:00:18.177287 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 21 01:00:18.177297 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 21 01:00:18.177307 kernel: clocksource: Switched to clocksource tsc Jan 21 01:00:18.177316 kernel: Initialise system trusted keyrings Jan 21 01:00:18.177326 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 21 01:00:18.177338 kernel: Key type asymmetric registered Jan 21 01:00:18.177347 kernel: Asymmetric key parser 'x509' registered Jan 21 01:00:18.177357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 21 01:00:18.177366 kernel: io scheduler mq-deadline registered Jan 21 01:00:18.177376 kernel: io scheduler kyber registered Jan 21 01:00:18.177385 kernel: io scheduler bfq registered Jan 21 01:00:18.177395 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 21 01:00:18.177404 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 21 01:00:18.177417 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 21 01:00:18.177426 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 21 01:00:18.177435 kernel: i8042: Warning: Keylock active Jan 21 
01:00:18.177445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 21 01:00:18.177454 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 21 01:00:18.177590 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 21 01:00:18.177714 kernel: rtc_cmos 00:00: registered as rtc0 Jan 21 01:00:18.177852 kernel: rtc_cmos 00:00: setting system clock to 2026-01-21T01:00:15 UTC (1768957215) Jan 21 01:00:18.177971 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 21 01:00:18.178000 kernel: intel_pstate: CPU model not supported Jan 21 01:00:18.178012 kernel: efifb: probing for efifb Jan 21 01:00:18.178022 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jan 21 01:00:18.178032 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 21 01:00:18.178045 kernel: efifb: scrolling: redraw Jan 21 01:00:18.178055 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 21 01:00:18.178065 kernel: Console: switching to colour frame buffer device 100x37 Jan 21 01:00:18.178074 kernel: fb0: EFI VGA frame buffer device Jan 21 01:00:18.178085 kernel: pstore: Using crash dump compression: deflate Jan 21 01:00:18.178095 kernel: pstore: Registered efi_pstore as persistent store backend Jan 21 01:00:18.178105 kernel: NET: Registered PF_INET6 protocol family Jan 21 01:00:18.178117 kernel: Segment Routing with IPv6 Jan 21 01:00:18.178126 kernel: In-situ OAM (IOAM) with IPv6 Jan 21 01:00:18.178136 kernel: NET: Registered PF_PACKET protocol family Jan 21 01:00:18.178146 kernel: Key type dns_resolver registered Jan 21 01:00:18.178156 kernel: IPI shorthand broadcast: enabled Jan 21 01:00:18.178166 kernel: sched_clock: Marking stable (1347067647, 144252885)->(1562016539, -70696007) Jan 21 01:00:18.178175 kernel: registered taskstats version 1 Jan 21 01:00:18.178188 kernel: Loading compiled-in X.509 certificates Jan 21 01:00:18.178198 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 169e95345ec0c7da7389f5f6d7b9c06dfd352178' Jan 21 01:00:18.178207 kernel: Demotion targets for Node 0: null Jan 21 01:00:18.178217 kernel: Key type .fscrypt registered Jan 21 01:00:18.178227 kernel: Key type fscrypt-provisioning registered Jan 21 01:00:18.178236 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 21 01:00:18.178247 kernel: ima: Allocated hash algorithm: sha1 Jan 21 01:00:18.178259 kernel: ima: No architecture policies found Jan 21 01:00:18.178269 kernel: clk: Disabling unused clocks Jan 21 01:00:18.178279 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 21 01:00:18.178289 kernel: Write protecting the kernel read-only data: 47104k Jan 21 01:00:18.178301 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 21 01:00:18.178313 kernel: Run /init as init process Jan 21 01:00:18.178322 kernel: with arguments: Jan 21 01:00:18.178332 kernel: /init Jan 21 01:00:18.178342 kernel: with environment: Jan 21 01:00:18.178352 kernel: HOME=/ Jan 21 01:00:18.178361 kernel: TERM=linux Jan 21 01:00:18.178466 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 21 01:00:18.178485 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 21 01:00:18.178572 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 21 01:00:18.178585 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 21 01:00:18.178594 kernel: GPT:25804799 != 33554431 Jan 21 01:00:18.178604 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 21 01:00:18.178614 kernel: GPT:25804799 != 33554431 Jan 21 01:00:18.178628 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 21 01:00:18.178638 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 21 01:00:18.178648 kernel: SCSI subsystem initialized Jan 21 01:00:18.178658 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 21 01:00:18.178668 kernel: device-mapper: uevent: version 1.0.3 Jan 21 01:00:18.178678 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 21 01:00:18.178688 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 21 01:00:18.178701 kernel: raid6: avx512x4 gen() 15284 MB/s Jan 21 01:00:18.178710 kernel: raid6: avx512x2 gen() 15272 MB/s Jan 21 01:00:18.178720 kernel: raid6: avx512x1 gen() 15228 MB/s Jan 21 01:00:18.178730 kernel: raid6: avx2x4 gen() 15109 MB/s Jan 21 01:00:18.178740 kernel: raid6: avx2x2 gen() 15165 MB/s Jan 21 01:00:18.178749 kernel: raid6: avx2x1 gen() 11547 MB/s Jan 21 01:00:18.178759 kernel: raid6: using algorithm avx512x4 gen() 15284 MB/s Jan 21 01:00:18.178771 kernel: raid6: .... xor() 7979 MB/s, rmw enabled Jan 21 01:00:18.178781 kernel: raid6: using avx512x2 recovery algorithm Jan 21 01:00:18.178791 kernel: xor: automatically using best checksumming function avx Jan 21 01:00:18.178813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 21 01:00:18.178824 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 21 01:00:18.178834 kernel: BTRFS: device fsid 1d50d7f2-b244-4434-b37e-796fa0c23345 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (152) Jan 21 01:00:18.178844 kernel: BTRFS info (device dm-0): first mount of filesystem 1d50d7f2-b244-4434-b37e-796fa0c23345 Jan 21 01:00:18.178857 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:18.178867 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 21 01:00:18.178877 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 21 01:00:18.178887 kernel: BTRFS info (device dm-0): enabling free space tree Jan 21 01:00:18.178898 kernel: loop: module loaded Jan 21 01:00:18.178907 kernel: loop0: detected capacity change from 0 to 100552 Jan 21 01:00:18.178917 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 21 01:00:18.178931 systemd[1]: Successfully made /usr/ read-only. Jan 21 01:00:18.178945 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 01:00:18.178955 systemd[1]: Detected virtualization amazon. Jan 21 01:00:18.178966 systemd[1]: Detected architecture x86-64. Jan 21 01:00:18.178975 systemd[1]: Running in initrd. Jan 21 01:00:18.178986 systemd[1]: No hostname configured, using default hostname. Jan 21 01:00:18.178999 systemd[1]: Hostname set to . Jan 21 01:00:18.179009 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 01:00:18.179019 systemd[1]: Queued start job for default target initrd.target. Jan 21 01:00:18.179030 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 21 01:00:18.179040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 01:00:18.179051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 01:00:18.179064 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 21 01:00:18.179074 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 01:00:18.179086 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 21 01:00:18.179096 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 21 01:00:18.179107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 01:00:18.179117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 01:00:18.179130 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 21 01:00:18.179140 systemd[1]: Reached target paths.target - Path Units. Jan 21 01:00:18.179150 systemd[1]: Reached target slices.target - Slice Units. Jan 21 01:00:18.179161 systemd[1]: Reached target swap.target - Swaps. Jan 21 01:00:18.179171 systemd[1]: Reached target timers.target - Timer Units. Jan 21 01:00:18.179181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 01:00:18.179192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 01:00:18.179205 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 01:00:18.179215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 21 01:00:18.179226 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 21 01:00:18.179236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 01:00:18.179247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 01:00:18.179257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 01:00:18.179272 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 01:00:18.179283 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 21 01:00:18.179293 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 21 01:00:18.179304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 01:00:18.179314 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 21 01:00:18.179325 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 21 01:00:18.179336 systemd[1]: Starting systemd-fsck-usr.service... Jan 21 01:00:18.179348 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 01:00:18.179359 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 01:00:18.179370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:18.179380 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 21 01:00:18.179393 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 01:00:18.179404 systemd[1]: Finished systemd-fsck-usr.service. 
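Note: the device units systemd is expecting above (for example dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM) use systemd's path escaping, in which "/" maps to "-" while literal "-" and other unsafe bytes become \xNN. The canonical tool for this is systemd-escape --path; the sketch below is only an approximation of that rule for plain ASCII paths, included to make the escaped unit names in this log easier to read.

    # Sketch (assumption: approximates the escaping described in systemd.unit(5)
    # and systemd-escape(1) for ASCII paths; it is not systemd's implementation).
    import string

    _KEEP = set(string.ascii_letters + string.digits + ":_.")

    def escape_path_to_unit(path: str, suffix: str = "device") -> str:
        # Trim leading/trailing "/" and map the remaining "/" to "-";
        # every other byte outside [A-Za-z0-9:_.] (and a leading ".")
        # becomes \xNN so that "-" alone always means "/".
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch in _KEEP and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + "." + suffix

    print(escape_path_to_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name in the log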
Jan 21 01:00:18.179414 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 01:00:18.179448 systemd-journald[287]: Collecting audit messages is enabled. Jan 21 01:00:18.179476 systemd-journald[287]: Journal started Jan 21 01:00:18.179497 systemd-journald[287]: Runtime Journal (/run/log/journal/ec2aa885fda79e7908d83369af4d9e1d) is 4.7M, max 38M, 33.2M free. Jan 21 01:00:18.181834 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 01:00:18.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.184649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 01:00:18.187407 kernel: audit: type=1130 audit(1768957218.180:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.192833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 21 01:00:18.213234 systemd-tmpfiles[302]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 21 01:00:18.226321 kernel: Bridge firewalling registered Jan 21 01:00:18.226360 kernel: audit: type=1130 audit(1768957218.213:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.214553 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 01:00:18.216517 systemd-modules-load[290]: Inserted module 'br_netfilter' Jan 21 01:00:18.235478 kernel: audit: type=1130 audit(1768957218.226:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.218242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 01:00:18.241601 kernel: audit: type=1130 audit(1768957218.234:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.227953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 01:00:18.234495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 01:00:18.244899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 21 01:00:18.247638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 01:00:18.254353 kernel: audit: type=1130 audit(1768957218.246:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.248428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:18.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.257218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 21 01:00:18.260897 kernel: audit: type=1130 audit(1768957218.253:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.277952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 01:00:18.288623 kernel: audit: type=1130 audit(1768957218.276:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.288661 kernel: audit: type=1334 audit(1768957218.276:9): prog-id=6 op=LOAD Jan 21 01:00:18.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.276000 audit: BPF prog-id=6 op=LOAD Jan 21 01:00:18.280996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 01:00:18.298068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 01:00:18.304664 kernel: audit: type=1130 audit(1768957218.297:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.304667 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 21 01:00:18.336473 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 01:00:18.361699 systemd-resolved[323]: Positive Trust Anchors: Jan 21 01:00:18.361715 systemd-resolved[323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 01:00:18.361721 systemd-resolved[323]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 01:00:18.361785 systemd-resolved[323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 01:00:18.398919 systemd-resolved[323]: Defaulting to hostname 'linux'. Jan 21 01:00:18.401094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 01:00:18.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.401878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 01:00:18.519836 kernel: Loading iSCSI transport class v2.0-870. Jan 21 01:00:18.611880 kernel: iscsi: registered transport (tcp) Jan 21 01:00:18.676036 kernel: iscsi: registered transport (qla4xxx) Jan 21 01:00:18.676119 kernel: QLogic iSCSI HBA Driver Jan 21 01:00:18.703040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 01:00:18.719166 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 01:00:18.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.722797 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 01:00:18.766543 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 21 01:00:18.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.768794 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 21 01:00:18.771964 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 21 01:00:18.809359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 21 01:00:18.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.811000 audit: BPF prog-id=7 op=LOAD Jan 21 01:00:18.811000 audit: BPF prog-id=8 op=LOAD Jan 21 01:00:18.815033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 01:00:18.856217 systemd-udevd[571]: Using default interface naming scheme 'v257'. Jan 21 01:00:18.874284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 21 01:00:18.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.880234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 21 01:00:18.894163 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 01:00:18.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.894000 audit: BPF prog-id=9 op=LOAD Jan 21 01:00:18.897977 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 01:00:18.914521 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation Jan 21 01:00:18.952226 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 01:00:18.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.957007 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 01:00:18.969979 systemd-networkd[668]: lo: Link UP Jan 21 01:00:18.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:18.969991 systemd-networkd[668]: lo: Gained carrier Jan 21 01:00:18.970942 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 01:00:18.971875 systemd[1]: Reached target network.target - Network. Jan 21 01:00:19.031765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:19.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:19.036170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 01:00:19.151189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 01:00:19.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:19.151467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:19.152420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:19.156097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:19.164079 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 21 01:00:19.164421 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 21 01:00:19.169828 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 21 01:00:19.177503 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:fe:cc:d6:d6:19 Jan 21 01:00:19.179505 (udev-worker)[719]: Network interface NamePolicy= disabled on kernel command line. 
Jan 21 01:00:19.199267 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:19.199275 systemd-networkd[668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 01:00:19.205677 systemd-networkd[668]: eth0: Link UP Jan 21 01:00:19.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:19.206175 systemd-networkd[668]: eth0: Gained carrier Jan 21 01:00:19.206191 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:19.207105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:19.215911 systemd-networkd[668]: eth0: DHCPv4 address 172.31.16.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 01:00:19.270424 kernel: cryptd: max_cpu_qlen set to 1000 Jan 21 01:00:19.270498 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 21 01:00:19.309993 kernel: AES CTR mode by8 optimization enabled Jan 21 01:00:19.316832 kernel: nvme nvme0: using unchecked data buffer Jan 21 01:00:19.426500 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 21 01:00:19.427920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 01:00:19.456085 disk-uuid[839]: Primary Header is updated. Jan 21 01:00:19.456085 disk-uuid[839]: Secondary Entries is updated. Jan 21 01:00:19.456085 disk-uuid[839]: Secondary Header is updated. Jan 21 01:00:19.476668 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 21 01:00:19.522439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 01:00:19.598304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 21 01:00:19.849890 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 01:00:19.856897 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 21 01:00:19.856937 kernel: audit: type=1130 audit(1768957219.848:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:19.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:19.851060 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 01:00:19.857329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:19.858411 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 01:00:19.860449 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 01:00:19.887838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 21 01:00:19.893819 kernel: audit: type=1130 audit(1768957219.886:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:19.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.576610 disk-uuid[840]: Warning: The kernel is still using the old partition table. Jan 21 01:00:20.576610 disk-uuid[840]: The new table will be used at the next reboot or after you Jan 21 01:00:20.576610 disk-uuid[840]: run partprobe(8) or kpartx(8) Jan 21 01:00:20.576610 disk-uuid[840]: The operation has completed successfully. Jan 21 01:00:20.586933 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 01:00:20.598614 kernel: audit: type=1130 audit(1768957220.585:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.598658 kernel: audit: type=1131 audit(1768957220.585:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.587078 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 21 01:00:20.589984 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 21 01:00:20.638918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1074) Jan 21 01:00:20.641857 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:20.641926 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:20.699931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:20.700015 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:20.708836 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:20.709691 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 21 01:00:20.716752 kernel: audit: type=1130 audit(1768957220.708:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.711792 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
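The disk-uuid run above rewrites the GPT and then warns that the running kernel keeps using its cached copy of the partition table until something asks it to re-read the disk (the partprobe(8)/kpartx(8) hint in the message). A minimal sketch of such a re-read in Python, using the BLKRRPART ioctl from <linux/fs.h>; the /dev/nvme0n1 path is an assumption based on the nvme0n1 device named elsewhere in this log, and the call needs root:

import fcntl
import os

BLKRRPART = 0x125F  # _IO(0x12, 95) in <linux/fs.h>: re-read the partition table

def reread_partition_table(disk: str = "/dev/nvme0n1") -> None:
    # Open the whole-disk node (not a partition) and ask the kernel to
    # rescan its partition table; this is the re-read the warning refers to.
    fd = os.open(disk, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

if __name__ == "__main__":
    reread_partition_table()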
Jan 21 01:00:20.888744 systemd-networkd[668]: eth0: Gained IPv6LL Jan 21 01:00:21.338413 ignition[1093]: Ignition 2.24.0 Jan 21 01:00:21.338427 ignition[1093]: Stage: fetch-offline Jan 21 01:00:21.338507 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:21.338517 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:21.338826 ignition[1093]: Ignition finished successfully Jan 21 01:00:21.342371 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 01:00:21.347896 kernel: audit: type=1130 audit(1768957221.340:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.344954 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 21 01:00:21.367712 ignition[1099]: Ignition 2.24.0 Jan 21 01:00:21.367727 ignition[1099]: Stage: fetch Jan 21 01:00:21.368002 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:21.368014 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:21.368131 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:21.383286 ignition[1099]: PUT result: OK Jan 21 01:00:21.386193 ignition[1099]: parsed url from cmdline: "" Jan 21 01:00:21.386205 ignition[1099]: no config URL provided Jan 21 01:00:21.386214 ignition[1099]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 01:00:21.386236 ignition[1099]: no config at "/usr/lib/ignition/user.ign" Jan 21 01:00:21.386284 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:21.387177 ignition[1099]: PUT result: OK Jan 21 01:00:21.387246 ignition[1099]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 21 01:00:21.388078 ignition[1099]: GET result: OK Jan 21 01:00:21.388164 ignition[1099]: parsing config with SHA512: 298dcea3f90ac2954bd7d16f71d9bdaaa8ae02c843ce8a874f3c7ff997e7f54a1516e749f371e6147558ca0672b5bf59f7048a8b730934f5d6e564ed9a5e76a4 Jan 21 01:00:21.395959 unknown[1099]: fetched base config from "system" Jan 21 01:00:21.395973 unknown[1099]: fetched base config from "system" Jan 21 01:00:21.396511 ignition[1099]: fetch: fetch complete Jan 21 01:00:21.395981 unknown[1099]: fetched user config from "aws" Jan 21 01:00:21.396518 ignition[1099]: fetch: fetch passed Jan 21 01:00:21.396591 ignition[1099]: Ignition finished successfully Jan 21 01:00:21.399352 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 01:00:21.405323 kernel: audit: type=1130 audit(1768957221.397:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.402986 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
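The fetch stage above follows the IMDSv2 pattern against the EC2 instance metadata service: a PUT to /latest/api/token for a short-lived session token, then a GET for the user-data with that token attached, after which a SHA512 digest of the received config is logged. A minimal Python sketch of the same exchange (Ignition itself is a Go implementation; the TTL and timeout values here are assumptions):

import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds: int = 21600, timeout: float = 2.0) -> bytes:
    # Step 1: PUT /latest/api/token to obtain an IMDSv2 session token.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(token_req, timeout=timeout) as resp:
        token = resp.read().decode()

    # Step 2: GET the user-data, presenting the token from step 1
    # (the /2019-10-01/user-data path is the one shown in the log).
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    config = fetch_user_data()
    # Corresponds to the "parsing config with SHA512: ..." line in the log.
    print(hashlib.sha512(config).hexdigest())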
Jan 21 01:00:21.443435 ignition[1105]: Ignition 2.24.0 Jan 21 01:00:21.443452 ignition[1105]: Stage: kargs Jan 21 01:00:21.443762 ignition[1105]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:21.443776 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:21.443899 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:21.444897 ignition[1105]: PUT result: OK Jan 21 01:00:21.448431 ignition[1105]: kargs: kargs passed Jan 21 01:00:21.448541 ignition[1105]: Ignition finished successfully Jan 21 01:00:21.450688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 01:00:21.456823 kernel: audit: type=1130 audit(1768957221.449:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.454000 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 21 01:00:21.482019 ignition[1111]: Ignition 2.24.0 Jan 21 01:00:21.482033 ignition[1111]: Stage: disks Jan 21 01:00:21.482321 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:21.482334 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:21.482433 ignition[1111]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:21.483251 ignition[1111]: PUT result: OK Jan 21 01:00:21.486716 ignition[1111]: disks: disks passed Jan 21 01:00:21.486852 ignition[1111]: Ignition finished successfully Jan 21 01:00:21.488768 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 01:00:21.493858 kernel: audit: type=1130 audit(1768957221.487:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.489821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 01:00:21.494250 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 01:00:21.494917 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 01:00:21.495662 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 01:00:21.496266 systemd[1]: Reached target basic.target - Basic System. Jan 21 01:00:21.498014 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 01:00:21.595089 systemd-fsck[1120]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 21 01:00:21.598982 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 01:00:21.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.602034 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 21 01:00:21.609038 kernel: audit: type=1130 audit(1768957221.598:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.844832 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cf9e7296-d0ad-4d9a-b030-d4e17a1c88bf r/w with ordered data mode. Quota mode: none. Jan 21 01:00:21.845787 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 21 01:00:21.846932 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 01:00:21.907471 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 01:00:21.910247 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 01:00:21.913194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 21 01:00:21.913261 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 01:00:21.913297 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 01:00:21.923049 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 21 01:00:21.925556 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 21 01:00:21.939826 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1139) Jan 21 01:00:21.942825 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:21.942880 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:21.954457 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:21.954550 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:21.956692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 01:00:22.235647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 01:00:22.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:22.237746 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 01:00:22.240973 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 01:00:22.257528 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 01:00:22.259823 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:22.282194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 01:00:22.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:22.286531 ignition[1236]: INFO : Ignition 2.24.0 Jan 21 01:00:22.286531 ignition[1236]: INFO : Stage: mount Jan 21 01:00:22.288255 ignition[1236]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:22.288255 ignition[1236]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:22.288255 ignition[1236]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:22.288255 ignition[1236]: INFO : PUT result: OK Jan 21 01:00:22.291568 ignition[1236]: INFO : mount: mount passed Jan 21 01:00:22.291980 ignition[1236]: INFO : Ignition finished successfully Jan 21 01:00:22.293494 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 01:00:22.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:22.295651 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 01:00:22.847906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 01:00:22.887827 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1250) Jan 21 01:00:22.892770 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:22.892861 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:22.900440 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:22.900513 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:22.902225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 01:00:22.929017 ignition[1266]: INFO : Ignition 2.24.0 Jan 21 01:00:22.929017 ignition[1266]: INFO : Stage: files Jan 21 01:00:22.930506 ignition[1266]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:22.930506 ignition[1266]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:22.930506 ignition[1266]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:22.930506 ignition[1266]: INFO : PUT result: OK Jan 21 01:00:22.935109 ignition[1266]: DEBUG : files: compiled without relabeling support, skipping Jan 21 01:00:22.936265 ignition[1266]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 01:00:22.936265 ignition[1266]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 01:00:22.941725 ignition[1266]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 01:00:22.942460 ignition[1266]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 01:00:22.942460 ignition[1266]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 01:00:22.942237 unknown[1266]: wrote ssh authorized keys file for user: core Jan 21 01:00:22.944467 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 01:00:22.945045 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 21 01:00:23.009790 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 01:00:23.142121 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 01:00:23.142121 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 01:00:23.143855 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 01:00:23.148886 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 01:00:23.148886 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 01:00:23.148886 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 21 01:00:23.151456 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 21 01:00:23.151456 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 21 01:00:23.151456 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 21 01:00:23.709136 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 01:00:24.777873 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 21 01:00:24.777873 ignition[1266]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 01:00:24.779907 ignition[1266]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 01:00:24.784304 ignition[1266]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 01:00:24.784304 ignition[1266]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 01:00:24.784304 ignition[1266]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 01:00:24.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.787516 ignition[1266]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 01:00:24.787516 ignition[1266]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 01:00:24.787516 ignition[1266]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 01:00:24.787516 ignition[1266]: INFO : files: files passed Jan 21 01:00:24.787516 ignition[1266]: INFO : Ignition finished successfully Jan 21 01:00:24.786117 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 01:00:24.790036 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 01:00:24.792730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 01:00:24.802378 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 01:00:24.803522 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 01:00:24.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.810923 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:24.813041 initrd-setup-root-after-ignition[1299]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:24.814644 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:24.816306 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 01:00:24.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.817366 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 01:00:24.819244 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 01:00:24.868508 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 01:00:24.880093 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 01:00:24.880132 kernel: audit: type=1130 audit(1768957224.867:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.880156 kernel: audit: type=1131 audit(1768957224.867:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:24.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.868629 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 01:00:24.869527 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 01:00:24.880668 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 01:00:24.882242 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 01:00:24.883651 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 01:00:24.914056 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 01:00:24.920084 kernel: audit: type=1130 audit(1768957224.912:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.916222 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 01:00:24.937924 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 01:00:24.938166 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 01:00:24.939000 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:24.939953 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 01:00:24.945545 kernel: audit: type=1131 audit(1768957224.939:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.940644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 01:00:24.940797 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 01:00:24.946342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 01:00:24.947168 systemd[1]: Stopped target basic.target - Basic System. Jan 21 01:00:24.947843 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 01:00:24.948507 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 01:00:24.949159 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 01:00:24.949766 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 01:00:24.950631 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 01:00:24.951281 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 01:00:24.952060 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 01:00:24.953109 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 01:00:24.953793 systemd[1]: Stopped target swap.target - Swaps. 
Jan 21 01:00:24.954427 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 01:00:24.959178 kernel: audit: type=1131 audit(1768957224.953:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.954553 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 01:00:24.959342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 01:00:24.960291 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 01:00:24.960884 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 01:00:24.961005 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 01:00:24.966668 kernel: audit: type=1131 audit(1768957224.960:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.961610 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 01:00:24.961760 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 01:00:24.972096 kernel: audit: type=1131 audit(1768957224.966:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.966788 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 01:00:24.976915 kernel: audit: type=1131 audit(1768957224.970:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.967060 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 01:00:24.967896 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 01:00:24.968019 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 01:00:24.974914 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 21 01:00:24.993175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 01:00:24.994707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 01:00:24.995747 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 21 01:00:24.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.000404 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 01:00:25.007710 kernel: audit: type=1131 audit(1768957224.998:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.000699 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:25.009232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 01:00:25.009410 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 01:00:25.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.016912 kernel: audit: type=1131 audit(1768957225.007:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.021005 ignition[1323]: INFO : Ignition 2.24.0 Jan 21 01:00:25.021005 ignition[1323]: INFO : Stage: umount Jan 21 01:00:25.021005 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:25.021005 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:25.021005 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:25.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.027336 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 01:00:25.031911 ignition[1323]: INFO : PUT result: OK Jan 21 01:00:25.031911 ignition[1323]: INFO : umount: umount passed Jan 21 01:00:25.027493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 01:00:25.034332 ignition[1323]: INFO : Ignition finished successfully Jan 21 01:00:25.036346 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 01:00:25.036519 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 01:00:25.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.039841 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 01:00:25.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:25.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.039913 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 01:00:25.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.040541 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 01:00:25.040610 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 01:00:25.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.041901 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 01:00:25.041980 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 01:00:25.043263 systemd[1]: Stopped target network.target - Network. Jan 21 01:00:25.044441 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 01:00:25.044518 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 01:00:25.045706 systemd[1]: Stopped target paths.target - Path Units. Jan 21 01:00:25.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.046639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 01:00:25.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.046765 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 01:00:25.047846 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 01:00:25.048254 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 01:00:25.048581 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 01:00:25.048627 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 01:00:25.049158 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 01:00:25.049202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 01:00:25.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.049777 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 01:00:25.049851 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 01:00:25.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.050455 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 01:00:25.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:25.050534 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 01:00:25.051146 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 01:00:25.051211 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 01:00:25.064000 audit: BPF prog-id=6 op=UNLOAD Jan 21 01:00:25.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.052450 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 01:00:25.054055 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 01:00:25.056691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 01:00:25.059545 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 01:00:25.059667 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 01:00:25.067000 audit: BPF prog-id=9 op=UNLOAD Jan 21 01:00:25.061147 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 01:00:25.061245 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 01:00:25.062136 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 01:00:25.062270 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 01:00:25.065381 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 01:00:25.065548 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 01:00:25.069192 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 01:00:25.069691 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 01:00:25.069746 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 01:00:25.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.071614 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 01:00:25.073248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 01:00:25.073335 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 01:00:25.075998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 01:00:25.076071 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 21 01:00:25.076693 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 01:00:25.076757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 01:00:25.077302 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 01:00:25.093218 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 21 01:00:25.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.093457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 01:00:25.094458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 01:00:25.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.094521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 01:00:25.095170 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 01:00:25.095211 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 01:00:25.097872 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 01:00:25.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.097952 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 01:00:25.098944 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 01:00:25.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.099010 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 01:00:25.102141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 01:00:25.102213 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 01:00:25.104592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 01:00:25.109602 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 01:00:25.110345 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 01:00:25.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.110848 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 01:00:25.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.110917 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 01:00:25.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.112030 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 21 01:00:25.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:25.112096 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 01:00:25.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.112729 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 01:00:25.112790 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 01:00:25.113793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 01:00:25.113873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:25.127814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 01:00:25.129400 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 01:00:25.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.132193 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 01:00:25.132720 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 01:00:25.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:25.133874 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 01:00:25.135619 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 01:00:25.153750 systemd[1]: Switching root. Jan 21 01:00:25.177695 systemd-journald[287]: Journal stopped Jan 21 01:00:26.835762 systemd-journald[287]: Received SIGTERM from PID 1 (systemd). Jan 21 01:00:26.835889 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 01:00:26.835919 kernel: SELinux: policy capability open_perms=1 Jan 21 01:00:26.835938 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 01:00:26.835959 kernel: SELinux: policy capability always_check_network=0 Jan 21 01:00:26.835979 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 01:00:26.835997 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 01:00:26.836018 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 01:00:26.836041 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 01:00:26.836059 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 01:00:26.836088 systemd[1]: Successfully loaded SELinux policy in 69.199ms. Jan 21 01:00:26.836120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.577ms. Jan 21 01:00:26.836143 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 01:00:26.836166 systemd[1]: Detected virtualization amazon. 
Jan 21 01:00:26.836187 systemd[1]: Detected architecture x86-64. Jan 21 01:00:26.836206 systemd[1]: Detected first boot. Jan 21 01:00:26.836225 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 01:00:26.836245 zram_generator::config[1367]: No configuration found. Jan 21 01:00:26.836277 kernel: Guest personality initialized and is inactive Jan 21 01:00:26.836297 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 21 01:00:26.836321 kernel: Initialized host personality Jan 21 01:00:26.836343 kernel: NET: Registered PF_VSOCK protocol family Jan 21 01:00:26.836367 systemd[1]: Populated /etc with preset unit settings. Jan 21 01:00:26.836394 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 01:00:26.836418 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 01:00:26.836447 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 01:00:26.836478 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 01:00:26.836503 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 01:00:26.836529 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 01:00:26.836553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 01:00:26.836581 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 01:00:26.836605 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 01:00:26.836634 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 01:00:26.836658 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 01:00:26.836682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 01:00:26.836707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 01:00:26.836733 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 01:00:26.836759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 01:00:26.836786 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 01:00:26.836835 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 01:00:26.836863 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 01:00:26.836888 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 01:00:26.836913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 01:00:26.836938 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 01:00:26.836963 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 21 01:00:26.836991 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 01:00:26.837017 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 01:00:26.837043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:26.837068 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 01:00:26.837093 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. 
Jan 21 01:00:26.837118 systemd[1]: Reached target slices.target - Slice Units. Jan 21 01:00:26.837144 systemd[1]: Reached target swap.target - Swaps. Jan 21 01:00:26.837173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 01:00:26.837197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 01:00:26.837222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 01:00:26.837246 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 01:00:26.837272 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 01:00:26.837296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 01:00:26.837322 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 01:00:26.837352 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 01:00:26.837376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 01:00:26.837402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 01:00:26.837427 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 01:00:26.837451 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 01:00:26.837477 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 01:00:26.837500 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 01:00:26.837529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:26.837552 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 01:00:26.837578 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 01:00:26.837604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 01:00:26.837628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 01:00:26.837653 systemd[1]: Reached target machines.target - Containers. Jan 21 01:00:26.837678 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 01:00:26.837707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:26.837732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 01:00:26.837757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 01:00:26.837782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 01:00:26.839848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 01:00:26.839889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 01:00:26.839909 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 01:00:26.839935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 01:00:26.839956 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 01:00:26.839976 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 21 01:00:26.839996 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 01:00:26.840026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 01:00:26.840049 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 01:00:26.840070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:26.840090 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 01:00:26.840114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 01:00:26.840134 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 01:00:26.840155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 01:00:26.840175 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 01:00:26.840194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 01:00:26.840213 kernel: fuse: init (API version 7.41) Jan 21 01:00:26.840237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:26.840257 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 01:00:26.840277 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 01:00:26.840296 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 01:00:26.840315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 01:00:26.840335 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 01:00:26.840357 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 01:00:26.840379 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 01:00:26.840402 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 01:00:26.840423 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 01:00:26.840447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 01:00:26.840470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 01:00:26.840489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 01:00:26.840509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 01:00:26.840529 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 01:00:26.840548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 01:00:26.840567 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 01:00:26.840585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 01:00:26.840607 kernel: ACPI: bus type drm_connector registered Jan 21 01:00:26.840637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 01:00:26.840656 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 01:00:26.840676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 01:00:26.840701 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 21 01:00:26.840722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 01:00:26.840744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 01:00:26.840773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 01:00:26.840797 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 01:00:26.840890 systemd-journald[1451]: Collecting audit messages is enabled. Jan 21 01:00:26.840941 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 01:00:26.840968 systemd-journald[1451]: Journal started Jan 21 01:00:26.841014 systemd-journald[1451]: Runtime Journal (/run/log/journal/ec2aa885fda79e7908d83369af4d9e1d) is 4.7M, max 38M, 33.2M free. Jan 21 01:00:26.463000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 01:00:26.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.622000 audit: BPF prog-id=14 op=UNLOAD Jan 21 01:00:26.622000 audit: BPF prog-id=13 op=UNLOAD Jan 21 01:00:26.623000 audit: BPF prog-id=15 op=LOAD Jan 21 01:00:26.623000 audit: BPF prog-id=16 op=LOAD Jan 21 01:00:26.623000 audit: BPF prog-id=17 op=LOAD Jan 21 01:00:26.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:26.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.831000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 01:00:26.831000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffdbf4cc7e0 a2=4000 a3=0 items=0 ppid=1 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:26.831000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 01:00:26.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.377167 systemd[1]: Queued start job for default target multi-user.target. Jan 21 01:00:26.396255 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 21 01:00:26.396750 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 01:00:26.845889 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 21 01:00:26.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.848699 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 01:00:26.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.851582 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 01:00:26.852693 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 01:00:26.881631 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 01:00:26.884223 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 01:00:26.885447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 01:00:26.885621 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 01:00:26.887200 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jan 21 01:00:26.889254 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 01:00:26.889406 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jan 21 01:00:26.892123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:26.892323 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:26.896091 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 01:00:26.901142 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 01:00:26.901929 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 01:00:26.905002 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 01:00:26.907941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 01:00:26.913021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 01:00:26.917527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 01:00:26.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.920820 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 01:00:26.926494 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 01:00:26.937081 systemd-journald[1451]: Time spent on flushing to /var/log/journal/ec2aa885fda79e7908d83369af4d9e1d is 80.044ms for 1151 entries. Jan 21 01:00:26.937081 systemd-journald[1451]: System Journal (/var/log/journal/ec2aa885fda79e7908d83369af4d9e1d) is 8M, max 588.1M, 580.1M free. 
Jan 21 01:00:27.054884 systemd-journald[1451]: Received client request to flush runtime journal. Jan 21 01:00:27.054970 kernel: loop1: detected capacity change from 0 to 50784 Jan 21 01:00:27.055009 kernel: loop2: detected capacity change from 0 to 111560 Jan 21 01:00:26.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.959315 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 01:00:26.960172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 01:00:26.965252 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 01:00:26.996740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:27.009361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 01:00:27.056297 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 01:00:27.066556 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 01:00:27.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.068000 audit: BPF prog-id=18 op=LOAD Jan 21 01:00:27.068000 audit: BPF prog-id=19 op=LOAD Jan 21 01:00:27.068000 audit: BPF prog-id=20 op=LOAD Jan 21 01:00:27.072006 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 01:00:27.073000 audit: BPF prog-id=21 op=LOAD Jan 21 01:00:27.078523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 01:00:27.083918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 01:00:27.094179 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 01:00:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.098000 audit: BPF prog-id=22 op=LOAD Jan 21 01:00:27.098000 audit: BPF prog-id=23 op=LOAD Jan 21 01:00:27.098000 audit: BPF prog-id=24 op=LOAD Jan 21 01:00:27.106000 audit: BPF prog-id=25 op=LOAD Jan 21 01:00:27.105075 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 21 01:00:27.109000 audit: BPF prog-id=26 op=LOAD Jan 21 01:00:27.109000 audit: BPF prog-id=27 op=LOAD Jan 21 01:00:27.113000 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 01:00:27.137832 kernel: loop3: detected capacity change from 0 to 73176 Jan 21 01:00:27.153001 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Jan 21 01:00:27.153427 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Jan 21 01:00:27.167763 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 01:00:27.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.213865 systemd-nsresourced[1529]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 01:00:27.216608 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 01:00:27.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.220310 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 01:00:27.356011 systemd-oomd[1523]: No swap; memory pressure usage will be degraded Jan 21 01:00:27.362181 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 01:00:27.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.387518 systemd-resolved[1524]: Positive Trust Anchors: Jan 21 01:00:27.387844 systemd-resolved[1524]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 01:00:27.387851 systemd-resolved[1524]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 01:00:27.387916 systemd-resolved[1524]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 01:00:27.394544 systemd-resolved[1524]: Defaulting to hostname 'linux'. Jan 21 01:00:27.396516 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 01:00:27.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.398647 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 21 01:00:27.399474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 01:00:27.487834 kernel: loop4: detected capacity change from 0 to 219144 Jan 21 01:00:27.761197 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 01:00:27.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.760000 audit: BPF prog-id=8 op=UNLOAD Jan 21 01:00:27.760000 audit: BPF prog-id=7 op=UNLOAD Jan 21 01:00:27.760000 audit: BPF prog-id=28 op=LOAD Jan 21 01:00:27.760000 audit: BPF prog-id=29 op=LOAD Jan 21 01:00:27.763622 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 01:00:27.802277 systemd-udevd[1550]: Using default interface naming scheme 'v257'. Jan 21 01:00:27.845743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 01:00:27.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.846000 audit: BPF prog-id=30 op=LOAD Jan 21 01:00:27.850318 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 01:00:27.900852 kernel: loop5: detected capacity change from 0 to 50784 Jan 21 01:00:27.945146 (udev-worker)[1568]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:00:27.950425 kernel: loop6: detected capacity change from 0 to 111560 Jan 21 01:00:27.952920 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 21 01:00:27.977225 systemd-networkd[1555]: lo: Link UP Jan 21 01:00:27.977236 systemd-networkd[1555]: lo: Gained carrier Jan 21 01:00:27.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.980096 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 01:00:27.981782 systemd[1]: Reached target network.target - Network. Jan 21 01:00:27.985221 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 01:00:27.988999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 01:00:27.997879 kernel: loop7: detected capacity change from 0 to 73176 Jan 21 01:00:28.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.015095 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 21 01:00:28.033733 kernel: loop1: detected capacity change from 0 to 219144 Jan 21 01:00:28.038871 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:28.038886 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 21 01:00:28.043662 systemd-networkd[1555]: eth0: Link UP Jan 21 01:00:28.044769 systemd-networkd[1555]: eth0: Gained carrier Jan 21 01:00:28.044827 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:28.055905 systemd-networkd[1555]: eth0: DHCPv4 address 172.31.16.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 01:00:28.062902 kernel: mousedev: PS/2 mouse device common for all mice Jan 21 01:00:28.066221 (sd-merge)[1569]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 21 01:00:28.075486 (sd-merge)[1569]: Merged extensions into '/usr'. Jan 21 01:00:28.088099 systemd[1]: Reload requested from client PID 1506 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 01:00:28.088124 systemd[1]: Reloading... Jan 21 01:00:28.152851 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 21 01:00:28.198189 kernel: ACPI: button: Power Button [PWRF] Jan 21 01:00:28.213830 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 21 01:00:28.224212 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 21 01:00:28.245836 kernel: ACPI: button: Sleep Button [SLPF] Jan 21 01:00:28.256826 zram_generator::config[1688]: No configuration found. Jan 21 01:00:28.717828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 01:00:28.718705 systemd[1]: Reloading finished in 629 ms. Jan 21 01:00:28.749889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 01:00:28.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.805192 systemd[1]: Starting ensure-sysext.service... Jan 21 01:00:28.808985 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 01:00:28.814030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 01:00:28.817000 audit: BPF prog-id=31 op=LOAD Jan 21 01:00:28.817000 audit: BPF prog-id=15 op=UNLOAD Jan 21 01:00:28.817000 audit: BPF prog-id=32 op=LOAD Jan 21 01:00:28.817000 audit: BPF prog-id=33 op=LOAD Jan 21 01:00:28.817000 audit: BPF prog-id=16 op=UNLOAD Jan 21 01:00:28.817000 audit: BPF prog-id=17 op=UNLOAD Jan 21 01:00:28.817000 audit: BPF prog-id=34 op=LOAD Jan 21 01:00:28.817000 audit: BPF prog-id=35 op=LOAD Jan 21 01:00:28.817000 audit: BPF prog-id=28 op=UNLOAD Jan 21 01:00:28.817000 audit: BPF prog-id=29 op=UNLOAD Jan 21 01:00:28.820000 audit: BPF prog-id=36 op=LOAD Jan 21 01:00:28.820000 audit: BPF prog-id=22 op=UNLOAD Jan 21 01:00:28.820000 audit: BPF prog-id=37 op=LOAD Jan 21 01:00:28.820000 audit: BPF prog-id=38 op=LOAD Jan 21 01:00:28.820000 audit: BPF prog-id=23 op=UNLOAD Jan 21 01:00:28.820000 audit: BPF prog-id=24 op=UNLOAD Jan 21 01:00:28.821000 audit: BPF prog-id=39 op=LOAD Jan 21 01:00:28.821000 audit: BPF prog-id=21 op=UNLOAD Jan 21 01:00:28.821000 audit: BPF prog-id=40 op=LOAD Jan 21 01:00:28.821000 audit: BPF prog-id=25 op=UNLOAD Jan 21 01:00:28.821000 audit: BPF prog-id=41 op=LOAD Jan 21 01:00:28.817978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 21 01:00:28.824000 audit: BPF prog-id=42 op=LOAD Jan 21 01:00:28.824000 audit: BPF prog-id=26 op=UNLOAD Jan 21 01:00:28.824000 audit: BPF prog-id=27 op=UNLOAD Jan 21 01:00:28.825000 audit: BPF prog-id=43 op=LOAD Jan 21 01:00:28.825000 audit: BPF prog-id=30 op=UNLOAD Jan 21 01:00:28.829000 audit: BPF prog-id=44 op=LOAD Jan 21 01:00:28.834000 audit: BPF prog-id=18 op=UNLOAD Jan 21 01:00:28.835000 audit: BPF prog-id=45 op=LOAD Jan 21 01:00:28.835000 audit: BPF prog-id=46 op=LOAD Jan 21 01:00:28.835000 audit: BPF prog-id=19 op=UNLOAD Jan 21 01:00:28.835000 audit: BPF prog-id=20 op=UNLOAD Jan 21 01:00:28.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.848302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 01:00:28.853762 systemd-tmpfiles[1766]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 01:00:28.854426 systemd-tmpfiles[1766]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 21 01:00:28.854880 systemd[1]: Reload requested from client PID 1764 ('systemctl') (unit ensure-sysext.service)... Jan 21 01:00:28.854889 systemd-tmpfiles[1766]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 01:00:28.855789 systemd[1]: Reloading... Jan 21 01:00:28.857260 systemd-tmpfiles[1766]: ACLs are not supported, ignoring. Jan 21 01:00:28.857355 systemd-tmpfiles[1766]: ACLs are not supported, ignoring. Jan 21 01:00:28.868059 systemd-tmpfiles[1766]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 01:00:28.868077 systemd-tmpfiles[1766]: Skipping /boot Jan 21 01:00:28.885381 systemd-tmpfiles[1766]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 01:00:28.885402 systemd-tmpfiles[1766]: Skipping /boot Jan 21 01:00:28.974887 zram_generator::config[1806]: No configuration found. Jan 21 01:00:29.205210 systemd[1]: Reloading finished in 348 ms. 
Jan 21 01:00:29.231000 audit: BPF prog-id=47 op=LOAD Jan 21 01:00:29.231000 audit: BPF prog-id=39 op=UNLOAD Jan 21 01:00:29.232000 audit: BPF prog-id=48 op=LOAD Jan 21 01:00:29.232000 audit: BPF prog-id=49 op=LOAD Jan 21 01:00:29.232000 audit: BPF prog-id=34 op=UNLOAD Jan 21 01:00:29.232000 audit: BPF prog-id=35 op=UNLOAD Jan 21 01:00:29.233000 audit: BPF prog-id=50 op=LOAD Jan 21 01:00:29.233000 audit: BPF prog-id=44 op=UNLOAD Jan 21 01:00:29.233000 audit: BPF prog-id=51 op=LOAD Jan 21 01:00:29.233000 audit: BPF prog-id=52 op=LOAD Jan 21 01:00:29.233000 audit: BPF prog-id=45 op=UNLOAD Jan 21 01:00:29.233000 audit: BPF prog-id=46 op=UNLOAD Jan 21 01:00:29.234000 audit: BPF prog-id=53 op=LOAD Jan 21 01:00:29.234000 audit: BPF prog-id=40 op=UNLOAD Jan 21 01:00:29.234000 audit: BPF prog-id=54 op=LOAD Jan 21 01:00:29.234000 audit: BPF prog-id=55 op=LOAD Jan 21 01:00:29.234000 audit: BPF prog-id=41 op=UNLOAD Jan 21 01:00:29.234000 audit: BPF prog-id=42 op=UNLOAD Jan 21 01:00:29.235000 audit: BPF prog-id=56 op=LOAD Jan 21 01:00:29.235000 audit: BPF prog-id=36 op=UNLOAD Jan 21 01:00:29.235000 audit: BPF prog-id=57 op=LOAD Jan 21 01:00:29.235000 audit: BPF prog-id=58 op=LOAD Jan 21 01:00:29.235000 audit: BPF prog-id=37 op=UNLOAD Jan 21 01:00:29.235000 audit: BPF prog-id=38 op=UNLOAD Jan 21 01:00:29.235000 audit: BPF prog-id=59 op=LOAD Jan 21 01:00:29.235000 audit: BPF prog-id=43 op=UNLOAD Jan 21 01:00:29.236000 audit: BPF prog-id=60 op=LOAD Jan 21 01:00:29.236000 audit: BPF prog-id=31 op=UNLOAD Jan 21 01:00:29.236000 audit: BPF prog-id=61 op=LOAD Jan 21 01:00:29.236000 audit: BPF prog-id=62 op=LOAD Jan 21 01:00:29.236000 audit: BPF prog-id=32 op=UNLOAD Jan 21 01:00:29.236000 audit: BPF prog-id=33 op=UNLOAD Jan 21 01:00:29.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.244950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 01:00:29.248277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:29.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.259185 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 01:00:29.264165 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 21 01:00:29.267341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 01:00:29.272112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 01:00:29.279770 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 01:00:29.286729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.287517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:29.291899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 01:00:29.297250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 01:00:29.303956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 21 01:00:29.304751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:29.306114 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:29.306269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:29.306404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.313546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.315172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:29.315434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:29.315678 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:29.315829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:29.315967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.336565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 01:00:29.337902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 01:00:29.339256 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 01:00:29.339532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 01:00:29.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.349210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.349677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:29.356173 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 21 01:00:29.358155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:29.358322 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:29.358373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:29.358427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 01:00:29.358488 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 01:00:29.359607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:29.360192 systemd[1]: Finished ensure-sysext.service. Jan 21 01:00:29.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.371000 audit[1864]: SYSTEM_BOOT pid=1864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.384823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 01:00:29.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.391791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 01:00:29.393698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 01:00:29.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.396710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 01:00:29.402502 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 01:00:29.403293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 01:00:29.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.427747 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 21 01:00:29.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.454000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 01:00:29.454000 audit[1895]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed2b3b3c0 a2=420 a3=0 items=0 ppid=1860 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:29.454000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 01:00:29.456555 augenrules[1895]: No rules Jan 21 01:00:29.457944 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 01:00:29.458361 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 01:00:29.496084 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 01:00:29.497162 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 01:00:29.718949 systemd-networkd[1555]: eth0: Gained IPv6LL Jan 21 01:00:29.722332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 01:00:29.723428 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 01:00:29.994635 ldconfig[1862]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 01:00:30.000012 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 21 01:00:30.002998 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 01:00:30.026846 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 01:00:30.028080 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 01:00:30.028745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 21 01:00:30.029189 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 01:00:30.029572 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 21 01:00:30.030057 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 01:00:30.030474 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 01:00:30.030797 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 01:00:30.031228 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 01:00:30.031577 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 21 01:00:30.031888 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 01:00:30.031921 systemd[1]: Reached target paths.target - Path Units. Jan 21 01:00:30.032193 systemd[1]: Reached target timers.target - Timer Units. Jan 21 01:00:30.034145 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 21 01:00:30.036121 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 01:00:30.038661 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 01:00:30.039265 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 01:00:30.039683 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 01:00:30.042232 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 01:00:30.043054 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 01:00:30.044239 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 01:00:30.046097 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 01:00:30.046704 systemd[1]: Reached target basic.target - Basic System. Jan 21 01:00:30.047182 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 01:00:30.047224 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 01:00:30.048788 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 01:00:30.053011 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 01:00:30.055413 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 01:00:30.063909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 01:00:30.067170 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 01:00:30.072716 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 21 01:00:30.074936 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 01:00:30.078077 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 21 01:00:30.089081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:00:30.102513 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 01:00:30.112222 systemd[1]: Started ntpd.service - Network Time Service. Jan 21 01:00:30.123872 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 01:00:30.143270 jq[1912]: false Jan 21 01:00:30.145455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 21 01:00:30.151063 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 21 01:00:30.162081 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Refreshing passwd entry cache Jan 21 01:00:30.162092 oslogin_cache_refresh[1914]: Refreshing passwd entry cache Jan 21 01:00:30.163126 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 21 01:00:30.177107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 01:00:30.185867 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 01:00:30.186914 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 21 01:00:30.187651 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 21 01:00:30.192458 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Failure getting users, quitting Jan 21 01:00:30.192458 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 01:00:30.192458 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Refreshing group entry cache Jan 21 01:00:30.191690 oslogin_cache_refresh[1914]: Failure getting users, quitting Jan 21 01:00:30.191713 oslogin_cache_refresh[1914]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 01:00:30.191773 oslogin_cache_refresh[1914]: Refreshing group entry cache Jan 21 01:00:30.194095 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 01:00:30.198362 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 01:00:30.200827 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Failure getting groups, quitting Jan 21 01:00:30.200827 google_oslogin_nss_cache[1914]: oslogin_cache_refresh[1914]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 01:00:30.199284 oslogin_cache_refresh[1914]: Failure getting groups, quitting Jan 21 01:00:30.199301 oslogin_cache_refresh[1914]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 01:00:30.207865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 21 01:00:30.208853 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 01:00:30.209176 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 01:00:30.209588 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 21 01:00:30.209932 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 21 01:00:30.219967 extend-filesystems[1913]: Found /dev/nvme0n1p6 Jan 21 01:00:30.235344 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 01:00:30.235696 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 01:00:30.252346 jq[1934]: true Jan 21 01:00:30.278905 extend-filesystems[1913]: Found /dev/nvme0n1p9 Jan 21 01:00:30.279549 ntpd[1917]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:32 UTC 2026 (1): Starting Jan 21 01:00:30.284947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:32 UTC 2026 (1): Starting Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: ---------------------------------------------------- Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: corporation. 
Support and training for ntp-4 are Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: available at https://www.nwtime.org/support Jan 21 01:00:30.288337 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: ---------------------------------------------------- Jan 21 01:00:30.279620 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 01:00:30.279631 ntpd[1917]: ---------------------------------------------------- Jan 21 01:00:30.297448 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9 Jan 21 01:00:30.303395 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: proto: precision = 0.066 usec (-24) Jan 21 01:00:30.279641 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Jan 21 01:00:30.279650 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 01:00:30.279660 ntpd[1917]: corporation. Support and training for ntp-4 are Jan 21 01:00:30.279670 ntpd[1917]: available at https://www.nwtime.org/support Jan 21 01:00:30.279679 ntpd[1917]: ---------------------------------------------------- Jan 21 01:00:30.291441 ntpd[1917]: proto: precision = 0.066 usec (-24) Jan 21 01:00:30.313944 update_engine[1930]: I20260121 01:00:30.308117 1930 main.cc:92] Flatcar Update Engine starting Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: basedate set to 2026-01-08 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: gps base set to 2026-01-11 (week 2401) Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen normally on 3 eth0 172.31.16.12:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen normally on 4 lo [::1]:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listen normally on 5 eth0 [fe80::4fe:ccff:fed6:d619%2]:123 Jan 21 01:00:30.314321 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: Listening on routing socket on fd #22 for interface updates Jan 21 01:00:30.309932 ntpd[1917]: basedate set to 2026-01-08 Jan 21 01:00:30.309957 ntpd[1917]: gps base set to 2026-01-11 (week 2401) Jan 21 01:00:30.310106 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 01:00:30.310140 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 21 01:00:30.310375 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 01:00:30.310406 ntpd[1917]: Listen normally on 3 eth0 172.31.16.12:123 Jan 21 01:00:30.310440 ntpd[1917]: Listen normally on 4 lo [::1]:123 Jan 21 01:00:30.310471 ntpd[1917]: Listen normally on 5 eth0 [fe80::4fe:ccff:fed6:d619%2]:123 Jan 21 01:00:30.310502 ntpd[1917]: Listening on routing socket on fd #22 for interface updates Jan 21 01:00:30.325576 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 01:00:30.326005 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 01:00:30.328098 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:30.329943 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:30.329943 ntpd[1917]: 21 Jan 01:00:30 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:30.328132 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:30.342646 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 21 01:00:30.352600 coreos-metadata[1909]: Jan 21 01:00:30.352 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 01:00:30.353917 coreos-metadata[1909]: Jan 21 01:00:30.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 21 01:00:30.358126 coreos-metadata[1909]: Jan 21 01:00:30.356 INFO Fetch successful Jan 21 01:00:30.358126 coreos-metadata[1909]: Jan 21 01:00:30.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 21 01:00:30.358126 coreos-metadata[1909]: Jan 21 01:00:30.357 INFO Fetch successful Jan 21 01:00:30.358126 coreos-metadata[1909]: Jan 21 01:00:30.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 21 01:00:30.359016 coreos-metadata[1909]: Jan 21 01:00:30.358 INFO Fetch successful Jan 21 01:00:30.359016 coreos-metadata[1909]: Jan 21 01:00:30.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 21 01:00:30.360879 coreos-metadata[1909]: Jan 21 01:00:30.360 INFO Fetch successful Jan 21 01:00:30.360879 coreos-metadata[1909]: Jan 21 01:00:30.360 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 21 01:00:30.362580 jq[1951]: true Jan 21 01:00:30.368851 coreos-metadata[1909]: Jan 21 01:00:30.362 INFO Fetch failed with 404: resource not found Jan 21 01:00:30.368851 coreos-metadata[1909]: Jan 21 01:00:30.362 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 21 01:00:30.371967 coreos-metadata[1909]: Jan 21 01:00:30.371 INFO Fetch successful Jan 21 01:00:30.371967 coreos-metadata[1909]: Jan 21 01:00:30.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 21 01:00:30.376545 coreos-metadata[1909]: Jan 21 01:00:30.375 INFO Fetch successful Jan 21 01:00:30.376545 coreos-metadata[1909]: Jan 21 01:00:30.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 21 01:00:30.377153 coreos-metadata[1909]: Jan 21 01:00:30.377 INFO Fetch successful Jan 21 01:00:30.377153 coreos-metadata[1909]: Jan 21 01:00:30.377 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 21 01:00:30.377891 coreos-metadata[1909]: Jan 21 01:00:30.377 INFO Fetch successful Jan 21 01:00:30.377891 coreos-metadata[1909]: Jan 21 01:00:30.377 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 21 01:00:30.378951 coreos-metadata[1909]: Jan 21 01:00:30.378 INFO Fetch successful Jan 21 01:00:30.401429 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9 Jan 21 01:00:30.411050 extend-filesystems[1988]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 01:00:30.413397 tar[1941]: linux-amd64/LICENSE Jan 21 01:00:30.413397 tar[1941]: linux-amd64/helm Jan 21 01:00:30.432646 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 21 01:00:30.468244 dbus-daemon[1910]: [system] SELinux support is enabled Jan 21 01:00:30.468578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 21 01:00:30.483554 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 21 01:00:30.478377 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 01:00:30.481741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 21 01:00:30.481913 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 01:00:30.481960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 21 01:00:30.483996 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 01:00:30.484025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 01:00:30.495233 extend-filesystems[1988]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 21 01:00:30.495233 extend-filesystems[1988]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 21 01:00:30.495233 extend-filesystems[1988]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 21 01:00:30.499365 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 01:00:30.507095 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9 Jan 21 01:00:30.500891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 01:00:30.506915 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 21 01:00:30.513899 dbus-daemon[1910]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 21 01:00:30.514092 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 21 01:00:30.522845 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 21 01:00:30.527072 update_engine[1930]: I20260121 01:00:30.527012 1930 update_check_scheduler.cc:74] Next update check in 6m46s Jan 21 01:00:30.528146 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 21 01:00:30.535066 systemd[1]: Started update-engine.service - Update Engine. Jan 21 01:00:30.557437 systemd-logind[1929]: Watching system buttons on /dev/input/event2 (Power Button) Jan 21 01:00:30.557471 systemd-logind[1929]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 21 01:00:30.557496 systemd-logind[1929]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 21 01:00:30.558389 systemd-logind[1929]: New seat seat0. Jan 21 01:00:30.577125 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 01:00:30.578461 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 01:00:30.675901 bash[2018]: Updated "/home/core/.ssh/authorized_keys" Jan 21 01:00:30.671047 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 01:00:30.682815 systemd[1]: Starting sshkeys.service... Jan 21 01:00:30.763454 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 21 01:00:30.781886 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 21 01:00:30.929496 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 21 01:00:30.945327 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 21 01:00:30.947965 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2002 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 21 01:00:30.964168 systemd[1]: Starting polkit.service - Authorization Manager... Jan 21 01:00:31.000006 amazon-ssm-agent[2000]: Initializing new seelog logger Jan 21 01:00:31.000576 amazon-ssm-agent[2000]: New Seelog Logger Creation Complete Jan 21 01:00:31.000825 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.000825 amazon-ssm-agent[2000]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.002736 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 processing appconfig overrides Jan 21 01:00:31.004353 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.004353 amazon-ssm-agent[2000]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.004451 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 processing appconfig overrides Jan 21 01:00:31.007832 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.007832 amazon-ssm-agent[2000]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.007832 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 processing appconfig overrides Jan 21 01:00:31.009823 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0042 INFO Proxy environment variables: Jan 21 01:00:31.016882 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.016882 amazon-ssm-agent[2000]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.016882 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 processing appconfig overrides Jan 21 01:00:31.114836 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0043 INFO https_proxy: Jan 21 01:00:31.125441 coreos-metadata[2029]: Jan 21 01:00:31.115 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 01:00:31.129426 coreos-metadata[2029]: Jan 21 01:00:31.129 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 21 01:00:31.131515 coreos-metadata[2029]: Jan 21 01:00:31.131 INFO Fetch successful Jan 21 01:00:31.131515 coreos-metadata[2029]: Jan 21 01:00:31.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 21 01:00:31.139932 coreos-metadata[2029]: Jan 21 01:00:31.137 INFO Fetch successful Jan 21 01:00:31.140224 unknown[2029]: wrote ssh authorized keys file for user: core Jan 21 01:00:31.164174 locksmithd[2004]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 01:00:31.208433 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 01:00:31.219571 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0043 INFO http_proxy: Jan 21 01:00:31.233585 update-ssh-keys[2110]: Updated "/home/core/.ssh/authorized_keys" Jan 21 01:00:31.234520 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 21 01:00:31.237367 systemd[1]: Finished sshkeys.service. Jan 21 01:00:31.295892 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 21 01:00:31.303928 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 01:00:31.322012 systemd[1]: Started sshd@0-172.31.16.12:22-68.220.241.50:33362.service - OpenSSH per-connection server daemon (68.220.241.50:33362). Jan 21 01:00:31.338826 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0043 INFO no_proxy: Jan 21 01:00:31.407462 containerd[1959]: time="2026-01-21T01:00:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 01:00:31.414967 containerd[1959]: time="2026-01-21T01:00:31.408799359Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 01:00:31.412509 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 01:00:31.412838 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 21 01:00:31.421444 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 01:00:31.450000 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0044 INFO Checking if agent identity type OnPrem can be assumed Jan 21 01:00:31.493234 containerd[1959]: time="2026-01-21T01:00:31.493185352Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.371µs" Jan 21 01:00:31.494147 containerd[1959]: time="2026-01-21T01:00:31.494108091Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 01:00:31.494491 containerd[1959]: time="2026-01-21T01:00:31.494466998Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 01:00:31.494835 containerd[1959]: time="2026-01-21T01:00:31.494647381Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 01:00:31.495613 containerd[1959]: time="2026-01-21T01:00:31.495588150Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.495739722Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.495910137Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.495929417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496195420Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496214220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496233486Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496248140Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496514135Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496538986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 01:00:31.497053 containerd[1959]: time="2026-01-21T01:00:31.496652794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.506482 containerd[1959]: time="2026-01-21T01:00:31.502187976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.506482 containerd[1959]: time="2026-01-21T01:00:31.502263537Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 01:00:31.506482 containerd[1959]: time="2026-01-21T01:00:31.502305533Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 01:00:31.506482 containerd[1959]: time="2026-01-21T01:00:31.502479158Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 01:00:31.506482 containerd[1959]: time="2026-01-21T01:00:31.506303657Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 01:00:31.508145 containerd[1959]: time="2026-01-21T01:00:31.506784425Z" level=info msg="metadata content store policy set" policy=shared Jan 21 01:00:31.520650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 01:00:31.534529 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.536868389Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.536946081Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537054436Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537075474Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537094385Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537117582Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537134791Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537148801Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537168099Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537187247Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537207319Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537243186Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537262485Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 01:00:31.544982 containerd[1959]: time="2026-01-21T01:00:31.537280891Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537473809Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537504334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537525082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537548965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537569094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: 
time="2026-01-21T01:00:31.537584777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537602146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537619186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537637206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537654923Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537670223Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537703543Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537761998Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.537782027Z" level=info msg="Start snapshots syncer" Jan 21 01:00:31.545491 containerd[1959]: time="2026-01-21T01:00:31.541596687Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 01:00:31.546052 containerd[1959]: time="2026-01-21T01:00:31.544079945Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 
01:00:31.546052 containerd[1959]: time="2026-01-21T01:00:31.544167553Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544258455Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544437677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544480175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544500361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544517622Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544537973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544555683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544572151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544589524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544615614Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544655528Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544678250Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 01:00:31.546247 containerd[1959]: time="2026-01-21T01:00:31.544692062Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 01:00:31.546706 containerd[1959]: time="2026-01-21T01:00:31.544705642Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 01:00:31.546706 containerd[1959]: time="2026-01-21T01:00:31.544719510Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 01:00:31.546706 containerd[1959]: time="2026-01-21T01:00:31.544735285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 01:00:31.546706 containerd[1959]: time="2026-01-21T01:00:31.544751629Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 01:00:31.546706 containerd[1959]: time="2026-01-21T01:00:31.544775299Z" level=info msg="runtime interface created" Jan 21 01:00:31.546706 containerd[1959]: 
time="2026-01-21T01:00:31.544783488Z" level=info msg="created NRI interface" Jan 21 01:00:31.550052 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 21 01:00:31.552149 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 01:00:31.555890 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.0046 INFO Checking if agent identity type EC2 can be assumed Jan 21 01:00:31.562945 containerd[1959]: time="2026-01-21T01:00:31.544798677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 01:00:31.562945 containerd[1959]: time="2026-01-21T01:00:31.561249105Z" level=info msg="Connect containerd service" Jan 21 01:00:31.562945 containerd[1959]: time="2026-01-21T01:00:31.561303041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 01:00:31.562945 containerd[1959]: time="2026-01-21T01:00:31.562491293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 01:00:31.631327 polkitd[2092]: Started polkitd version 126 Jan 21 01:00:31.653347 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4894 INFO Agent will take identity from EC2 Jan 21 01:00:31.654076 polkitd[2092]: Loading rules from directory /etc/polkit-1/rules.d Jan 21 01:00:31.654701 polkitd[2092]: Loading rules from directory /run/polkit-1/rules.d Jan 21 01:00:31.655484 polkitd[2092]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 01:00:31.660184 polkitd[2092]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 21 01:00:31.660258 polkitd[2092]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 01:00:31.660312 polkitd[2092]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 21 01:00:31.661656 polkitd[2092]: Finished loading, compiling and executing 2 rules Jan 21 01:00:31.662583 systemd[1]: Started polkit.service - Authorization Manager. Jan 21 01:00:31.671973 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 21 01:00:31.672822 polkitd[2092]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 21 01:00:31.726306 systemd-hostnamed[2002]: Hostname set to (transient) Jan 21 01:00:31.726434 systemd-resolved[1524]: System hostname changed to 'ip-172-31-16-12'. Jan 21 01:00:31.754035 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4935 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 21 01:00:31.827029 tar[1941]: linux-amd64/README.md Jan 21 01:00:31.836505 containerd[1959]: time="2026-01-21T01:00:31.836448732Z" level=info msg="Start subscribing containerd event" Jan 21 01:00:31.836895 containerd[1959]: time="2026-01-21T01:00:31.836785612Z" level=info msg="Start recovering state" Jan 21 01:00:31.837855 containerd[1959]: time="2026-01-21T01:00:31.837196576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 01:00:31.837855 containerd[1959]: time="2026-01-21T01:00:31.837256005Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 21 01:00:31.838128 containerd[1959]: time="2026-01-21T01:00:31.838109909Z" level=info msg="Start event monitor" Jan 21 01:00:31.838210 containerd[1959]: time="2026-01-21T01:00:31.838197175Z" level=info msg="Start cni network conf syncer for default" Jan 21 01:00:31.838285 containerd[1959]: time="2026-01-21T01:00:31.838273595Z" level=info msg="Start streaming server" Jan 21 01:00:31.838366 containerd[1959]: time="2026-01-21T01:00:31.838355201Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 01:00:31.838438 containerd[1959]: time="2026-01-21T01:00:31.838427326Z" level=info msg="runtime interface starting up..." Jan 21 01:00:31.838507 containerd[1959]: time="2026-01-21T01:00:31.838483688Z" level=info msg="starting plugins..." Jan 21 01:00:31.838589 containerd[1959]: time="2026-01-21T01:00:31.838562962Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 01:00:31.839053 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 01:00:31.841466 containerd[1959]: time="2026-01-21T01:00:31.841217097Z" level=info msg="containerd successfully booted in 0.437791s" Jan 21 01:00:31.853324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 01:00:31.855089 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4935 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 21 01:00:31.865420 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.865420 amazon-ssm-agent[2000]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:31.865420 amazon-ssm-agent[2000]: 2026/01/21 01:00:31 processing appconfig overrides Jan 21 01:00:31.911824 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4935 INFO [amazon-ssm-agent] Starting Core Agent Jan 21 01:00:31.911824 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4936 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 21 01:00:31.911824 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.4936 INFO [Registrar] Starting registrar module Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.5026 INFO [EC2Identity] Checking disk for registration info Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.5027 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.5027 INFO [EC2Identity] Generating registration keypair Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8167 INFO [EC2Identity] Checking write access before registering Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8172 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8651 INFO [EC2Identity] EC2 registration was successful. Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8651 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8652 INFO [CredentialRefresher] credentialRefresher has started Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.8652 INFO [CredentialRefresher] Starting credentials refresher loop Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.9115 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 21 01:00:31.912889 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.9117 INFO [CredentialRefresher] Credentials ready Jan 21 01:00:31.953696 amazon-ssm-agent[2000]: 2026-01-21 01:00:31.9119 INFO [CredentialRefresher] Next credential rotation will be in 29.999994343266668 minutes Jan 21 01:00:32.072396 sshd[2133]: Accepted publickey for core from 68.220.241.50 port 33362 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:32.076397 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:32.084022 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 01:00:32.086851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 01:00:32.096582 systemd-logind[1929]: New session 1 of user core. Jan 21 01:00:32.111925 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 21 01:00:32.116354 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 21 01:00:32.132450 (systemd)[2185]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:32.135658 systemd-logind[1929]: New session 2 of user core. Jan 21 01:00:32.306101 systemd[2185]: Queued start job for default target default.target. Jan 21 01:00:32.317005 systemd[2185]: Created slice app.slice - User Application Slice. Jan 21 01:00:32.317044 systemd[2185]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 01:00:32.317060 systemd[2185]: Reached target paths.target - Paths. Jan 21 01:00:32.317111 systemd[2185]: Reached target timers.target - Timers. Jan 21 01:00:32.318618 systemd[2185]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 01:00:32.320896 systemd[2185]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 01:00:32.341281 systemd[2185]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 01:00:32.342764 systemd[2185]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 01:00:32.342960 systemd[2185]: Reached target sockets.target - Sockets. Jan 21 01:00:32.343027 systemd[2185]: Reached target basic.target - Basic System. Jan 21 01:00:32.343080 systemd[2185]: Reached target default.target - Main User Target. Jan 21 01:00:32.343130 systemd[2185]: Startup finished in 200ms. Jan 21 01:00:32.343924 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 01:00:32.352165 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 01:00:32.600104 systemd[1]: Started sshd@1-172.31.16.12:22-68.220.241.50:54786.service - OpenSSH per-connection server daemon (68.220.241.50:54786). 
Jan 21 01:00:32.923862 amazon-ssm-agent[2000]: 2026-01-21 01:00:32.9236 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 21 01:00:33.022832 sshd[2199]: Accepted publickey for core from 68.220.241.50 port 54786 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:33.024860 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:33.025182 amazon-ssm-agent[2000]: 2026-01-21 01:00:32.9274 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2204) started Jan 21 01:00:33.031502 systemd-logind[1929]: New session 3 of user core. Jan 21 01:00:33.038092 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 21 01:00:33.125314 amazon-ssm-agent[2000]: 2026-01-21 01:00:32.9274 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 21 01:00:33.257543 sshd[2206]: Connection closed by 68.220.241.50 port 54786 Jan 21 01:00:33.258263 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:33.263033 systemd[1]: sshd@1-172.31.16.12:22-68.220.241.50:54786.service: Deactivated successfully. Jan 21 01:00:33.266285 systemd[1]: session-3.scope: Deactivated successfully. Jan 21 01:00:33.267542 systemd-logind[1929]: Session 3 logged out. Waiting for processes to exit. Jan 21 01:00:33.270037 systemd-logind[1929]: Removed session 3. Jan 21 01:00:33.348982 systemd[1]: Started sshd@2-172.31.16.12:22-68.220.241.50:54794.service - OpenSSH per-connection server daemon (68.220.241.50:54794). Jan 21 01:00:33.778848 sshd[2222]: Accepted publickey for core from 68.220.241.50 port 54794 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:33.779782 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:33.785527 systemd-logind[1929]: New session 4 of user core. Jan 21 01:00:33.788017 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 21 01:00:34.011693 sshd[2228]: Connection closed by 68.220.241.50 port 54794 Jan 21 01:00:34.013640 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:34.018645 systemd[1]: sshd@2-172.31.16.12:22-68.220.241.50:54794.service: Deactivated successfully. Jan 21 01:00:34.021158 systemd[1]: session-4.scope: Deactivated successfully. Jan 21 01:00:34.022823 systemd-logind[1929]: Session 4 logged out. Waiting for processes to exit. Jan 21 01:00:34.024682 systemd-logind[1929]: Removed session 4. Jan 21 01:00:36.110283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:00:36.112584 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 01:00:36.113975 systemd[1]: Startup finished in 3.177s (kernel) + 8.228s (initrd) + 10.538s (userspace) = 21.944s. Jan 21 01:00:36.121627 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:00:38.176848 systemd-resolved[1524]: Clock change detected. Flushing caches. 
Jan 21 01:00:38.738498 kubelet[2238]: E0121 01:00:38.738397 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:00:38.740883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:00:38.741030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:00:38.741655 systemd[1]: kubelet.service: Consumed 1.012s CPU time, 257.1M memory peak. Jan 21 01:00:45.017098 systemd[1]: Started sshd@3-172.31.16.12:22-68.220.241.50:50782.service - OpenSSH per-connection server daemon (68.220.241.50:50782). Jan 21 01:00:45.476811 sshd[2250]: Accepted publickey for core from 68.220.241.50 port 50782 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:45.478324 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:45.484143 systemd-logind[1929]: New session 5 of user core. Jan 21 01:00:45.493088 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 01:00:45.728944 sshd[2254]: Connection closed by 68.220.241.50 port 50782 Jan 21 01:00:45.730966 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:45.735920 systemd[1]: sshd@3-172.31.16.12:22-68.220.241.50:50782.service: Deactivated successfully. Jan 21 01:00:45.738027 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 01:00:45.739952 systemd-logind[1929]: Session 5 logged out. Waiting for processes to exit. Jan 21 01:00:45.741667 systemd-logind[1929]: Removed session 5. Jan 21 01:00:45.825934 systemd[1]: Started sshd@4-172.31.16.12:22-68.220.241.50:50794.service - OpenSSH per-connection server daemon (68.220.241.50:50794). Jan 21 01:00:46.293573 sshd[2260]: Accepted publickey for core from 68.220.241.50 port 50794 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:46.295129 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:46.301329 systemd-logind[1929]: New session 6 of user core. Jan 21 01:00:46.308078 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 21 01:00:46.541040 sshd[2264]: Connection closed by 68.220.241.50 port 50794 Jan 21 01:00:46.541898 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:46.546494 systemd[1]: sshd@4-172.31.16.12:22-68.220.241.50:50794.service: Deactivated successfully. Jan 21 01:00:46.548585 systemd[1]: session-6.scope: Deactivated successfully. Jan 21 01:00:46.549890 systemd-logind[1929]: Session 6 logged out. Waiting for processes to exit. Jan 21 01:00:46.550942 systemd-logind[1929]: Removed session 6. Jan 21 01:00:46.620383 systemd[1]: Started sshd@5-172.31.16.12:22-68.220.241.50:50796.service - OpenSSH per-connection server daemon (68.220.241.50:50796). Jan 21 01:00:47.048109 sshd[2270]: Accepted publickey for core from 68.220.241.50 port 50796 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:47.049632 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:47.054988 systemd-logind[1929]: New session 7 of user core. Jan 21 01:00:47.063119 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 21 01:00:47.280725 sshd[2274]: Connection closed by 68.220.241.50 port 50796 Jan 21 01:00:47.281933 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:47.285739 systemd[1]: sshd@5-172.31.16.12:22-68.220.241.50:50796.service: Deactivated successfully. Jan 21 01:00:47.287418 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 01:00:47.289962 systemd-logind[1929]: Session 7 logged out. Waiting for processes to exit. Jan 21 01:00:47.291011 systemd-logind[1929]: Removed session 7. Jan 21 01:00:47.372502 systemd[1]: Started sshd@6-172.31.16.12:22-68.220.241.50:50800.service - OpenSSH per-connection server daemon (68.220.241.50:50800). Jan 21 01:00:47.802891 sshd[2280]: Accepted publickey for core from 68.220.241.50 port 50800 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:47.804378 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:47.810790 systemd-logind[1929]: New session 8 of user core. Jan 21 01:00:47.820047 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 21 01:00:47.978399 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 01:00:47.978850 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 01:00:48.401553 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 21 01:00:48.419219 (dockerd)[2304]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 01:00:48.737501 dockerd[2304]: time="2026-01-21T01:00:48.736114933Z" level=info msg="Starting up" Jan 21 01:00:48.737902 dockerd[2304]: time="2026-01-21T01:00:48.737391939Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 01:00:48.753008 dockerd[2304]: time="2026-01-21T01:00:48.752957456Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 01:00:48.770695 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4214337136-merged.mount: Deactivated successfully. Jan 21 01:00:48.772085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 01:00:48.774693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:00:48.857286 systemd[1]: var-lib-docker-metacopy\x2dcheck2699871491-merged.mount: Deactivated successfully. Jan 21 01:00:48.892807 dockerd[2304]: time="2026-01-21T01:00:48.890065641Z" level=info msg="Loading containers: start." Jan 21 01:00:48.908809 kernel: Initializing XFRM netlink socket Jan 21 01:00:49.087943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 21 01:00:49.101732 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:00:49.175525 kubelet[2385]: E0121 01:00:49.175482 2385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:00:49.181107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:00:49.181329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:00:49.182114 systemd[1]: kubelet.service: Consumed 228ms CPU time, 110.4M memory peak. Jan 21 01:00:49.219592 (udev-worker)[2328]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:00:49.270380 systemd-networkd[1555]: docker0: Link UP Jan 21 01:00:49.275489 dockerd[2304]: time="2026-01-21T01:00:49.275428275Z" level=info msg="Loading containers: done." Jan 21 01:00:49.354610 dockerd[2304]: time="2026-01-21T01:00:49.354016385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 01:00:49.354610 dockerd[2304]: time="2026-01-21T01:00:49.354131206Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 01:00:49.354610 dockerd[2304]: time="2026-01-21T01:00:49.354258440Z" level=info msg="Initializing buildkit" Jan 21 01:00:49.385101 dockerd[2304]: time="2026-01-21T01:00:49.384839576Z" level=info msg="Completed buildkit initialization" Jan 21 01:00:49.394098 dockerd[2304]: time="2026-01-21T01:00:49.394043007Z" level=info msg="Daemon has completed initialization" Jan 21 01:00:49.394465 dockerd[2304]: time="2026-01-21T01:00:49.394259920Z" level=info msg="API listen on /run/docker.sock" Jan 21 01:00:49.394974 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 01:00:49.770296 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck425313313-merged.mount: Deactivated successfully. Jan 21 01:00:51.428307 containerd[1959]: time="2026-01-21T01:00:51.428231781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 21 01:00:52.347295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403925248.mount: Deactivated successfully. 
Jan 21 01:00:53.706280 containerd[1959]: time="2026-01-21T01:00:53.706216809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:53.707898 containerd[1959]: time="2026-01-21T01:00:53.707845845Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26233861" Jan 21 01:00:53.709474 containerd[1959]: time="2026-01-21T01:00:53.709365057Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:53.714210 containerd[1959]: time="2026-01-21T01:00:53.714168003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:53.715792 containerd[1959]: time="2026-01-21T01:00:53.715249673Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.286974027s" Jan 21 01:00:53.715792 containerd[1959]: time="2026-01-21T01:00:53.715288424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 21 01:00:53.716113 containerd[1959]: time="2026-01-21T01:00:53.716076545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 21 01:00:55.834592 containerd[1959]: time="2026-01-21T01:00:55.834540289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:55.878507 containerd[1959]: time="2026-01-21T01:00:55.878436435Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 21 01:00:55.922795 containerd[1959]: time="2026-01-21T01:00:55.922365324Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:55.968070 containerd[1959]: time="2026-01-21T01:00:55.967996007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:55.969180 containerd[1959]: time="2026-01-21T01:00:55.969132192Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.253015713s" Jan 21 01:00:55.969367 containerd[1959]: time="2026-01-21T01:00:55.969341335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 21 01:00:55.970353 
containerd[1959]: time="2026-01-21T01:00:55.970315907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 21 01:00:57.374177 containerd[1959]: time="2026-01-21T01:00:57.374114132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:57.375259 containerd[1959]: time="2026-01-21T01:00:57.375195197Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 21 01:00:57.376556 containerd[1959]: time="2026-01-21T01:00:57.376497995Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:57.379343 containerd[1959]: time="2026-01-21T01:00:57.379284939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:57.380160 containerd[1959]: time="2026-01-21T01:00:57.380053517Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.409708422s" Jan 21 01:00:57.380160 containerd[1959]: time="2026-01-21T01:00:57.380082754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 21 01:00:57.380687 containerd[1959]: time="2026-01-21T01:00:57.380650771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 21 01:00:58.430955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271464935.mount: Deactivated successfully. 
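As a rough cross-check, the bytes-read and duration that containerd reports for each pull imply the download rate directly; using the kube-scheduler pull logged just above (illustrative arithmetic only, figures copied from the log):
    # Rough download throughput for the kube-scheduler pull above.
    bytes_read = 15717792          # "bytes read" reported by containerd
    duration_s = 1.409708422       # "in 1.409708422s"
    print(f"{bytes_read / duration_s / 2**20:.1f} MiB/s")   # roughly 10.6 MiB/s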
Jan 21 01:00:58.810944 containerd[1959]: time="2026-01-21T01:00:58.810609869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.812068 containerd[1959]: time="2026-01-21T01:00:58.811920340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=14375786" Jan 21 01:00:58.813255 containerd[1959]: time="2026-01-21T01:00:58.813217535Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.815262 containerd[1959]: time="2026-01-21T01:00:58.815207977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.815836 containerd[1959]: time="2026-01-21T01:00:58.815626949Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.434945188s" Jan 21 01:00:58.815836 containerd[1959]: time="2026-01-21T01:00:58.815668533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 21 01:00:58.816815 containerd[1959]: time="2026-01-21T01:00:58.816770560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 21 01:00:59.326399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 01:00:59.329729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:00:59.342760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578704663.mount: Deactivated successfully. Jan 21 01:00:59.642712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:00:59.653285 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:00:59.730433 kubelet[2624]: E0121 01:00:59.730322 2624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:00:59.734806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:00:59.735011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:00:59.735833 systemd[1]: kubelet.service: Consumed 222ms CPU time, 108.4M memory peak. 
Jan 21 01:01:00.469234 containerd[1959]: time="2026-01-21T01:01:00.469175317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.470423 containerd[1959]: time="2026-01-21T01:01:00.470152151Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Jan 21 01:01:00.471375 containerd[1959]: time="2026-01-21T01:01:00.471340749Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.474189 containerd[1959]: time="2026-01-21T01:01:00.474138089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.475112 containerd[1959]: time="2026-01-21T01:01:00.474957197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.658139315s" Jan 21 01:01:00.475112 containerd[1959]: time="2026-01-21T01:01:00.474989306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 21 01:01:00.475505 containerd[1959]: time="2026-01-21T01:01:00.475483717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 21 01:01:00.954305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952959423.mount: Deactivated successfully. 
Jan 21 01:01:00.962239 containerd[1959]: time="2026-01-21T01:01:00.962195001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.963318 containerd[1959]: time="2026-01-21T01:01:00.963144862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 21 01:01:00.964431 containerd[1959]: time="2026-01-21T01:01:00.964396454Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.967226 containerd[1959]: time="2026-01-21T01:01:00.967099814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.967828 containerd[1959]: time="2026-01-21T01:01:00.967647557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 492.132547ms" Jan 21 01:01:00.967828 containerd[1959]: time="2026-01-21T01:01:00.967678596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 21 01:01:00.968136 containerd[1959]: time="2026-01-21T01:01:00.968102093Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 21 01:01:02.068945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652172836.mount: Deactivated successfully. Jan 21 01:01:02.676291 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 21 01:01:06.242436 containerd[1959]: time="2026-01-21T01:01:06.242380214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:06.243566 containerd[1959]: time="2026-01-21T01:01:06.243387810Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73504897" Jan 21 01:01:06.244571 containerd[1959]: time="2026-01-21T01:01:06.244533436Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:06.247324 containerd[1959]: time="2026-01-21T01:01:06.247287229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:06.249811 containerd[1959]: time="2026-01-21T01:01:06.248353239Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.280210937s" Jan 21 01:01:06.249811 containerd[1959]: time="2026-01-21T01:01:06.248394984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 21 01:01:09.422443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:09.422715 systemd[1]: kubelet.service: Consumed 222ms CPU time, 108.4M memory peak. Jan 21 01:01:09.425503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:09.468870 systemd[1]: Reload requested from client PID 2760 ('systemctl') (unit session-8.scope)... Jan 21 01:01:09.468921 systemd[1]: Reloading... Jan 21 01:01:09.611816 zram_generator::config[2805]: No configuration found. Jan 21 01:01:09.914467 systemd[1]: Reloading finished in 444 ms. Jan 21 01:01:10.035365 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 01:01:10.035478 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 01:01:10.035884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:10.035953 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98M memory peak. Jan 21 01:01:10.038082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:10.340053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:10.353451 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 01:01:10.417799 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 01:01:10.417799 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 01:01:10.417799 kubelet[2870]: I0121 01:01:10.416611 2870 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 01:01:10.958146 kubelet[2870]: I0121 01:01:10.958093 2870 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 21 01:01:10.958146 kubelet[2870]: I0121 01:01:10.958122 2870 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 01:01:10.958146 kubelet[2870]: I0121 01:01:10.958145 2870 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 21 01:01:10.960148 kubelet[2870]: I0121 01:01:10.960105 2870 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 01:01:10.960389 kubelet[2870]: I0121 01:01:10.960372 2870 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 01:01:10.983373 kubelet[2870]: I0121 01:01:10.982159 2870 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 01:01:10.987106 kubelet[2870]: E0121 01:01:10.987054 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 01:01:11.001637 kubelet[2870]: I0121 01:01:11.001539 2870 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 01:01:11.012487 kubelet[2870]: I0121 01:01:11.012443 2870 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 21 01:01:11.015568 kubelet[2870]: I0121 01:01:11.015194 2870 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 01:01:11.017469 kubelet[2870]: I0121 01:01:11.015271 2870 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 01:01:11.017688 kubelet[2870]: I0121 01:01:11.017483 2870 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 01:01:11.017688 kubelet[2870]: I0121 01:01:11.017503 2870 container_manager_linux.go:306] "Creating device plugin manager" Jan 21 01:01:11.017688 kubelet[2870]: I0121 01:01:11.017631 2870 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 21 01:01:11.035313 kubelet[2870]: I0121 01:01:11.034980 2870 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:11.036833 kubelet[2870]: I0121 01:01:11.036623 2870 kubelet.go:475] "Attempting to sync node with API server" Jan 21 01:01:11.036833 kubelet[2870]: I0121 01:01:11.036657 2870 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 01:01:11.036833 kubelet[2870]: I0121 01:01:11.036689 2870 kubelet.go:387] "Adding apiserver pod source" Jan 21 01:01:11.036833 kubelet[2870]: I0121 01:01:11.036724 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 01:01:11.038559 kubelet[2870]: E0121 01:01:11.038519 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-12&limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 01:01:11.042621 kubelet[2870]: E0121 01:01:11.042307 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 01:01:11.042962 kubelet[2870]: I0121 01:01:11.042943 2870 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 01:01:11.047991 kubelet[2870]: I0121 01:01:11.047963 2870 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 01:01:11.048205 kubelet[2870]: I0121 01:01:11.048139 2870 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 21 01:01:11.054897 kubelet[2870]: W0121 01:01:11.054850 2870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 01:01:11.067217 kubelet[2870]: I0121 01:01:11.066711 2870 server.go:1262] "Started kubelet" Jan 21 01:01:11.072288 kubelet[2870]: I0121 01:01:11.072258 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 01:01:11.079062 kubelet[2870]: I0121 01:01:11.079017 2870 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 01:01:11.080296 kubelet[2870]: E0121 01:01:11.073640 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-12.188c994404c5bd0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-12,UID:ip-172-31-16-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-12,},FirstTimestamp:2026-01-21 01:01:11.066656015 +0000 UTC m=+0.708482249,LastTimestamp:2026-01-21 01:01:11.066656015 +0000 UTC m=+0.708482249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-12,}" Jan 21 01:01:11.084829 kubelet[2870]: I0121 01:01:11.082873 2870 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 01:01:11.084829 kubelet[2870]: I0121 01:01:11.082940 2870 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 21 01:01:11.084829 kubelet[2870]: I0121 01:01:11.083247 2870 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 01:01:11.092936 kubelet[2870]: I0121 01:01:11.092352 2870 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 01:01:11.095287 kubelet[2870]: I0121 01:01:11.095054 2870 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 21 01:01:11.095912 kubelet[2870]: E0121 01:01:11.095892 2870 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-12\" not found" Jan 21 01:01:11.099573 kubelet[2870]: I0121 01:01:11.099532 2870 server.go:310] "Adding debug handlers to kubelet server" Jan 21 01:01:11.103429 kubelet[2870]: I0121 01:01:11.100845 2870 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 01:01:11.103429 kubelet[2870]: I0121 
01:01:11.100992 2870 reconciler.go:29] "Reconciler: start to sync state" Jan 21 01:01:11.103429 kubelet[2870]: E0121 01:01:11.101839 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 01:01:11.103429 kubelet[2870]: E0121 01:01:11.101927 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="200ms" Jan 21 01:01:11.104610 kubelet[2870]: I0121 01:01:11.104086 2870 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 01:01:11.107137 kubelet[2870]: I0121 01:01:11.107113 2870 factory.go:223] Registration of the containerd container factory successfully Jan 21 01:01:11.107137 kubelet[2870]: I0121 01:01:11.107134 2870 factory.go:223] Registration of the systemd container factory successfully Jan 21 01:01:11.139106 kubelet[2870]: I0121 01:01:11.138356 2870 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 01:01:11.139106 kubelet[2870]: I0121 01:01:11.138376 2870 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 01:01:11.139106 kubelet[2870]: I0121 01:01:11.138395 2870 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:11.149581 kubelet[2870]: I0121 01:01:11.149522 2870 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 21 01:01:11.165247 kubelet[2870]: I0121 01:01:11.151382 2870 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 21 01:01:11.165247 kubelet[2870]: I0121 01:01:11.151406 2870 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 21 01:01:11.165247 kubelet[2870]: I0121 01:01:11.151445 2870 kubelet.go:2427] "Starting kubelet main sync loop" Jan 21 01:01:11.165247 kubelet[2870]: E0121 01:01:11.151500 2870 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 01:01:11.165247 kubelet[2870]: E0121 01:01:11.157641 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 01:01:11.166219 kubelet[2870]: I0121 01:01:11.165286 2870 policy_none.go:49] "None policy: Start" Jan 21 01:01:11.166219 kubelet[2870]: I0121 01:01:11.165310 2870 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 21 01:01:11.166219 kubelet[2870]: I0121 01:01:11.165322 2870 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 21 01:01:11.173030 kubelet[2870]: I0121 01:01:11.172610 2870 policy_none.go:47] "Start" Jan 21 01:01:11.178342 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
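Every API call in this stretch of the log fails the same way: dial tcp 172.31.16.12:6443: connect: connection refused. That is expected while a control-plane node bootstraps itself, because the kube-apiserver the kubelet is trying to reach only exists once the static pods below are running. A standalone Go probe (address taken from the log) that reproduces the check behind those errors:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 172.31.16.12:6443 is the apiserver endpoint the kubelet retries throughout this log.
        addr := "172.31.16.12:6443"
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err) // typically "connect: connection refused" at this point
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections:", addr)
    }

Once the kube-apiserver container starts serving, the same probe succeeds, and the reflector, lease, CSR and node-registration retries seen below clear on their next attempts.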
Jan 21 01:01:11.196341 kubelet[2870]: E0121 01:01:11.196305 2870 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-12\" not found" Jan 21 01:01:11.199656 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 01:01:11.205234 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 21 01:01:11.215195 kubelet[2870]: E0121 01:01:11.214514 2870 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 01:01:11.215195 kubelet[2870]: I0121 01:01:11.214718 2870 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 01:01:11.215195 kubelet[2870]: I0121 01:01:11.214728 2870 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 01:01:11.217582 kubelet[2870]: I0121 01:01:11.216148 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 01:01:11.218378 kubelet[2870]: E0121 01:01:11.218362 2870 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 01:01:11.218662 kubelet[2870]: E0121 01:01:11.218618 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-12\" not found" Jan 21 01:01:11.275787 systemd[1]: Created slice kubepods-burstable-pod7961951e66f9652959a88e6ab6547a09.slice - libcontainer container kubepods-burstable-pod7961951e66f9652959a88e6ab6547a09.slice. Jan 21 01:01:11.292457 kubelet[2870]: E0121 01:01:11.292347 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:11.296194 systemd[1]: Created slice kubepods-burstable-pod431dacaade787a91cf1cbd0de68ef94b.slice - libcontainer container kubepods-burstable-pod431dacaade787a91cf1cbd0de68ef94b.slice. Jan 21 01:01:11.303133 kubelet[2870]: E0121 01:01:11.303085 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="400ms" Jan 21 01:01:11.306298 kubelet[2870]: E0121 01:01:11.306261 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:11.309541 systemd[1]: Created slice kubepods-burstable-pod8d727e33e70f88b0167065f482b723b8.slice - libcontainer container kubepods-burstable-pod8d727e33e70f88b0167065f482b723b8.slice. 
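The NodeConfig dump earlier in this log lists the hard eviction thresholds the eviction manager just started enforcing: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A toy Go sketch of how a threshold of that shape (absolute quantity or percentage of capacity) is compared against current availability; the capacities and free amounts below are invented for illustration, not read from this node:

    package main

    import "fmt"

    // threshold mirrors the structure in the NodeConfig dump: a signal name plus
    // either an absolute quantity in bytes or a percentage of capacity.
    type threshold struct {
        signal     string
        percentage float64 // fraction of capacity; 0 when the quantity form is used
        quantity   int64   // absolute bytes; 0 when the percentage form is used
    }

    // crossed reports whether available has dropped below the threshold.
    func crossed(t threshold, capacity, available int64) bool {
        limit := t.quantity
        if t.percentage > 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percentage: 0.10}    // 10%
        fmt.Println(memory.signal, "crossed:", crossed(memory, 2<<30, 80<<20)) // 80Mi free of 2Gi  -> true
        fmt.Println(nodefs.signal, "crossed:", crossed(nodefs, 20<<30, 5<<30)) // 5Gi free of 20Gi -> false
    }

The "failed to get summary stats" errors right above are why nothing is actually evaluated yet: until the node object exists, the eviction manager has no stats to compare against these thresholds.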
Jan 21 01:01:11.312347 kubelet[2870]: E0121 01:01:11.312319 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:11.318729 kubelet[2870]: I0121 01:01:11.317104 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:11.318729 kubelet[2870]: E0121 01:01:11.318604 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:11.402545 kubelet[2870]: I0121 01:01:11.402483 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:11.402545 kubelet[2870]: I0121 01:01:11.402532 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:11.402545 kubelet[2870]: I0121 01:01:11.402551 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:11.403020 kubelet[2870]: I0121 01:01:11.402565 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:11.403020 kubelet[2870]: I0121 01:01:11.402585 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:11.403020 kubelet[2870]: I0121 01:01:11.402598 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:11.403020 kubelet[2870]: I0121 01:01:11.402612 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 
01:01:11.403020 kubelet[2870]: I0121 01:01:11.402626 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:11.403153 kubelet[2870]: I0121 01:01:11.402640 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7961951e66f9652959a88e6ab6547a09-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-12\" (UID: \"7961951e66f9652959a88e6ab6547a09\") " pod="kube-system/kube-scheduler-ip-172-31-16-12" Jan 21 01:01:11.520767 kubelet[2870]: I0121 01:01:11.520649 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:11.521417 kubelet[2870]: E0121 01:01:11.521079 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:11.599211 containerd[1959]: time="2026-01-21T01:01:11.599146164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-12,Uid:7961951e66f9652959a88e6ab6547a09,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:11.611023 containerd[1959]: time="2026-01-21T01:01:11.610831646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-12,Uid:431dacaade787a91cf1cbd0de68ef94b,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:11.616339 containerd[1959]: time="2026-01-21T01:01:11.616295806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-12,Uid:8d727e33e70f88b0167065f482b723b8,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:11.704552 kubelet[2870]: E0121 01:01:11.704514 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="800ms" Jan 21 01:01:11.923364 kubelet[2870]: I0121 01:01:11.923336 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:11.923697 kubelet[2870]: E0121 01:01:11.923662 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:12.064105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227655861.mount: Deactivated successfully. 
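The "Failed to ensure lease exists, will retry" errors step their interval through 200ms, 400ms and now 800ms (and later 1.6s, 3.2s and 6.4s): the lease controller doubles its retry interval on each consecutive failure. A toy Go loop with that shape; the 7s ceiling used here is an assumption for the sketch, not something stated in this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first retry interval reported above
        maxInterval := 7 * time.Second     // assumed cap for this sketch
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

The doubling keeps the kubelet from hammering an apiserver that is still coming up, and the successful node registration at 01:01:23 below shows the retries stop as soon as requests start succeeding.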
Jan 21 01:01:12.069528 kubelet[2870]: E0121 01:01:12.069489 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 01:01:12.073278 containerd[1959]: time="2026-01-21T01:01:12.073222602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:12.076699 containerd[1959]: time="2026-01-21T01:01:12.076584883Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:12.079552 containerd[1959]: time="2026-01-21T01:01:12.079509714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 01:01:12.080835 containerd[1959]: time="2026-01-21T01:01:12.080794243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 01:01:12.083456 containerd[1959]: time="2026-01-21T01:01:12.083276583Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:12.084285 containerd[1959]: time="2026-01-21T01:01:12.084230086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:12.085358 containerd[1959]: time="2026-01-21T01:01:12.085087366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 01:01:12.089637 containerd[1959]: time="2026-01-21T01:01:12.088368269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:12.089637 containerd[1959]: time="2026-01-21T01:01:12.089205158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.013575ms" Jan 21 01:01:12.090175 containerd[1959]: time="2026-01-21T01:01:12.090135132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.607684ms" Jan 21 01:01:12.091823 containerd[1959]: time="2026-01-21T01:01:12.091768773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 478.798862ms" Jan 21 01:01:12.242898 containerd[1959]: time="2026-01-21T01:01:12.241967076Z" level=info msg="connecting to shim 98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e" address="unix:///run/containerd/s/ba750ce8aeef77bdfd7c1f3119544c6b0256d0d943965eb3c04dbb70fdb492d1" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:12.245983 containerd[1959]: time="2026-01-21T01:01:12.245939608Z" level=info msg="connecting to shim f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465" address="unix:///run/containerd/s/bd4bedbb3c8c60e4aaeea1891b0b8df78280b60715cd5163e126f8f9eb131454" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:12.248973 containerd[1959]: time="2026-01-21T01:01:12.248930814Z" level=info msg="connecting to shim 4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b" address="unix:///run/containerd/s/8913fa396f35a09e590aeb1222e3274d1ff84cd0a5ad6a98e96c01db3b6bd852" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:12.304111 kubelet[2870]: E0121 01:01:12.303986 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 01:01:12.342698 kubelet[2870]: E0121 01:01:12.342505 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 01:01:12.404417 systemd[1]: Started cri-containerd-4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b.scope - libcontainer container 4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b. Jan 21 01:01:12.406340 systemd[1]: Started cri-containerd-98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e.scope - libcontainer container 98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e. Jan 21 01:01:12.408134 systemd[1]: Started cri-containerd-f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465.scope - libcontainer container f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465. 
Jan 21 01:01:12.422293 kubelet[2870]: E0121 01:01:12.422239 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-12&limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 01:01:12.508038 kubelet[2870]: E0121 01:01:12.507455 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="1.6s" Jan 21 01:01:12.526466 containerd[1959]: time="2026-01-21T01:01:12.526393131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-12,Uid:8d727e33e70f88b0167065f482b723b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b\"" Jan 21 01:01:12.532814 containerd[1959]: time="2026-01-21T01:01:12.532361556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-12,Uid:431dacaade787a91cf1cbd0de68ef94b,Namespace:kube-system,Attempt:0,} returns sandbox id \"98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e\"" Jan 21 01:01:12.541736 containerd[1959]: time="2026-01-21T01:01:12.541682463Z" level=info msg="CreateContainer within sandbox \"98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 01:01:12.543313 containerd[1959]: time="2026-01-21T01:01:12.542970874Z" level=info msg="CreateContainer within sandbox \"4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 01:01:12.551260 containerd[1959]: time="2026-01-21T01:01:12.551179047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-12,Uid:7961951e66f9652959a88e6ab6547a09,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465\"" Jan 21 01:01:12.555919 containerd[1959]: time="2026-01-21T01:01:12.555879409Z" level=info msg="CreateContainer within sandbox \"f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 01:01:12.584616 containerd[1959]: time="2026-01-21T01:01:12.584473891Z" level=info msg="Container 67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:12.585012 containerd[1959]: time="2026-01-21T01:01:12.584966138Z" level=info msg="Container 3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:12.585278 containerd[1959]: time="2026-01-21T01:01:12.585255492Z" level=info msg="Container da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:12.592855 containerd[1959]: time="2026-01-21T01:01:12.592784984Z" level=info msg="CreateContainer within sandbox \"98efba1d3390b2d6c5b7ee85ea85426fe4df8a780901aa276bc78c74195f863e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566\"" Jan 21 01:01:12.594064 containerd[1959]: 
time="2026-01-21T01:01:12.593746817Z" level=info msg="StartContainer for \"3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566\"" Jan 21 01:01:12.597571 containerd[1959]: time="2026-01-21T01:01:12.597520449Z" level=info msg="connecting to shim 3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566" address="unix:///run/containerd/s/ba750ce8aeef77bdfd7c1f3119544c6b0256d0d943965eb3c04dbb70fdb492d1" protocol=ttrpc version=3 Jan 21 01:01:12.598748 containerd[1959]: time="2026-01-21T01:01:12.598438851Z" level=info msg="CreateContainer within sandbox \"f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad\"" Jan 21 01:01:12.599652 containerd[1959]: time="2026-01-21T01:01:12.599606109Z" level=info msg="StartContainer for \"da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad\"" Jan 21 01:01:12.602984 containerd[1959]: time="2026-01-21T01:01:12.602907480Z" level=info msg="connecting to shim da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad" address="unix:///run/containerd/s/bd4bedbb3c8c60e4aaeea1891b0b8df78280b60715cd5163e126f8f9eb131454" protocol=ttrpc version=3 Jan 21 01:01:12.604636 containerd[1959]: time="2026-01-21T01:01:12.604534768Z" level=info msg="CreateContainer within sandbox \"4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd\"" Jan 21 01:01:12.605840 containerd[1959]: time="2026-01-21T01:01:12.605453124Z" level=info msg="StartContainer for \"67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd\"" Jan 21 01:01:12.606742 containerd[1959]: time="2026-01-21T01:01:12.606716589Z" level=info msg="connecting to shim 67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd" address="unix:///run/containerd/s/8913fa396f35a09e590aeb1222e3274d1ff84cd0a5ad6a98e96c01db3b6bd852" protocol=ttrpc version=3 Jan 21 01:01:12.643166 systemd[1]: Started cri-containerd-da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad.scope - libcontainer container da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad. Jan 21 01:01:12.655526 systemd[1]: Started cri-containerd-3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566.scope - libcontainer container 3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566. Jan 21 01:01:12.680105 systemd[1]: Started cri-containerd-67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd.scope - libcontainer container 67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd. 
Jan 21 01:01:12.726212 kubelet[2870]: I0121 01:01:12.726155 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:12.728669 kubelet[2870]: E0121 01:01:12.728616 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:12.783007 containerd[1959]: time="2026-01-21T01:01:12.781964056Z" level=info msg="StartContainer for \"3f5b15c6c594925691fde5ce966d575b18c34cf099d7341a055ca967d1412566\" returns successfully" Jan 21 01:01:12.788758 containerd[1959]: time="2026-01-21T01:01:12.788714662Z" level=info msg="StartContainer for \"da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad\" returns successfully" Jan 21 01:01:12.802605 containerd[1959]: time="2026-01-21T01:01:12.802560115Z" level=info msg="StartContainer for \"67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd\" returns successfully" Jan 21 01:01:13.089472 kubelet[2870]: E0121 01:01:13.089169 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 01:01:13.169491 kubelet[2870]: E0121 01:01:13.169242 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:13.171406 kubelet[2870]: E0121 01:01:13.171210 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:13.174822 kubelet[2870]: E0121 01:01:13.174798 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:13.372700 kubelet[2870]: E0121 01:01:13.372507 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-12.188c994404c5bd0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-12,UID:ip-172-31-16-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-12,},FirstTimestamp:2026-01-21 01:01:11.066656015 +0000 UTC m=+0.708482249,LastTimestamp:2026-01-21 01:01:11.066656015 +0000 UTC m=+0.708482249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-12,}" Jan 21 01:01:13.673486 kubelet[2870]: E0121 01:01:13.673423 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 01:01:14.108558 kubelet[2870]: E0121 01:01:14.108447 2870 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="3.2s" Jan 21 01:01:14.176387 kubelet[2870]: E0121 01:01:14.176355 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:14.176814 kubelet[2870]: E0121 01:01:14.176768 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:14.281382 kubelet[2870]: E0121 01:01:14.281340 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-12&limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 01:01:14.331532 kubelet[2870]: I0121 01:01:14.331489 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:14.331907 kubelet[2870]: E0121 01:01:14.331873 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:14.807586 kubelet[2870]: E0121 01:01:14.807547 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 01:01:14.853052 kubelet[2870]: E0121 01:01:14.853014 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 01:01:16.356765 kubelet[2870]: E0121 01:01:16.356733 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:16.520111 kubelet[2870]: E0121 01:01:16.520082 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:17.068501 update_engine[1930]: I20260121 01:01:17.067818 1930 update_attempter.cc:509] Updating boot flags... 
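The reflector errors repeat the exact LIST requests the kubelet's informers issue before they can start watching, for example /apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0 and /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-12&limit=500&resourceVersion=0. A short Go sketch rebuilding one of those URLs with net/url, mainly to make the percent-encoded fieldSelector readable:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        u := url.URL{
            Scheme: "https",
            Host:   "172.31.16.12:6443",
            Path:   "/api/v1/nodes",
        }
        q := url.Values{}
        q.Set("fieldSelector", "metadata.name=ip-172-31-16-12") // only this node's object
        q.Set("limit", "500")                                   // page size for the initial list
        q.Set("resourceVersion", "0")                           // any cached version is acceptable
        u.RawQuery = q.Encode()
        fmt.Println(u.String())
        // https://172.31.16.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-12&limit=500&resourceVersion=0
    }

The list-then-watch pattern explains why the same four resource types (Node, Service, CSIDriver, RuntimeClass) fail over and over: each reflector keeps retrying its initial list until the apiserver finally answers.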
Jan 21 01:01:17.311305 kubelet[2870]: E0121 01:01:17.311235 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-12?timeout=10s\": dial tcp 172.31.16.12:6443: connect: connection refused" interval="6.4s" Jan 21 01:01:17.335386 kubelet[2870]: E0121 01:01:17.335279 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 01:01:17.367170 kubelet[2870]: E0121 01:01:17.362249 2870 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 01:01:17.534947 kubelet[2870]: I0121 01:01:17.534496 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:17.534947 kubelet[2870]: E0121 01:01:17.534844 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.12:6443/api/v1/nodes\": dial tcp 172.31.16.12:6443: connect: connection refused" node="ip-172-31-16-12" Jan 21 01:01:19.745948 kubelet[2870]: E0121 01:01:19.745914 2870 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:20.060182 kubelet[2870]: I0121 01:01:20.059901 2870 apiserver.go:52] "Watching apiserver" Jan 21 01:01:20.101437 kubelet[2870]: I0121 01:01:20.101387 2870 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 01:01:20.351227 kubelet[2870]: E0121 01:01:20.351104 2870 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-12" not found Jan 21 01:01:20.708893 kubelet[2870]: E0121 01:01:20.708717 2870 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-12" not found Jan 21 01:01:21.146971 kubelet[2870]: E0121 01:01:21.146871 2870 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-12" not found Jan 21 01:01:21.219060 kubelet[2870]: E0121 01:01:21.219019 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-12\" not found" Jan 21 01:01:22.051749 kubelet[2870]: E0121 01:01:22.051709 2870 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-12" not found Jan 21 01:01:23.715826 kubelet[2870]: E0121 01:01:23.715765 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-12\" not found" node="ip-172-31-16-12" Jan 21 01:01:23.937320 kubelet[2870]: I0121 01:01:23.937279 2870 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 
01:01:23.944012 kubelet[2870]: I0121 01:01:23.943969 2870 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-12" Jan 21 01:01:24.000601 kubelet[2870]: I0121 01:01:23.999475 2870 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:24.018296 kubelet[2870]: I0121 01:01:24.018135 2870 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-12" Jan 21 01:01:24.019439 systemd[1]: Reload requested from client PID 3252 ('systemctl') (unit session-8.scope)... Jan 21 01:01:24.019455 systemd[1]: Reloading... Jan 21 01:01:24.033977 kubelet[2870]: I0121 01:01:24.033275 2870 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:24.160880 zram_generator::config[3299]: No configuration found. Jan 21 01:01:24.491340 systemd[1]: Reloading finished in 471 ms. Jan 21 01:01:24.535446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:24.556872 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 01:01:24.557541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:24.557631 systemd[1]: kubelet.service: Consumed 1.206s CPU time, 122.5M memory peak. Jan 21 01:01:24.560056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:24.877806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:24.891135 (kubelet)[3359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 01:01:25.012847 kubelet[3359]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 01:01:25.013565 kubelet[3359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 01:01:25.018871 kubelet[3359]: I0121 01:01:25.017438 3359 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 01:01:25.043383 kubelet[3359]: I0121 01:01:25.043338 3359 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 21 01:01:25.043383 kubelet[3359]: I0121 01:01:25.043372 3359 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 01:01:25.052703 kubelet[3359]: I0121 01:01:25.052647 3359 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 21 01:01:25.053922 kubelet[3359]: I0121 01:01:25.052816 3359 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 21 01:01:25.053922 kubelet[3359]: I0121 01:01:25.053209 3359 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 01:01:25.055286 kubelet[3359]: I0121 01:01:25.055264 3359 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 01:01:25.058486 kubelet[3359]: I0121 01:01:25.058457 3359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 01:01:25.081185 kubelet[3359]: I0121 01:01:25.081146 3359 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 01:01:25.092757 kubelet[3359]: I0121 01:01:25.092676 3359 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 21 01:01:25.095423 kubelet[3359]: I0121 01:01:25.095356 3359 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 01:01:25.095733 kubelet[3359]: I0121 01:01:25.095569 3359 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 01:01:25.095897 kubelet[3359]: I0121 01:01:25.095887 3359 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 01:01:25.095945 kubelet[3359]: I0121 01:01:25.095940 3359 container_manager_linux.go:306] "Creating device plugin manager" Jan 21 01:01:25.096020 kubelet[3359]: I0121 01:01:25.096013 3359 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 21 01:01:25.106682 kubelet[3359]: I0121 01:01:25.106644 3359 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:25.107034 kubelet[3359]: I0121 01:01:25.107005 3359 kubelet.go:475] "Attempting to sync node with API server" Jan 21 01:01:25.107034 kubelet[3359]: I0121 01:01:25.107029 3359 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 01:01:25.108986 kubelet[3359]: I0121 01:01:25.108885 3359 kubelet.go:387] "Adding apiserver pod source" 
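"Client rotation is on, will bootstrap in background" together with "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" means this second kubelet start already has a signed client certificate from the earlier bootstrap and will rotate it in the background. A standard-library Go sketch that inspects that bundle on the node itself (the path is the one reported in the log; the file normally holds the certificate and key concatenated, so non-certificate PEM blocks are skipped):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Walk every PEM block in the file and report the client certificate's
        // subject and expiry; the private key block is ignored.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                return
            }
            fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
        }
    }

On the first kubelet start the certificate_manager errors above show the signing request failing against the unreachable apiserver; by this second start the signed pair evidently exists on disk, so only the background rotation controller has work left to do.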
Jan 21 01:01:25.108986 kubelet[3359]: I0121 01:01:25.108918 3359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 01:01:25.112183 kubelet[3359]: I0121 01:01:25.112156 3359 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 01:01:25.116471 kubelet[3359]: I0121 01:01:25.114953 3359 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 01:01:25.116471 kubelet[3359]: I0121 01:01:25.115005 3359 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 21 01:01:25.119918 kubelet[3359]: I0121 01:01:25.119890 3359 server.go:1262] "Started kubelet" Jan 21 01:01:25.138483 kubelet[3359]: I0121 01:01:25.138310 3359 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 01:01:25.138810 kubelet[3359]: I0121 01:01:25.138765 3359 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 21 01:01:25.139297 kubelet[3359]: I0121 01:01:25.139281 3359 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 01:01:25.154994 kubelet[3359]: I0121 01:01:25.152997 3359 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 01:01:25.155440 kubelet[3359]: I0121 01:01:25.155417 3359 server.go:310] "Adding debug handlers to kubelet server" Jan 21 01:01:25.157572 kubelet[3359]: I0121 01:01:25.157550 3359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 01:01:25.158520 kubelet[3359]: E0121 01:01:25.158496 3359 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 01:01:25.162595 kubelet[3359]: I0121 01:01:25.162568 3359 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 21 01:01:25.162753 kubelet[3359]: I0121 01:01:25.162681 3359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 01:01:25.165504 kubelet[3359]: I0121 01:01:25.165480 3359 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 01:01:25.165928 kubelet[3359]: I0121 01:01:25.165910 3359 reconciler.go:29] "Reconciler: start to sync state" Jan 21 01:01:25.176664 kubelet[3359]: I0121 01:01:25.176365 3359 factory.go:223] Registration of the containerd container factory successfully Jan 21 01:01:25.176664 kubelet[3359]: I0121 01:01:25.176391 3359 factory.go:223] Registration of the systemd container factory successfully Jan 21 01:01:25.176664 kubelet[3359]: I0121 01:01:25.176486 3359 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 01:01:25.185069 kubelet[3359]: I0121 01:01:25.185028 3359 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 21 01:01:25.187448 kubelet[3359]: I0121 01:01:25.187133 3359 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 21 01:01:25.187448 kubelet[3359]: I0121 01:01:25.187156 3359 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 21 01:01:25.187448 kubelet[3359]: I0121 01:01:25.187178 3359 kubelet.go:2427] "Starting kubelet main sync loop" Jan 21 01:01:25.187448 kubelet[3359]: E0121 01:01:25.187216 3359 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 01:01:25.259695 kubelet[3359]: I0121 01:01:25.259663 3359 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 01:01:25.259695 kubelet[3359]: I0121 01:01:25.259683 3359 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 01:01:25.259695 kubelet[3359]: I0121 01:01:25.259706 3359 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:25.259989 kubelet[3359]: I0121 01:01:25.259878 3359 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 01:01:25.259989 kubelet[3359]: I0121 01:01:25.259892 3359 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 01:01:25.259989 kubelet[3359]: I0121 01:01:25.259916 3359 policy_none.go:49] "None policy: Start" Jan 21 01:01:25.259989 kubelet[3359]: I0121 01:01:25.259930 3359 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 21 01:01:25.259989 kubelet[3359]: I0121 01:01:25.259942 3359 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 21 01:01:25.260168 kubelet[3359]: I0121 01:01:25.260066 3359 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 21 01:01:25.260168 kubelet[3359]: I0121 01:01:25.260077 3359 policy_none.go:47] "Start" Jan 21 01:01:25.266067 kubelet[3359]: E0121 01:01:25.266031 3359 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 01:01:25.267632 kubelet[3359]: I0121 01:01:25.267235 3359 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 01:01:25.267632 kubelet[3359]: I0121 01:01:25.267258 3359 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 01:01:25.270441 kubelet[3359]: I0121 01:01:25.269532 3359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 01:01:25.273910 kubelet[3359]: E0121 01:01:25.273883 3359 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 01:01:25.289717 kubelet[3359]: I0121 01:01:25.289689 3359 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:25.293027 kubelet[3359]: I0121 01:01:25.292925 3359 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-12" Jan 21 01:01:25.294118 kubelet[3359]: I0121 01:01:25.293840 3359 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.308357 kubelet[3359]: E0121 01:01:25.308324 3359 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-12\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:25.308797 kubelet[3359]: E0121 01:01:25.308691 3359 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-12\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.308931 kubelet[3359]: E0121 01:01:25.308910 3359 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-12\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-12" Jan 21 01:01:25.367354 kubelet[3359]: I0121 01:01:25.367315 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:25.367354 kubelet[3359]: I0121 01:01:25.367352 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:25.367354 kubelet[3359]: I0121 01:01:25.367371 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.367555 kubelet[3359]: I0121 01:01:25.367389 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.367555 kubelet[3359]: I0121 01:01:25.367405 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431dacaade787a91cf1cbd0de68ef94b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-12\" (UID: \"431dacaade787a91cf1cbd0de68ef94b\") " pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:25.367555 kubelet[3359]: I0121 01:01:25.367418 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-ca-certs\") 
pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.367555 kubelet[3359]: I0121 01:01:25.367431 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.367555 kubelet[3359]: I0121 01:01:25.367445 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d727e33e70f88b0167065f482b723b8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-12\" (UID: \"8d727e33e70f88b0167065f482b723b8\") " pod="kube-system/kube-controller-manager-ip-172-31-16-12" Jan 21 01:01:25.367678 kubelet[3359]: I0121 01:01:25.367460 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7961951e66f9652959a88e6ab6547a09-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-12\" (UID: \"7961951e66f9652959a88e6ab6547a09\") " pod="kube-system/kube-scheduler-ip-172-31-16-12" Jan 21 01:01:25.381821 kubelet[3359]: I0121 01:01:25.381788 3359 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-12" Jan 21 01:01:25.392333 kubelet[3359]: I0121 01:01:25.392215 3359 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-12" Jan 21 01:01:25.392333 kubelet[3359]: I0121 01:01:25.392306 3359 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-12" Jan 21 01:01:26.118892 kubelet[3359]: I0121 01:01:26.118839 3359 apiserver.go:52] "Watching apiserver" Jan 21 01:01:26.165818 kubelet[3359]: I0121 01:01:26.165734 3359 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 01:01:26.230025 kubelet[3359]: I0121 01:01:26.229828 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-12" podStartSLOduration=2.229811147 podStartE2EDuration="2.229811147s" podCreationTimestamp="2026-01-21 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.219539182 +0000 UTC m=+1.321278076" watchObservedRunningTime="2026-01-21 01:01:26.229811147 +0000 UTC m=+1.331550020" Jan 21 01:01:26.244599 kubelet[3359]: I0121 01:01:26.244502 3359 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:26.247434 kubelet[3359]: I0121 01:01:26.247203 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-12" podStartSLOduration=2.24718852 podStartE2EDuration="2.24718852s" podCreationTimestamp="2026-01-21 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.244073233 +0000 UTC m=+1.345812129" watchObservedRunningTime="2026-01-21 01:01:26.24718852 +0000 UTC m=+1.348927415" Jan 21 01:01:26.247636 kubelet[3359]: I0121 01:01:26.247481 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-16-12" podStartSLOduration=2.247469246 podStartE2EDuration="2.247469246s" podCreationTimestamp="2026-01-21 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.231071577 +0000 UTC m=+1.332810470" watchObservedRunningTime="2026-01-21 01:01:26.247469246 +0000 UTC m=+1.349208158" Jan 21 01:01:26.262371 kubelet[3359]: E0121 01:01:26.262322 3359 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-12\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-12" Jan 21 01:01:27.339032 sudo[2285]: pam_unix(sudo:session): session closed for user root Jan 21 01:01:27.418527 sshd[2284]: Connection closed by 68.220.241.50 port 50800 Jan 21 01:01:27.419154 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:27.424954 systemd[1]: sshd@6-172.31.16.12:22-68.220.241.50:50800.service: Deactivated successfully. Jan 21 01:01:27.429577 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 01:01:27.429946 systemd[1]: session-8.scope: Consumed 4.293s CPU time, 155M memory peak. Jan 21 01:01:27.431971 systemd-logind[1929]: Session 8 logged out. Waiting for processes to exit. Jan 21 01:01:27.434353 systemd-logind[1929]: Removed session 8. Jan 21 01:01:29.030047 kubelet[3359]: I0121 01:01:29.029959 3359 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 01:01:29.030927 containerd[1959]: time="2026-01-21T01:01:29.030886959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 01:01:29.031227 kubelet[3359]: I0121 01:01:29.031099 3359 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 01:01:29.942236 systemd[1]: Created slice kubepods-burstable-pod16b23aae_3a83_4368_b829_0f06157a8c37.slice - libcontainer container kubepods-burstable-pod16b23aae_3a83_4368_b829_0f06157a8c37.slice. Jan 21 01:01:29.954108 systemd[1]: Created slice kubepods-besteffort-pod175ae1ff_0f5f_4783_96b9_da431c871245.slice - libcontainer container kubepods-besteffort-pod175ae1ff_0f5f_4783_96b9_da431c871245.slice. 
Jan 21 01:01:30.000221 kubelet[3359]: I0121 01:01:30.000181 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/16b23aae-3a83-4368-b829-0f06157a8c37-cni\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.000221 kubelet[3359]: I0121 01:01:30.000222 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16b23aae-3a83-4368-b829-0f06157a8c37-xtables-lock\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.000389 kubelet[3359]: I0121 01:01:30.000241 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lhf\" (UniqueName: \"kubernetes.io/projected/16b23aae-3a83-4368-b829-0f06157a8c37-kube-api-access-h2lhf\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.000389 kubelet[3359]: I0121 01:01:30.000256 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/175ae1ff-0f5f-4783-96b9-da431c871245-lib-modules\") pod \"kube-proxy-zw9fx\" (UID: \"175ae1ff-0f5f-4783-96b9-da431c871245\") " pod="kube-system/kube-proxy-zw9fx" Jan 21 01:01:30.000389 kubelet[3359]: I0121 01:01:30.000276 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9pcm\" (UniqueName: \"kubernetes.io/projected/175ae1ff-0f5f-4783-96b9-da431c871245-kube-api-access-v9pcm\") pod \"kube-proxy-zw9fx\" (UID: \"175ae1ff-0f5f-4783-96b9-da431c871245\") " pod="kube-system/kube-proxy-zw9fx" Jan 21 01:01:30.000389 kubelet[3359]: I0121 01:01:30.000291 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/16b23aae-3a83-4368-b829-0f06157a8c37-run\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.000389 kubelet[3359]: I0121 01:01:30.000304 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/16b23aae-3a83-4368-b829-0f06157a8c37-flannel-cfg\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.000525 kubelet[3359]: I0121 01:01:30.000317 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/175ae1ff-0f5f-4783-96b9-da431c871245-kube-proxy\") pod \"kube-proxy-zw9fx\" (UID: \"175ae1ff-0f5f-4783-96b9-da431c871245\") " pod="kube-system/kube-proxy-zw9fx" Jan 21 01:01:30.000525 kubelet[3359]: I0121 01:01:30.000330 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/175ae1ff-0f5f-4783-96b9-da431c871245-xtables-lock\") pod \"kube-proxy-zw9fx\" (UID: \"175ae1ff-0f5f-4783-96b9-da431c871245\") " pod="kube-system/kube-proxy-zw9fx" Jan 21 01:01:30.000525 kubelet[3359]: I0121 01:01:30.000344 3359 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/16b23aae-3a83-4368-b829-0f06157a8c37-cni-plugin\") pod \"kube-flannel-ds-hgbnz\" (UID: \"16b23aae-3a83-4368-b829-0f06157a8c37\") " pod="kube-flannel/kube-flannel-ds-hgbnz" Jan 21 01:01:30.251434 containerd[1959]: time="2026-01-21T01:01:30.251309641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hgbnz,Uid:16b23aae-3a83-4368-b829-0f06157a8c37,Namespace:kube-flannel,Attempt:0,}" Jan 21 01:01:30.265146 containerd[1959]: time="2026-01-21T01:01:30.264995212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw9fx,Uid:175ae1ff-0f5f-4783-96b9-da431c871245,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:30.292360 containerd[1959]: time="2026-01-21T01:01:30.292311225Z" level=info msg="connecting to shim 51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd" address="unix:///run/containerd/s/84a6476dc63c39e30eefe0e5223937c2b2babe29cd020ff56a5529ed8a2448f3" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:30.341277 containerd[1959]: time="2026-01-21T01:01:30.341230952Z" level=info msg="connecting to shim 3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422" address="unix:///run/containerd/s/c468edf805053a4a1a3cd98e2db7fb9be7307a649dde323dcd81627371966530" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:30.351087 systemd[1]: Started cri-containerd-51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd.scope - libcontainer container 51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd. Jan 21 01:01:30.387071 systemd[1]: Started cri-containerd-3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422.scope - libcontainer container 3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422. 
Jan 21 01:01:30.436148 containerd[1959]: time="2026-01-21T01:01:30.436082707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw9fx,Uid:175ae1ff-0f5f-4783-96b9-da431c871245,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422\"" Jan 21 01:01:30.442797 containerd[1959]: time="2026-01-21T01:01:30.442717419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hgbnz,Uid:16b23aae-3a83-4368-b829-0f06157a8c37,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\"" Jan 21 01:01:30.445727 containerd[1959]: time="2026-01-21T01:01:30.445688131Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 21 01:01:30.447800 containerd[1959]: time="2026-01-21T01:01:30.447739510Z" level=info msg="CreateContainer within sandbox \"3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 01:01:30.465982 containerd[1959]: time="2026-01-21T01:01:30.465918800Z" level=info msg="Container 4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:30.479669 containerd[1959]: time="2026-01-21T01:01:30.479623704Z" level=info msg="CreateContainer within sandbox \"3d01a92ebd0c9c98b55b1b90973a7fc1db13c607f78a34e999e32c40d1219422\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83\"" Jan 21 01:01:30.480530 containerd[1959]: time="2026-01-21T01:01:30.480490297Z" level=info msg="StartContainer for \"4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83\"" Jan 21 01:01:30.482801 containerd[1959]: time="2026-01-21T01:01:30.482701532Z" level=info msg="connecting to shim 4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83" address="unix:///run/containerd/s/c468edf805053a4a1a3cd98e2db7fb9be7307a649dde323dcd81627371966530" protocol=ttrpc version=3 Jan 21 01:01:30.509060 systemd[1]: Started cri-containerd-4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83.scope - libcontainer container 4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83. Jan 21 01:01:30.630634 containerd[1959]: time="2026-01-21T01:01:30.630598418Z" level=info msg="StartContainer for \"4f2e4d8790cb832954d31950e00f0879d9355ccf43bb7d2a0367971b1a19ae83\" returns successfully" Jan 21 01:01:31.269508 kubelet[3359]: I0121 01:01:31.269278 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zw9fx" podStartSLOduration=2.2692311419999998 podStartE2EDuration="2.269231142s" podCreationTimestamp="2026-01-21 01:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:31.268953537 +0000 UTC m=+6.370692444" watchObservedRunningTime="2026-01-21 01:01:31.269231142 +0000 UTC m=+6.370970036" Jan 21 01:01:31.815571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628928472.mount: Deactivated successfully. 
Jan 21 01:01:31.888143 containerd[1959]: time="2026-01-21T01:01:31.888075230Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:31.890457 containerd[1959]: time="2026-01-21T01:01:31.890150048Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0" Jan 21 01:01:31.892516 containerd[1959]: time="2026-01-21T01:01:31.892470857Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:31.897028 containerd[1959]: time="2026-01-21T01:01:31.896268206Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:31.897384 containerd[1959]: time="2026-01-21T01:01:31.897357790Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.451629993s" Jan 21 01:01:31.897482 containerd[1959]: time="2026-01-21T01:01:31.897468477Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 21 01:01:31.903341 containerd[1959]: time="2026-01-21T01:01:31.903293450Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 21 01:01:31.920859 containerd[1959]: time="2026-01-21T01:01:31.920141631Z" level=info msg="Container 9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:31.936304 containerd[1959]: time="2026-01-21T01:01:31.936248170Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e\"" Jan 21 01:01:31.937343 containerd[1959]: time="2026-01-21T01:01:31.937272854Z" level=info msg="StartContainer for \"9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e\"" Jan 21 01:01:31.938736 containerd[1959]: time="2026-01-21T01:01:31.938697499Z" level=info msg="connecting to shim 9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e" address="unix:///run/containerd/s/84a6476dc63c39e30eefe0e5223937c2b2babe29cd020ff56a5529ed8a2448f3" protocol=ttrpc version=3 Jan 21 01:01:31.967208 systemd[1]: Started cri-containerd-9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e.scope - libcontainer container 9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e. 
Jan 21 01:01:32.027978 containerd[1959]: time="2026-01-21T01:01:32.027947351Z" level=info msg="StartContainer for \"9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e\" returns successfully" Jan 21 01:01:32.050454 systemd[1]: cri-containerd-9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e.scope: Deactivated successfully. Jan 21 01:01:32.063032 containerd[1959]: time="2026-01-21T01:01:32.062977604Z" level=info msg="received container exit event container_id:\"9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e\" id:\"9dc41e2ff3755789fa0ae2d9a4bc3375f6fa36396cfd62a2deb491cf5abd6e5e\" pid:3565 exited_at:{seconds:1768957292 nanos:52503931}" Jan 21 01:01:32.265219 containerd[1959]: time="2026-01-21T01:01:32.265181405Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 21 01:01:35.186451 containerd[1959]: time="2026-01-21T01:01:35.185545068Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=26946120" Jan 21 01:01:35.191989 containerd[1959]: time="2026-01-21T01:01:35.190146426Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.924926631s" Jan 21 01:01:35.191989 containerd[1959]: time="2026-01-21T01:01:35.190182590Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 21 01:01:35.192400 containerd[1959]: time="2026-01-21T01:01:35.192356158Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:35.193866 containerd[1959]: time="2026-01-21T01:01:35.193836909Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:35.194740 containerd[1959]: time="2026-01-21T01:01:35.194680032Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:35.202581 containerd[1959]: time="2026-01-21T01:01:35.202508331Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 01:01:35.214239 containerd[1959]: time="2026-01-21T01:01:35.214201890Z" level=info msg="Container 890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:35.216995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165171790.mount: Deactivated successfully. 
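The sequence above is the usual kube-flannel init-container pattern: install-cni-plugin (from the flannel-cni-plugin image) runs once and exits after copying the CNI binary onto the host, then install-cni does the same for the CNI config taken from the flannel-cfg ConfigMap mounted earlier. The destination paths are not in the log; /opt/cni/bin and /etc/cni/net.d are the defaults from the upstream kube-flannel manifest and are assumed in this quick check:

# Sketch: confirm what the flannel init containers left on the host.
# /opt/cni/bin and /etc/cni/net.d are assumed (upstream kube-flannel defaults);
# the log only names the "cni-plugin" and "cni" host-path volumes.
import os

for path in ("/opt/cni/bin", "/etc/cni/net.d"):
    try:
        print(path, "->", sorted(os.listdir(path)))
    except FileNotFoundError:
        print(path, "-> missing")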
Jan 21 01:01:35.225719 containerd[1959]: time="2026-01-21T01:01:35.225644332Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e\"" Jan 21 01:01:35.226528 containerd[1959]: time="2026-01-21T01:01:35.226497671Z" level=info msg="StartContainer for \"890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e\"" Jan 21 01:01:35.227530 containerd[1959]: time="2026-01-21T01:01:35.227502347Z" level=info msg="connecting to shim 890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e" address="unix:///run/containerd/s/84a6476dc63c39e30eefe0e5223937c2b2babe29cd020ff56a5529ed8a2448f3" protocol=ttrpc version=3 Jan 21 01:01:35.252045 systemd[1]: Started cri-containerd-890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e.scope - libcontainer container 890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e. Jan 21 01:01:35.360284 systemd[1]: cri-containerd-890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e.scope: Deactivated successfully. Jan 21 01:01:35.365971 containerd[1959]: time="2026-01-21T01:01:35.364371294Z" level=info msg="received container exit event container_id:\"890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e\" id:\"890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e\" pid:3775 exited_at:{seconds:1768957295 nanos:361068368}" Jan 21 01:01:35.372703 containerd[1959]: time="2026-01-21T01:01:35.372664512Z" level=info msg="StartContainer for \"890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e\" returns successfully" Jan 21 01:01:35.417292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-890d6bd8ce9dc3d91471c5e14d248aa778ea75cee60d9a41407144a11275771e-rootfs.mount: Deactivated successfully. Jan 21 01:01:35.427668 kubelet[3359]: I0121 01:01:35.427642 3359 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 21 01:01:35.479062 systemd[1]: Created slice kubepods-burstable-podf3733d61_effe_4c14_aede_71431ad8c80b.slice - libcontainer container kubepods-burstable-podf3733d61_effe_4c14_aede_71431ad8c80b.slice. Jan 21 01:01:35.494418 systemd[1]: Created slice kubepods-burstable-pod57ec02f7_f56a_4ee7_b188_4305ce9ac790.slice - libcontainer container kubepods-burstable-pod57ec02f7_f56a_4ee7_b188_4305ce9ac790.slice. 
Jan 21 01:01:35.545088 kubelet[3359]: I0121 01:01:35.545036 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3733d61-effe-4c14-aede-71431ad8c80b-config-volume\") pod \"coredns-66bc5c9577-975rm\" (UID: \"f3733d61-effe-4c14-aede-71431ad8c80b\") " pod="kube-system/coredns-66bc5c9577-975rm" Jan 21 01:01:35.545088 kubelet[3359]: I0121 01:01:35.545076 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2hb\" (UniqueName: \"kubernetes.io/projected/f3733d61-effe-4c14-aede-71431ad8c80b-kube-api-access-8d2hb\") pod \"coredns-66bc5c9577-975rm\" (UID: \"f3733d61-effe-4c14-aede-71431ad8c80b\") " pod="kube-system/coredns-66bc5c9577-975rm" Jan 21 01:01:35.545088 kubelet[3359]: I0121 01:01:35.545099 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57ec02f7-f56a-4ee7-b188-4305ce9ac790-config-volume\") pod \"coredns-66bc5c9577-d9rth\" (UID: \"57ec02f7-f56a-4ee7-b188-4305ce9ac790\") " pod="kube-system/coredns-66bc5c9577-d9rth" Jan 21 01:01:35.545088 kubelet[3359]: I0121 01:01:35.545114 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t44vf\" (UniqueName: \"kubernetes.io/projected/57ec02f7-f56a-4ee7-b188-4305ce9ac790-kube-api-access-t44vf\") pod \"coredns-66bc5c9577-d9rth\" (UID: \"57ec02f7-f56a-4ee7-b188-4305ce9ac790\") " pod="kube-system/coredns-66bc5c9577-d9rth" Jan 21 01:01:35.791694 containerd[1959]: time="2026-01-21T01:01:35.791571622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-975rm,Uid:f3733d61-effe-4c14-aede-71431ad8c80b,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:35.804942 containerd[1959]: time="2026-01-21T01:01:35.804896215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d9rth,Uid:57ec02f7-f56a-4ee7-b188-4305ce9ac790,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:36.012831 containerd[1959]: time="2026-01-21T01:01:36.012715408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-975rm,Uid:f3733d61-effe-4c14-aede-71431ad8c80b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e90d91f1626b8235ba3c055aaa157bdb078ac11b6b679b483ffde40d870126\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 21 01:01:36.013158 kubelet[3359]: E0121 01:01:36.013087 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e90d91f1626b8235ba3c055aaa157bdb078ac11b6b679b483ffde40d870126\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 21 01:01:36.013269 kubelet[3359]: E0121 01:01:36.013176 3359 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e90d91f1626b8235ba3c055aaa157bdb078ac11b6b679b483ffde40d870126\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-975rm" Jan 21 01:01:36.013269 kubelet[3359]: E0121 01:01:36.013197 3359 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e90d91f1626b8235ba3c055aaa157bdb078ac11b6b679b483ffde40d870126\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-975rm" Jan 21 01:01:36.013350 kubelet[3359]: E0121 01:01:36.013261 3359 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-975rm_kube-system(f3733d61-effe-4c14-aede-71431ad8c80b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-975rm_kube-system(f3733d61-effe-4c14-aede-71431ad8c80b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09e90d91f1626b8235ba3c055aaa157bdb078ac11b6b679b483ffde40d870126\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-975rm" podUID="f3733d61-effe-4c14-aede-71431ad8c80b" Jan 21 01:01:36.015448 containerd[1959]: time="2026-01-21T01:01:36.015385713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d9rth,Uid:57ec02f7-f56a-4ee7-b188-4305ce9ac790,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99bff61b3e750415db1176394acdbd7235749ac5a167ed9b2792bc8b51cfda84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 21 01:01:36.015956 kubelet[3359]: E0121 01:01:36.015884 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99bff61b3e750415db1176394acdbd7235749ac5a167ed9b2792bc8b51cfda84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 21 01:01:36.016134 kubelet[3359]: E0121 01:01:36.016040 3359 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99bff61b3e750415db1176394acdbd7235749ac5a167ed9b2792bc8b51cfda84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-d9rth" Jan 21 01:01:36.016134 kubelet[3359]: E0121 01:01:36.016061 3359 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99bff61b3e750415db1176394acdbd7235749ac5a167ed9b2792bc8b51cfda84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-d9rth" Jan 21 01:01:36.016297 kubelet[3359]: E0121 01:01:36.016214 3359 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d9rth_kube-system(57ec02f7-f56a-4ee7-b188-4305ce9ac790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d9rth_kube-system(57ec02f7-f56a-4ee7-b188-4305ce9ac790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99bff61b3e750415db1176394acdbd7235749ac5a167ed9b2792bc8b51cfda84\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" 
pod="kube-system/coredns-66bc5c9577-d9rth" podUID="57ec02f7-f56a-4ee7-b188-4305ce9ac790" Jan 21 01:01:36.285108 containerd[1959]: time="2026-01-21T01:01:36.285060910Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 21 01:01:36.311797 containerd[1959]: time="2026-01-21T01:01:36.311574621Z" level=info msg="Container 9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:36.323760 containerd[1959]: time="2026-01-21T01:01:36.323693488Z" level=info msg="CreateContainer within sandbox \"51a292f310afbe9899df6f9f0cc4054f91dc5efac0e23d3760e7c10b3c5373cd\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935\"" Jan 21 01:01:36.324491 containerd[1959]: time="2026-01-21T01:01:36.324406806Z" level=info msg="StartContainer for \"9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935\"" Jan 21 01:01:36.325873 containerd[1959]: time="2026-01-21T01:01:36.325838489Z" level=info msg="connecting to shim 9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935" address="unix:///run/containerd/s/84a6476dc63c39e30eefe0e5223937c2b2babe29cd020ff56a5529ed8a2448f3" protocol=ttrpc version=3 Jan 21 01:01:36.354136 systemd[1]: Started cri-containerd-9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935.scope - libcontainer container 9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935. Jan 21 01:01:36.391683 containerd[1959]: time="2026-01-21T01:01:36.391640841Z" level=info msg="StartContainer for \"9ccc77828e4efb5e1df232b686f1db065cc511bf0f5db78997917c2b177b4935\" returns successfully" Jan 21 01:01:37.441623 (udev-worker)[3643]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:01:37.459122 systemd-networkd[1555]: flannel.1: Link UP Jan 21 01:01:37.459253 systemd-networkd[1555]: flannel.1: Gained carrier Jan 21 01:01:38.708976 systemd-networkd[1555]: flannel.1: Gained IPv6LL Jan 21 01:01:41.176236 ntpd[1917]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 21 01:01:41.176296 ntpd[1917]: Listen normally on 7 flannel.1 [fe80::644b:63ff:fea7:9714%4]:123 Jan 21 01:01:41.176818 ntpd[1917]: 21 Jan 01:01:41 ntpd[1917]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 21 01:01:41.176818 ntpd[1917]: 21 Jan 01:01:41 ntpd[1917]: Listen normally on 7 flannel.1 [fe80::644b:63ff:fea7:9714%4]:123 Jan 21 01:01:47.193321 containerd[1959]: time="2026-01-21T01:01:47.193140807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-975rm,Uid:f3733d61-effe-4c14-aede-71431ad8c80b,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:47.469884 systemd-networkd[1555]: cni0: Link UP Jan 21 01:01:47.469891 systemd-networkd[1555]: cni0: Gained carrier Jan 21 01:01:47.473890 (udev-worker)[3979]: Network interface NamePolicy= disabled on kernel command line. 
Jan 21 01:01:47.473934 systemd-networkd[1555]: cni0: Lost carrier Jan 21 01:01:47.481812 kernel: cni0: port 1(veth8e7054ea) entered blocking state Jan 21 01:01:47.482749 kernel: cni0: port 1(veth8e7054ea) entered disabled state Jan 21 01:01:47.482825 kernel: veth8e7054ea: entered allmulticast mode Jan 21 01:01:47.481866 systemd-networkd[1555]: veth8e7054ea: Link UP Jan 21 01:01:47.483928 kernel: veth8e7054ea: entered promiscuous mode Jan 21 01:01:47.486119 (udev-worker)[3982]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:01:47.672794 kernel: cni0: port 1(veth8e7054ea) entered blocking state Jan 21 01:01:47.672887 kernel: cni0: port 1(veth8e7054ea) entered forwarding state Jan 21 01:01:47.671801 systemd-networkd[1555]: veth8e7054ea: Gained carrier Jan 21 01:01:47.673891 systemd-networkd[1555]: cni0: Gained carrier Jan 21 01:01:47.675162 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082950), "name":"cbr0", "type":"bridge"} Jan 21 01:01:47.675162 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 21 01:01:47.711562 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-21T01:01:47.711516461Z" level=info msg="connecting to shim 4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114" address="unix:///run/containerd/s/d94ce411ccb6e4f52ea3caf99e2d0dc93a712b07b997f7dad47a501cb8b3de73" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:47.747060 systemd[1]: Started cri-containerd-4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114.scope - libcontainer container 4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114. Jan 21 01:01:47.799423 containerd[1959]: time="2026-01-21T01:01:47.799328171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-975rm,Uid:f3733d61-effe-4c14-aede-71431ad8c80b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114\"" Jan 21 01:01:47.809415 containerd[1959]: time="2026-01-21T01:01:47.809213769Z" level=info msg="CreateContainer within sandbox \"4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 01:01:47.853102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898091424.mount: Deactivated successfully. 
Jan 21 01:01:47.855145 containerd[1959]: time="2026-01-21T01:01:47.855083533Z" level=info msg="Container 9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:47.862425 containerd[1959]: time="2026-01-21T01:01:47.862377282Z" level=info msg="CreateContainer within sandbox \"4d5b14f2293fa1eddd637dd9ad9356ea6624f208317b6f86d07d96cc24166114\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531\"" Jan 21 01:01:47.863211 containerd[1959]: time="2026-01-21T01:01:47.863117850Z" level=info msg="StartContainer for \"9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531\"" Jan 21 01:01:47.864675 containerd[1959]: time="2026-01-21T01:01:47.864347998Z" level=info msg="connecting to shim 9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531" address="unix:///run/containerd/s/d94ce411ccb6e4f52ea3caf99e2d0dc93a712b07b997f7dad47a501cb8b3de73" protocol=ttrpc version=3 Jan 21 01:01:47.887032 systemd[1]: Started cri-containerd-9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531.scope - libcontainer container 9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531. Jan 21 01:01:47.921096 containerd[1959]: time="2026-01-21T01:01:47.921065384Z" level=info msg="StartContainer for \"9d4381f75d01f2bd79f0abf45685a1508eb6d71f7afad2e523249d8f406ce531\" returns successfully" Jan 21 01:01:48.342664 kubelet[3359]: I0121 01:01:48.342320 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hgbnz" podStartSLOduration=14.592099426 podStartE2EDuration="19.342298821s" podCreationTimestamp="2026-01-21 01:01:29 +0000 UTC" firstStartedPulling="2026-01-21 01:01:30.444754807 +0000 UTC m=+5.546493684" lastFinishedPulling="2026-01-21 01:01:35.194954205 +0000 UTC m=+10.296693079" observedRunningTime="2026-01-21 01:01:37.307083193 +0000 UTC m=+12.408822107" watchObservedRunningTime="2026-01-21 01:01:48.342298821 +0000 UTC m=+23.444037716" Jan 21 01:01:48.948987 systemd-networkd[1555]: cni0: Gained IPv6LL Jan 21 01:01:49.190303 containerd[1959]: time="2026-01-21T01:01:49.190247667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d9rth,Uid:57ec02f7-f56a-4ee7-b188-4305ce9ac790,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:49.209450 (udev-worker)[3981]: Network interface NamePolicy= disabled on kernel command line. 
Jan 21 01:01:49.210213 systemd-networkd[1555]: veth9e062e22: Link UP Jan 21 01:01:49.213580 kernel: cni0: port 2(veth9e062e22) entered blocking state Jan 21 01:01:49.213679 kernel: cni0: port 2(veth9e062e22) entered disabled state Jan 21 01:01:49.214687 kernel: veth9e062e22: entered allmulticast mode Jan 21 01:01:49.216880 kernel: veth9e062e22: entered promiscuous mode Jan 21 01:01:49.231355 kernel: cni0: port 2(veth9e062e22) entered blocking state Jan 21 01:01:49.231495 kernel: cni0: port 2(veth9e062e22) entered forwarding state Jan 21 01:01:49.231655 systemd-networkd[1555]: veth9e062e22: Gained carrier Jan 21 01:01:49.235036 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 21 01:01:49.235036 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 21 01:01:49.275360 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2026-01-21T01:01:49.275315839Z" level=info msg="connecting to shim 2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee" address="unix:///run/containerd/s/32d075422afb6e5cb45733a932be94bfbb7082e7d265aae2f5ab4bdee8a9ac00" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:49.313189 systemd[1]: Started cri-containerd-2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee.scope - libcontainer container 2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee. Jan 21 01:01:49.383821 containerd[1959]: time="2026-01-21T01:01:49.383761850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d9rth,Uid:57ec02f7-f56a-4ee7-b188-4305ce9ac790,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee\"" Jan 21 01:01:49.390511 containerd[1959]: time="2026-01-21T01:01:49.390451513Z" level=info msg="CreateContainer within sandbox \"2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 01:01:49.413553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782970197.mount: Deactivated successfully. 
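The earlier RunPodSandbox failures for both coredns pods came from the flannel CNI plugin not finding /run/flannel/subnet.env; once the kube-flannel container started and wrote that file, the retries above succeed and the plugin emits the bridge ("cbr0") delegate config shown in the log. The file is plain KEY=VALUE. Below is a small sketch of how it is read, with sample values inferred from the logged delegate config (network 192.168.0.0/17 from the route and mask, per-node subnet 192.168.0.0/24, MTU 8951); FLANNEL_IPMASQ=true is only the usual kube-flannel default, not something the log shows:

# Sketch: parse /run/flannel/subnet.env the way the flannel CNI plugin's
# loadFlannelSubnetEnv step does. SAMPLE values are inferred from this log,
# not copied from the host -- treat them as assumptions.
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true
"""

def load_flannel_subnet_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

print(load_flannel_subnet_env(SAMPLE))
# On the node itself, one would read the real file instead:
# print(load_flannel_subnet_env(open("/run/flannel/subnet.env").read()))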
Jan 21 01:01:49.415080 containerd[1959]: time="2026-01-21T01:01:49.414952038Z" level=info msg="Container 409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:49.423820 containerd[1959]: time="2026-01-21T01:01:49.423731327Z" level=info msg="CreateContainer within sandbox \"2b1efaa6c789dbc11640574564d900a6e71ad23e9e07457010d9bc300c7ffbee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93\"" Jan 21 01:01:49.431296 containerd[1959]: time="2026-01-21T01:01:49.429889867Z" level=info msg="StartContainer for \"409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93\"" Jan 21 01:01:49.431296 containerd[1959]: time="2026-01-21T01:01:49.431214250Z" level=info msg="connecting to shim 409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93" address="unix:///run/containerd/s/32d075422afb6e5cb45733a932be94bfbb7082e7d265aae2f5ab4bdee8a9ac00" protocol=ttrpc version=3 Jan 21 01:01:49.461084 systemd[1]: Started cri-containerd-409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93.scope - libcontainer container 409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93. Jan 21 01:01:49.509697 containerd[1959]: time="2026-01-21T01:01:49.509650961Z" level=info msg="StartContainer for \"409262ebc12e8140e48b75f0411086873d21748697b89348ba9e608666dd8e93\" returns successfully" Jan 21 01:01:49.524999 systemd-networkd[1555]: veth8e7054ea: Gained IPv6LL Jan 21 01:01:50.347154 kubelet[3359]: I0121 01:01:50.347092 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-975rm" podStartSLOduration=20.347073032 podStartE2EDuration="20.347073032s" podCreationTimestamp="2026-01-21 01:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:48.344049183 +0000 UTC m=+23.445788066" watchObservedRunningTime="2026-01-21 01:01:50.347073032 +0000 UTC m=+25.448811928" Jan 21 01:01:50.613062 systemd-networkd[1555]: veth9e062e22: Gained IPv6LL Jan 21 01:01:53.176271 ntpd[1917]: Listen normally on 8 cni0 192.168.0.1:123 Jan 21 01:01:53.176921 ntpd[1917]: 21 Jan 01:01:53 ntpd[1917]: Listen normally on 8 cni0 192.168.0.1:123 Jan 21 01:01:53.176921 ntpd[1917]: 21 Jan 01:01:53 ntpd[1917]: Listen normally on 9 cni0 [fe80::603b:cff:fedd:c1f5%5]:123 Jan 21 01:01:53.176921 ntpd[1917]: 21 Jan 01:01:53 ntpd[1917]: Listen normally on 10 veth8e7054ea [fe80::d486:beff:fe67:43aa%6]:123 Jan 21 01:01:53.176921 ntpd[1917]: 21 Jan 01:01:53 ntpd[1917]: Listen normally on 11 veth9e062e22 [fe80::2cd5:5bff:fe59:b783%7]:123 Jan 21 01:01:53.176456 ntpd[1917]: Listen normally on 9 cni0 [fe80::603b:cff:fedd:c1f5%5]:123 Jan 21 01:01:53.176518 ntpd[1917]: Listen normally on 10 veth8e7054ea [fe80::d486:beff:fe67:43aa%6]:123 Jan 21 01:01:53.176579 ntpd[1917]: Listen normally on 11 veth9e062e22 [fe80::2cd5:5bff:fe59:b783%7]:123 Jan 21 01:01:59.345437 kubelet[3359]: I0121 01:01:59.345150 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d9rth" podStartSLOduration=29.345128916 podStartE2EDuration="29.345128916s" podCreationTimestamp="2026-01-21 01:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:50.348312083 +0000 UTC m=+25.450050978" 
watchObservedRunningTime="2026-01-21 01:01:59.345128916 +0000 UTC m=+34.446867833" Jan 21 01:02:14.895576 systemd[1]: Started sshd@7-172.31.16.12:22-68.220.241.50:60452.service - OpenSSH per-connection server daemon (68.220.241.50:60452). Jan 21 01:02:15.356813 sshd[4329]: Accepted publickey for core from 68.220.241.50 port 60452 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:15.358076 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:15.364926 systemd-logind[1929]: New session 9 of user core. Jan 21 01:02:15.380071 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 21 01:02:15.721212 sshd[4333]: Connection closed by 68.220.241.50 port 60452 Jan 21 01:02:15.721967 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:15.727741 systemd[1]: sshd@7-172.31.16.12:22-68.220.241.50:60452.service: Deactivated successfully. Jan 21 01:02:15.730331 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 01:02:15.731301 systemd-logind[1929]: Session 9 logged out. Waiting for processes to exit. Jan 21 01:02:15.733561 systemd-logind[1929]: Removed session 9. Jan 21 01:02:20.825194 systemd[1]: Started sshd@8-172.31.16.12:22-68.220.241.50:60456.service - OpenSSH per-connection server daemon (68.220.241.50:60456). Jan 21 01:02:21.295355 sshd[4367]: Accepted publickey for core from 68.220.241.50 port 60456 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:21.299926 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:21.320882 systemd-logind[1929]: New session 10 of user core. Jan 21 01:02:21.331041 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 21 01:02:21.624354 sshd[4371]: Connection closed by 68.220.241.50 port 60456 Jan 21 01:02:21.625596 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:21.631384 systemd[1]: sshd@8-172.31.16.12:22-68.220.241.50:60456.service: Deactivated successfully. Jan 21 01:02:21.634924 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 01:02:21.636603 systemd-logind[1929]: Session 10 logged out. Waiting for processes to exit. Jan 21 01:02:21.638696 systemd-logind[1929]: Removed session 10. Jan 21 01:02:26.698066 systemd[1]: Started sshd@9-172.31.16.12:22-68.220.241.50:55452.service - OpenSSH per-connection server daemon (68.220.241.50:55452). Jan 21 01:02:27.130398 sshd[4406]: Accepted publickey for core from 68.220.241.50 port 55452 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:27.131345 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:27.137448 systemd-logind[1929]: New session 11 of user core. Jan 21 01:02:27.140101 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 01:02:27.432443 sshd[4410]: Connection closed by 68.220.241.50 port 55452 Jan 21 01:02:27.433989 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:27.439634 systemd-logind[1929]: Session 11 logged out. Waiting for processes to exit. Jan 21 01:02:27.440223 systemd[1]: sshd@9-172.31.16.12:22-68.220.241.50:55452.service: Deactivated successfully. Jan 21 01:02:27.442976 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 01:02:27.445359 systemd-logind[1929]: Removed session 11. 
Jan 21 01:02:27.520050 systemd[1]: Started sshd@10-172.31.16.12:22-68.220.241.50:55454.service - OpenSSH per-connection server daemon (68.220.241.50:55454). Jan 21 01:02:27.949601 sshd[4423]: Accepted publickey for core from 68.220.241.50 port 55454 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:27.951405 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:27.957391 systemd-logind[1929]: New session 12 of user core. Jan 21 01:02:27.964343 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 21 01:02:28.302365 sshd[4447]: Connection closed by 68.220.241.50 port 55454 Jan 21 01:02:28.303019 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:28.309422 systemd-logind[1929]: Session 12 logged out. Waiting for processes to exit. Jan 21 01:02:28.310186 systemd[1]: sshd@10-172.31.16.12:22-68.220.241.50:55454.service: Deactivated successfully. Jan 21 01:02:28.312751 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 01:02:28.315720 systemd-logind[1929]: Removed session 12. Jan 21 01:02:28.394216 systemd[1]: Started sshd@11-172.31.16.12:22-68.220.241.50:55468.service - OpenSSH per-connection server daemon (68.220.241.50:55468). Jan 21 01:02:28.825950 sshd[4457]: Accepted publickey for core from 68.220.241.50 port 55468 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:28.827589 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:28.833894 systemd-logind[1929]: New session 13 of user core. Jan 21 01:02:28.844159 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 21 01:02:29.130877 sshd[4461]: Connection closed by 68.220.241.50 port 55468 Jan 21 01:02:29.131704 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:29.138071 systemd-logind[1929]: Session 13 logged out. Waiting for processes to exit. Jan 21 01:02:29.139111 systemd[1]: sshd@11-172.31.16.12:22-68.220.241.50:55468.service: Deactivated successfully. Jan 21 01:02:29.141941 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 01:02:29.144417 systemd-logind[1929]: Removed session 13. Jan 21 01:02:34.232582 systemd[1]: Started sshd@12-172.31.16.12:22-68.220.241.50:51856.service - OpenSSH per-connection server daemon (68.220.241.50:51856). Jan 21 01:02:34.698473 sshd[4493]: Accepted publickey for core from 68.220.241.50 port 51856 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:34.700132 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:34.705970 systemd-logind[1929]: New session 14 of user core. Jan 21 01:02:34.713098 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 21 01:02:35.015587 sshd[4499]: Connection closed by 68.220.241.50 port 51856 Jan 21 01:02:35.016904 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:35.022174 systemd[1]: sshd@12-172.31.16.12:22-68.220.241.50:51856.service: Deactivated successfully. Jan 21 01:02:35.024875 systemd[1]: session-14.scope: Deactivated successfully. Jan 21 01:02:35.027111 systemd-logind[1929]: Session 14 logged out. Waiting for processes to exit. Jan 21 01:02:35.028769 systemd-logind[1929]: Removed session 14. Jan 21 01:02:35.099352 systemd[1]: Started sshd@13-172.31.16.12:22-68.220.241.50:51868.service - OpenSSH per-connection server daemon (68.220.241.50:51868). 
Jan 21 01:02:35.541080 sshd[4511]: Accepted publickey for core from 68.220.241.50 port 51868 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:35.542599 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:35.548831 systemd-logind[1929]: New session 15 of user core. Jan 21 01:02:35.557090 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 21 01:02:36.348560 sshd[4515]: Connection closed by 68.220.241.50 port 51868 Jan 21 01:02:36.350659 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:36.386721 systemd[1]: sshd@13-172.31.16.12:22-68.220.241.50:51868.service: Deactivated successfully. Jan 21 01:02:36.389189 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 01:02:36.391231 systemd-logind[1929]: Session 15 logged out. Waiting for processes to exit. Jan 21 01:02:36.392718 systemd-logind[1929]: Removed session 15. Jan 21 01:02:36.448273 systemd[1]: Started sshd@14-172.31.16.12:22-68.220.241.50:51880.service - OpenSSH per-connection server daemon (68.220.241.50:51880). Jan 21 01:02:36.929119 sshd[4526]: Accepted publickey for core from 68.220.241.50 port 51880 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:36.930760 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:36.936002 systemd-logind[1929]: New session 16 of user core. Jan 21 01:02:36.944280 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 21 01:02:37.913994 sshd[4530]: Connection closed by 68.220.241.50 port 51880 Jan 21 01:02:37.914612 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:37.920012 systemd-logind[1929]: Session 16 logged out. Waiting for processes to exit. Jan 21 01:02:37.920581 systemd[1]: sshd@14-172.31.16.12:22-68.220.241.50:51880.service: Deactivated successfully. Jan 21 01:02:37.923144 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 01:02:37.925875 systemd-logind[1929]: Removed session 16. Jan 21 01:02:37.998969 systemd[1]: Started sshd@15-172.31.16.12:22-68.220.241.50:51890.service - OpenSSH per-connection server daemon (68.220.241.50:51890). Jan 21 01:02:38.422334 sshd[4565]: Accepted publickey for core from 68.220.241.50 port 51890 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:38.423958 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:38.429549 systemd-logind[1929]: New session 17 of user core. Jan 21 01:02:38.436216 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 21 01:02:38.860937 sshd[4569]: Connection closed by 68.220.241.50 port 51890 Jan 21 01:02:38.863092 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:38.867918 systemd[1]: sshd@15-172.31.16.12:22-68.220.241.50:51890.service: Deactivated successfully. Jan 21 01:02:38.876162 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 01:02:38.877849 systemd-logind[1929]: Session 17 logged out. Waiting for processes to exit. Jan 21 01:02:38.880649 systemd-logind[1929]: Removed session 17. Jan 21 01:02:38.948296 systemd[1]: Started sshd@16-172.31.16.12:22-68.220.241.50:51902.service - OpenSSH per-connection server daemon (68.220.241.50:51902). 
Jan 21 01:02:39.379817 sshd[4581]: Accepted publickey for core from 68.220.241.50 port 51902 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:39.381028 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:39.386234 systemd-logind[1929]: New session 18 of user core. Jan 21 01:02:39.392411 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 21 01:02:39.690067 sshd[4585]: Connection closed by 68.220.241.50 port 51902 Jan 21 01:02:39.692023 sshd-session[4581]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:39.697016 systemd[1]: sshd@16-172.31.16.12:22-68.220.241.50:51902.service: Deactivated successfully. Jan 21 01:02:39.699995 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 01:02:39.701944 systemd-logind[1929]: Session 18 logged out. Waiting for processes to exit. Jan 21 01:02:39.703226 systemd-logind[1929]: Removed session 18. Jan 21 01:02:44.778235 systemd[1]: Started sshd@17-172.31.16.12:22-68.220.241.50:54554.service - OpenSSH per-connection server daemon (68.220.241.50:54554). Jan 21 01:02:45.208252 sshd[4620]: Accepted publickey for core from 68.220.241.50 port 54554 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:45.209947 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:45.217360 systemd-logind[1929]: New session 19 of user core. Jan 21 01:02:45.222035 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 21 01:02:45.504882 sshd[4624]: Connection closed by 68.220.241.50 port 54554 Jan 21 01:02:45.506064 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:45.512621 systemd[1]: sshd@17-172.31.16.12:22-68.220.241.50:54554.service: Deactivated successfully. Jan 21 01:02:45.518039 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 01:02:45.520497 systemd-logind[1929]: Session 19 logged out. Waiting for processes to exit. Jan 21 01:02:45.523189 systemd-logind[1929]: Removed session 19. Jan 21 01:02:50.597152 systemd[1]: Started sshd@18-172.31.16.12:22-68.220.241.50:54556.service - OpenSSH per-connection server daemon (68.220.241.50:54556). Jan 21 01:02:51.022818 sshd[4658]: Accepted publickey for core from 68.220.241.50 port 54556 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:02:51.024311 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:51.030867 systemd-logind[1929]: New session 20 of user core. Jan 21 01:02:51.037035 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 21 01:02:51.325129 sshd[4662]: Connection closed by 68.220.241.50 port 54556 Jan 21 01:02:51.326929 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:51.331261 systemd[1]: sshd@18-172.31.16.12:22-68.220.241.50:54556.service: Deactivated successfully. Jan 21 01:02:51.334468 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 01:02:51.336030 systemd-logind[1929]: Session 20 logged out. Waiting for processes to exit. Jan 21 01:02:51.337152 systemd-logind[1929]: Removed session 20. Jan 21 01:03:38.299365 systemd[1]: cri-containerd-67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd.scope: Deactivated successfully. 
Jan 21 01:03:38.300312 systemd[1]: cri-containerd-67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd.scope: Consumed 3.966s CPU time, 103.1M memory peak, 51.8M read from disk.
Jan 21 01:03:38.303453 containerd[1959]: time="2026-01-21T01:03:38.303405660Z" level=info msg="received container exit event container_id:\"67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd\" id:\"67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd\" pid:3099 exit_status:1 exited_at:{seconds:1768957418 nanos:302446633}"
Jan 21 01:03:38.332799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd-rootfs.mount: Deactivated successfully.
Jan 21 01:03:38.587459 kubelet[3359]: I0121 01:03:38.587312 3359 scope.go:117] "RemoveContainer" containerID="67c40b8a1c6c70d1f1aed479e44e29513d0f281e8bc0e9280a167764ea60addd"
Jan 21 01:03:38.589675 containerd[1959]: time="2026-01-21T01:03:38.589639776Z" level=info msg="CreateContainer within sandbox \"4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 21 01:03:38.598804 containerd[1959]: time="2026-01-21T01:03:38.598239184Z" level=info msg="Container 33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5: CDI devices from CRI Config.CDIDevices: []"
Jan 21 01:03:38.608549 containerd[1959]: time="2026-01-21T01:03:38.608507296Z" level=info msg="CreateContainer within sandbox \"4f569e7af57cd506192af4e7429fa89fc751d7bf24c18269110cc0e8da38a19b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5\""
Jan 21 01:03:38.609259 containerd[1959]: time="2026-01-21T01:03:38.609227437Z" level=info msg="StartContainer for \"33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5\""
Jan 21 01:03:38.610431 containerd[1959]: time="2026-01-21T01:03:38.610399140Z" level=info msg="connecting to shim 33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5" address="unix:///run/containerd/s/8913fa396f35a09e590aeb1222e3274d1ff84cd0a5ad6a98e96c01db3b6bd852" protocol=ttrpc version=3
Jan 21 01:03:38.647128 systemd[1]: Started cri-containerd-33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5.scope - libcontainer container 33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5.
Jan 21 01:03:38.707042 containerd[1959]: time="2026-01-21T01:03:38.706991979Z" level=info msg="StartContainer for \"33db002df6352000fd132985e2ecff1e5e1453564b41349cf8b45c9a5496a2e5\" returns successfully"
Jan 21 01:03:43.302113 systemd[1]: cri-containerd-da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad.scope: Deactivated successfully.
Jan 21 01:03:43.302430 systemd[1]: cri-containerd-da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad.scope: Consumed 2.119s CPU time, 23.2M memory peak, 3.1M read from disk.
Jan 21 01:03:43.303117 containerd[1959]: time="2026-01-21T01:03:43.302899724Z" level=info msg="received container exit event container_id:\"da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad\" id:\"da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad\" pid:3084 exit_status:1 exited_at:{seconds:1768957423 nanos:302166032}"
Jan 21 01:03:43.332305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad-rootfs.mount: Deactivated successfully.
Jan 21 01:03:43.603943 kubelet[3359]: I0121 01:03:43.603852 3359 scope.go:117] "RemoveContainer" containerID="da83093bece55c71a40905c97a01ce9e46d60ca3a39d265818fdfd84a478a6ad"
Jan 21 01:03:43.607197 containerd[1959]: time="2026-01-21T01:03:43.607154707Z" level=info msg="CreateContainer within sandbox \"f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 21 01:03:43.623798 containerd[1959]: time="2026-01-21T01:03:43.621005738Z" level=info msg="Container ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d: CDI devices from CRI Config.CDIDevices: []"
Jan 21 01:03:43.625428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007804319.mount: Deactivated successfully.
Jan 21 01:03:43.633800 containerd[1959]: time="2026-01-21T01:03:43.633746217Z" level=info msg="CreateContainer within sandbox \"f7891ed0fccd6f38e51f82207b2ea0bf8e514a6675e03bcddda893552fad6465\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d\""
Jan 21 01:03:43.634529 containerd[1959]: time="2026-01-21T01:03:43.634488108Z" level=info msg="StartContainer for \"ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d\""
Jan 21 01:03:43.635647 containerd[1959]: time="2026-01-21T01:03:43.635613525Z" level=info msg="connecting to shim ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d" address="unix:///run/containerd/s/bd4bedbb3c8c60e4aaeea1891b0b8df78280b60715cd5163e126f8f9eb131454" protocol=ttrpc version=3
Jan 21 01:03:43.668101 systemd[1]: Started cri-containerd-ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d.scope - libcontainer container ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d.
Jan 21 01:03:43.730902 containerd[1959]: time="2026-01-21T01:03:43.730844497Z" level=info msg="StartContainer for \"ff10e7a8e95726def7e564e5900d755d02fbb0fe6494d21440b94ad9ea86c41d\" returns successfully"
Jan 21 01:03:47.572524 kubelet[3359]: E0121 01:03:47.572467 3359 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-12)"
Jan 21 01:03:57.573701 kubelet[3359]: E0121 01:03:57.573477 3359 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-12)"