Jan 23 19:03:28.053049 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026 Jan 23 19:03:28.053085 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 19:03:28.053103 kernel: BIOS-provided physical RAM map: Jan 23 19:03:28.053115 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 19:03:28.053126 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 23 19:03:28.053137 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 23 19:03:28.053151 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 23 19:03:28.053163 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 23 19:03:28.053175 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 23 19:03:28.053186 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 23 19:03:28.053198 kernel: NX (Execute Disable) protection: active Jan 23 19:03:28.053212 kernel: APIC: Static calls initialized Jan 23 19:03:28.053224 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 23 19:03:28.053236 kernel: extended physical RAM map: Jan 23 19:03:28.053251 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 19:03:28.053264 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 23 19:03:28.053280 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Jan 23 19:03:28.053293 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 23 19:03:28.053306 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 23 19:03:28.053319 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 23 19:03:28.053332 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 23 19:03:28.053345 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 23 19:03:28.053358 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 23 19:03:28.053370 kernel: efi: EFI v2.7 by EDK II Jan 23 19:03:28.053384 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 23 19:03:28.053396 kernel: secureboot: Secure boot disabled Jan 23 19:03:28.053409 kernel: SMBIOS 2.7 present. Jan 23 19:03:28.053424 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 23 19:03:28.053437 kernel: DMI: Memory slots populated: 1/1 Jan 23 19:03:28.053450 kernel: Hypervisor detected: KVM Jan 23 19:03:28.053463 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 23 19:03:28.053475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 19:03:28.053489 kernel: kvm-clock: using sched offset of 5156360841 cycles Jan 23 19:03:28.053502 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 19:03:28.053515 kernel: tsc: Detected 2499.998 MHz processor Jan 23 19:03:28.053529 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 19:03:28.053542 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 19:03:28.053558 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 23 19:03:28.053571 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 19:03:28.053584 kernel: x86/PAT: Configuration 
[0-7]: WB WC UC- UC WB WP UC- WT Jan 23 19:03:28.053603 kernel: Using GB pages for direct mapping Jan 23 19:03:28.053617 kernel: ACPI: Early table checksum verification disabled Jan 23 19:03:28.053631 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 23 19:03:28.053645 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 19:03:28.053671 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 19:03:28.053685 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 19:03:28.053700 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 23 19:03:28.053713 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 23 19:03:28.053728 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 19:03:28.053742 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 19:03:28.053756 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 23 19:03:28.053770 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 23 19:03:28.053787 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 23 19:03:28.053801 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 23 19:03:28.053815 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 23 19:03:28.053829 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 23 19:03:28.053844 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 23 19:03:28.053858 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 23 19:03:28.053872 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 23 19:03:28.053913 kernel: ACPI: Reserving SLIT table 
memory at [mem 0x7895a000-0x7895a06b] Jan 23 19:03:28.053930 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 23 19:03:28.053944 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 23 19:03:28.053959 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 23 19:03:28.053973 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 23 19:03:28.053987 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jan 23 19:03:28.054001 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jan 23 19:03:28.054015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 23 19:03:28.054030 kernel: NUMA: Initialized distance table, cnt=1 Jan 23 19:03:28.054043 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Jan 23 19:03:28.054060 kernel: Zone ranges: Jan 23 19:03:28.054075 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 19:03:28.054089 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 23 19:03:28.054103 kernel: Normal empty Jan 23 19:03:28.054117 kernel: Device empty Jan 23 19:03:28.054131 kernel: Movable zone start for each node Jan 23 19:03:28.054145 kernel: Early memory node ranges Jan 23 19:03:28.054159 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 19:03:28.054173 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 23 19:03:28.054187 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 23 19:03:28.054204 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 23 19:03:28.054218 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 19:03:28.054232 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 19:03:28.054246 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 23 19:03:28.054260 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 23 19:03:28.054275 kernel: ACPI: PM-Timer IO 
Port: 0xb008 Jan 23 19:03:28.054289 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 19:03:28.054303 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 23 19:03:28.054317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 19:03:28.054334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 19:03:28.054349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 19:03:28.054363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 19:03:28.054377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 19:03:28.054391 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 19:03:28.054404 kernel: TSC deadline timer available Jan 23 19:03:28.054418 kernel: CPU topo: Max. logical packages: 1 Jan 23 19:03:28.054432 kernel: CPU topo: Max. logical dies: 1 Jan 23 19:03:28.054446 kernel: CPU topo: Max. dies per package: 1 Jan 23 19:03:28.054460 kernel: CPU topo: Max. threads per core: 2 Jan 23 19:03:28.054477 kernel: CPU topo: Num. cores per package: 1 Jan 23 19:03:28.054491 kernel: CPU topo: Num. 
threads per package: 2 Jan 23 19:03:28.054505 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 19:03:28.054519 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 19:03:28.054533 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 23 19:03:28.054547 kernel: Booting paravirtualized kernel on KVM Jan 23 19:03:28.054561 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 19:03:28.054576 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 19:03:28.054590 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 19:03:28.054607 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 19:03:28.054621 kernel: pcpu-alloc: [0] 0 1 Jan 23 19:03:28.054635 kernel: kvm-guest: PV spinlocks enabled Jan 23 19:03:28.054650 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 19:03:28.054666 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 19:03:28.054679 kernel: random: crng init done Jan 23 19:03:28.054693 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 19:03:28.054707 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 19:03:28.054725 kernel: Fallback order for Node 0: 0 Jan 23 19:03:28.054739 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Jan 23 19:03:28.054754 kernel: Policy zone: DMA32 Jan 23 19:03:28.054779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 19:03:28.054796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 19:03:28.054810 kernel: Kernel/User page tables isolation: enabled Jan 23 19:03:28.054825 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 19:03:28.054840 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 19:03:28.054855 kernel: Dynamic Preempt: voluntary Jan 23 19:03:28.054870 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 19:03:28.054909 kernel: rcu: RCU event tracing is enabled. Jan 23 19:03:28.054928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 19:03:28.054944 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 19:03:28.054959 kernel: Rude variant of Tasks RCU enabled. Jan 23 19:03:28.054974 kernel: Tracing variant of Tasks RCU enabled. Jan 23 19:03:28.054989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 19:03:28.055004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 19:03:28.055023 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 19:03:28.055038 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 19:03:28.055054 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 19:03:28.055069 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 19:03:28.055084 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 19:03:28.055099 kernel: Console: colour dummy device 80x25 Jan 23 19:03:28.055114 kernel: printk: legacy console [tty0] enabled Jan 23 19:03:28.055129 kernel: printk: legacy console [ttyS0] enabled Jan 23 19:03:28.055147 kernel: ACPI: Core revision 20240827 Jan 23 19:03:28.055163 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 23 19:03:28.055178 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 19:03:28.055193 kernel: x2apic enabled Jan 23 19:03:28.055208 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 19:03:28.055223 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 23 19:03:28.055238 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 23 19:03:28.055254 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 23 19:03:28.055269 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 23 19:03:28.055287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 19:03:28.055302 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 19:03:28.055316 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 19:03:28.055331 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 23 19:03:28.055346 kernel: RETBleed: Vulnerable Jan 23 19:03:28.055361 kernel: Speculative Store Bypass: Vulnerable Jan 23 19:03:28.055376 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 19:03:28.055390 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 19:03:28.055405 kernel: GDS: Unknown: Dependent on hypervisor status Jan 23 19:03:28.055420 kernel: active return thunk: its_return_thunk Jan 23 19:03:28.055435 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 19:03:28.055453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 19:03:28.055468 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 19:03:28.055484 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 19:03:28.055499 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 23 19:03:28.055514 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 23 19:03:28.055529 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 19:03:28.055544 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 19:03:28.055558 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 19:03:28.055573 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 23 19:03:28.055588 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 19:03:28.055602 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 23 19:03:28.055622 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 23 19:03:28.055638 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 23 19:03:28.055652 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 23 19:03:28.055667 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 23 19:03:28.055681 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 23 19:03:28.055696 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 23 19:03:28.055712 kernel: Freeing SMP alternatives memory: 32K Jan 23 19:03:28.055727 kernel: pid_max: default: 32768 minimum: 301 Jan 23 19:03:28.055742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 19:03:28.055756 kernel: landlock: Up and running. Jan 23 19:03:28.055769 kernel: SELinux: Initializing. Jan 23 19:03:28.055784 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 23 19:03:28.055802 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 23 19:03:28.055817 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4) Jan 23 19:03:28.055832 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 23 19:03:28.055848 kernel: signal: max sigframe size: 3632 Jan 23 19:03:28.055863 kernel: rcu: Hierarchical SRCU implementation. Jan 23 19:03:28.055890 kernel: rcu: Max phase no-delay instances is 400. Jan 23 19:03:28.055911 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 19:03:28.055924 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 19:03:28.055936 kernel: smp: Bringing up secondary CPUs ... Jan 23 19:03:28.055951 kernel: smpboot: x86: Booting SMP configuration: Jan 23 19:03:28.055963 kernel: .... node #0, CPUs: #1 Jan 23 19:03:28.055977 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 23 19:03:28.055992 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 23 19:03:28.056006 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 19:03:28.056019 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 23 19:03:28.056035 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 133384K reserved, 0K cma-reserved) Jan 23 19:03:28.056048 kernel: devtmpfs: initialized Jan 23 19:03:28.056062 kernel: x86/mm: Memory block size: 128MB Jan 23 19:03:28.056078 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 23 19:03:28.056091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 19:03:28.056104 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 19:03:28.056117 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 19:03:28.056130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 19:03:28.056143 kernel: audit: initializing netlink subsys (disabled) Jan 23 19:03:28.056157 kernel: audit: type=2000 audit(1769195005.093:1): state=initialized audit_enabled=0 res=1 Jan 23 19:03:28.056170 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 19:03:28.056185 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 19:03:28.056198 kernel: cpuidle: using governor menu Jan 23 19:03:28.056211 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 19:03:28.056225 kernel: dca service started, version 1.12.1 Jan 23 19:03:28.056238 kernel: PCI: Using configuration type 1 for base access Jan 23 19:03:28.056251 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 19:03:28.056265 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 19:03:28.056278 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 19:03:28.056291 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 19:03:28.056304 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 19:03:28.056319 kernel: ACPI: Added _OSI(Module Device) Jan 23 19:03:28.056332 kernel: ACPI: Added _OSI(Processor Device) Jan 23 19:03:28.056345 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 19:03:28.056359 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 23 19:03:28.056372 kernel: ACPI: Interpreter enabled Jan 23 19:03:28.056383 kernel: ACPI: PM: (supports S0 S5) Jan 23 19:03:28.056396 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 19:03:28.056409 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 19:03:28.056422 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 19:03:28.056437 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 23 19:03:28.056449 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 19:03:28.056661 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 23 19:03:28.056801 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 23 19:03:28.059670 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 23 19:03:28.059705 kernel: acpiphp: Slot [3] registered Jan 23 19:03:28.059721 kernel: acpiphp: Slot [4] registered Jan 23 19:03:28.059742 kernel: acpiphp: Slot [5] registered Jan 23 19:03:28.059758 kernel: acpiphp: Slot [6] registered Jan 23 19:03:28.059772 kernel: acpiphp: Slot [7] registered Jan 23 19:03:28.059788 kernel: acpiphp: Slot [8] registered Jan 23 19:03:28.059803 kernel: acpiphp: Slot [9] registered 
Jan 23 19:03:28.059819 kernel: acpiphp: Slot [10] registered Jan 23 19:03:28.059834 kernel: acpiphp: Slot [11] registered Jan 23 19:03:28.059850 kernel: acpiphp: Slot [12] registered Jan 23 19:03:28.059865 kernel: acpiphp: Slot [13] registered Jan 23 19:03:28.059898 kernel: acpiphp: Slot [14] registered Jan 23 19:03:28.059914 kernel: acpiphp: Slot [15] registered Jan 23 19:03:28.059929 kernel: acpiphp: Slot [16] registered Jan 23 19:03:28.059944 kernel: acpiphp: Slot [17] registered Jan 23 19:03:28.059959 kernel: acpiphp: Slot [18] registered Jan 23 19:03:28.059975 kernel: acpiphp: Slot [19] registered Jan 23 19:03:28.059990 kernel: acpiphp: Slot [20] registered Jan 23 19:03:28.060005 kernel: acpiphp: Slot [21] registered Jan 23 19:03:28.060021 kernel: acpiphp: Slot [22] registered Jan 23 19:03:28.060036 kernel: acpiphp: Slot [23] registered Jan 23 19:03:28.060056 kernel: acpiphp: Slot [24] registered Jan 23 19:03:28.060071 kernel: acpiphp: Slot [25] registered Jan 23 19:03:28.060086 kernel: acpiphp: Slot [26] registered Jan 23 19:03:28.060102 kernel: acpiphp: Slot [27] registered Jan 23 19:03:28.060117 kernel: acpiphp: Slot [28] registered Jan 23 19:03:28.060133 kernel: acpiphp: Slot [29] registered Jan 23 19:03:28.060149 kernel: acpiphp: Slot [30] registered Jan 23 19:03:28.060164 kernel: acpiphp: Slot [31] registered Jan 23 19:03:28.060180 kernel: PCI host bridge to bus 0000:00 Jan 23 19:03:28.060332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 19:03:28.060455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 19:03:28.060574 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 19:03:28.060709 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 23 19:03:28.060828 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 23 19:03:28.060973 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 19:03:28.061133 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 23 19:03:28.061288 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 23 19:03:28.061434 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jan 23 19:03:28.061569 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 23 19:03:28.061713 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 23 19:03:28.061847 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 23 19:03:28.062016 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 23 19:03:28.062150 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 23 19:03:28.062281 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 23 19:03:28.062409 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 23 19:03:28.062544 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 19:03:28.062675 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jan 23 19:03:28.062805 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 19:03:28.063534 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 19:03:28.063702 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jan 23 19:03:28.063837 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jan 23 19:03:28.064042 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jan 23 19:03:28.064183 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jan 23 19:03:28.064203 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 19:03:28.064220 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 19:03:28.064235 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 19:03:28.064255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Jan 23 19:03:28.064271 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 23 19:03:28.064286 kernel: iommu: Default domain type: Translated Jan 23 19:03:28.064301 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 19:03:28.064317 kernel: efivars: Registered efivars operations Jan 23 19:03:28.064332 kernel: PCI: Using ACPI for IRQ routing Jan 23 19:03:28.064348 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 19:03:28.064363 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jan 23 19:03:28.064378 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 23 19:03:28.064396 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 23 19:03:28.064526 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 23 19:03:28.064657 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 23 19:03:28.064789 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 19:03:28.064808 kernel: vgaarb: loaded Jan 23 19:03:28.064824 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 23 19:03:28.064839 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 23 19:03:28.064854 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 19:03:28.064872 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 19:03:28.064946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 19:03:28.064961 kernel: pnp: PnP ACPI init Jan 23 19:03:28.064977 kernel: pnp: PnP ACPI: found 5 devices Jan 23 19:03:28.064992 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 19:03:28.065008 kernel: NET: Registered PF_INET protocol family Jan 23 19:03:28.065024 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 19:03:28.065040 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 23 19:03:28.065055 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Jan 23 19:03:28.065074 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 19:03:28.065089 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 23 19:03:28.065104 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 23 19:03:28.065120 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 23 19:03:28.065135 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 23 19:03:28.065150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 19:03:28.065166 kernel: NET: Registered PF_XDP protocol family Jan 23 19:03:28.065293 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 19:03:28.065415 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 19:03:28.065536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 19:03:28.065653 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 23 19:03:28.065780 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 23 19:03:28.065948 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 23 19:03:28.065970 kernel: PCI: CLS 0 bytes, default 64 Jan 23 19:03:28.065986 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 19:03:28.066002 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 23 19:03:28.066017 kernel: clocksource: Switched to clocksource tsc Jan 23 19:03:28.066037 kernel: Initialise system trusted keyrings Jan 23 19:03:28.066052 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 19:03:28.066067 kernel: Key type asymmetric registered Jan 23 19:03:28.066082 kernel: Asymmetric key parser 'x509' registered Jan 23 19:03:28.066097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 19:03:28.066113 
kernel: io scheduler mq-deadline registered Jan 23 19:03:28.066128 kernel: io scheduler kyber registered Jan 23 19:03:28.066143 kernel: io scheduler bfq registered Jan 23 19:03:28.066159 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 19:03:28.066177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 19:03:28.066193 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 19:03:28.066208 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 19:03:28.066223 kernel: i8042: Warning: Keylock active Jan 23 19:03:28.066238 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 19:03:28.066254 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 19:03:28.066389 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 23 19:03:28.066514 kernel: rtc_cmos 00:00: registered as rtc0 Jan 23 19:03:28.066638 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T19:03:27 UTC (1769195007) Jan 23 19:03:28.066759 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 23 19:03:28.066799 kernel: intel_pstate: CPU model not supported Jan 23 19:03:28.066818 kernel: efifb: probing for efifb Jan 23 19:03:28.066834 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jan 23 19:03:28.066850 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 23 19:03:28.066867 kernel: efifb: scrolling: redraw Jan 23 19:03:28.066903 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 19:03:28.066920 kernel: Console: switching to colour frame buffer device 100x37 Jan 23 19:03:28.066939 kernel: fb0: EFI VGA frame buffer device Jan 23 19:03:28.066955 kernel: pstore: Using crash dump compression: deflate Jan 23 19:03:28.066971 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 19:03:28.066987 kernel: NET: Registered PF_INET6 protocol family Jan 23 19:03:28.067003 kernel: Segment Routing with IPv6 Jan 23 19:03:28.067019 kernel: In-situ OAM 
(IOAM) with IPv6 Jan 23 19:03:28.067035 kernel: NET: Registered PF_PACKET protocol family Jan 23 19:03:28.067051 kernel: Key type dns_resolver registered Jan 23 19:03:28.067066 kernel: IPI shorthand broadcast: enabled Jan 23 19:03:28.067085 kernel: sched_clock: Marking stable (2841002739, 228034152)->(3267879143, -198842252) Jan 23 19:03:28.067101 kernel: registered taskstats version 1 Jan 23 19:03:28.067117 kernel: Loading compiled-in X.509 certificates Jan 23 19:03:28.067133 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 19:03:28.067149 kernel: Demotion targets for Node 0: null Jan 23 19:03:28.067165 kernel: Key type .fscrypt registered Jan 23 19:03:28.067183 kernel: Key type fscrypt-provisioning registered Jan 23 19:03:28.067199 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 19:03:28.067215 kernel: ima: Allocated hash algorithm: sha1 Jan 23 19:03:28.067233 kernel: ima: No architecture policies found Jan 23 19:03:28.067249 kernel: clk: Disabling unused clocks Jan 23 19:03:28.067265 kernel: Warning: unable to open an initial console. Jan 23 19:03:28.067282 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 19:03:28.067298 kernel: Write protecting the kernel read-only data: 40960k Jan 23 19:03:28.067317 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 19:03:28.067336 kernel: Run /init as init process Jan 23 19:03:28.067352 kernel: with arguments: Jan 23 19:03:28.067368 kernel: /init Jan 23 19:03:28.067384 kernel: with environment: Jan 23 19:03:28.067400 kernel: HOME=/ Jan 23 19:03:28.067416 kernel: TERM=linux Jan 23 19:03:28.067434 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 19:03:28.067455 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:03:28.067475 systemd[1]: Detected virtualization amazon.
Jan 23 19:03:28.067492 systemd[1]: Detected architecture x86-64.
Jan 23 19:03:28.067508 systemd[1]: Running in initrd.
Jan 23 19:03:28.067524 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:03:28.067541 systemd[1]: Hostname set to .
Jan 23 19:03:28.067558 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:03:28.067574 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:03:28.067594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:03:28.067610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:03:28.067628 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:03:28.067646 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:03:28.067662 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:03:28.067681 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:03:28.067699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:03:28.067719 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:03:28.067736 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:03:28.067752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:03:28.067767 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:03:28.067784 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:03:28.067801 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:03:28.067818 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:03:28.067835 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:03:28.067855 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:03:28.067872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:03:28.067902 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:03:28.067919 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:03:28.067936 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:03:28.067954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:03:28.067970 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:03:28.067987 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:03:28.068004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:03:28.068025 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:03:28.068042 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:03:28.068060 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:03:28.068076 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 19:03:28.068093 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:03:28.068110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:03:28.068127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:03:28.068175 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 19:03:28.068214 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:03:28.068236 systemd-journald[187]: Journal started
Jan 23 19:03:28.068263 systemd-journald[187]: Runtime Journal (/run/log/journal/ec2e5c7b6135efbef5e7a5a4121ecf2b) is 4.7M, max 38.1M, 33.3M free.
Jan 23 19:03:28.074126 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:03:28.074352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:03:28.076331 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:03:28.077283 systemd-modules-load[189]: Inserted module 'overlay'
Jan 23 19:03:28.086179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:03:28.097829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:03:28.111864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:03:28.119103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:03:28.125253 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:03:28.133904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:03:28.136293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:03:28.142075 kernel: Bridge firewalling registered
Jan 23 19:03:28.137448 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 23 19:03:28.143054 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:03:28.150204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:03:28.156032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:03:28.160484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:03:28.175252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:03:28.179771 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 19:03:28.189479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:03:28.191395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:03:28.197047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:03:28.213301 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:03:28.265709 systemd-resolved[227]: Positive Trust Anchors:
Jan 23 19:03:28.265734 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:03:28.265823 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:03:28.274454 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 23 19:03:28.278440 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:03:28.279661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:03:28.338913 kernel: SCSI subsystem initialized
Jan 23 19:03:28.350951 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 19:03:28.364059 kernel: iscsi: registered transport (tcp)
Jan 23 19:03:28.388808 kernel: iscsi: registered transport (qla4xxx)
Jan 23 19:03:28.388908 kernel: QLogic iSCSI HBA Driver
Jan 23 19:03:28.412127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:03:28.432526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:03:28.436783 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:03:28.495526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:03:28.498482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 19:03:28.554925 kernel: raid6: avx512x4 gen() 17737 MB/s
Jan 23 19:03:28.572917 kernel: raid6: avx512x2 gen() 15007 MB/s
Jan 23 19:03:28.590911 kernel: raid6: avx512x1 gen() 17680 MB/s
Jan 23 19:03:28.608912 kernel: raid6: avx2x4 gen() 16248 MB/s
Jan 23 19:03:28.627924 kernel: raid6: avx2x2 gen() 16291 MB/s
Jan 23 19:03:28.646705 kernel: raid6: avx2x1 gen() 13577 MB/s
Jan 23 19:03:28.648004 kernel: raid6: using algorithm avx512x4 gen() 17737 MB/s
Jan 23 19:03:28.668900 kernel: raid6: .... xor() 7855 MB/s, rmw enabled
Jan 23 19:03:28.668977 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 19:03:28.693913 kernel: xor: automatically using best checksumming function avx
Jan 23 19:03:28.875915 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 19:03:28.882858 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:03:28.885247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:03:28.913713 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 23 19:03:28.920426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:03:28.926382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 19:03:28.952820 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 23 19:03:28.982725 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:03:28.984931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:03:29.040410 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:03:29.044165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 19:03:29.131913 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 19:03:29.135911 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 19:03:29.151901 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 19:03:29.161206 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 19:03:29.184043 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 19:03:29.184240 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 19:03:29.184261 kernel: GPT:9289727 != 33554431
Jan 23 19:03:29.184280 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 19:03:29.184298 kernel: GPT:9289727 != 33554431
Jan 23 19:03:29.184314 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 19:03:29.184334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 19:03:29.184355 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 23 19:03:29.189172 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 19:03:29.197135 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:13:62:e1:25:67
Jan 23 19:03:29.215910 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 19:03:29.229535 (udev-worker)[484]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 19:03:29.231344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:03:29.232552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:03:29.234132 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:03:29.237318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:03:29.246941 kernel: AES CTR mode by8 optimization enabled
Jan 23 19:03:29.245651 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:03:29.265092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:03:29.265237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:03:29.282142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:03:29.295911 kernel: nvme nvme0: using unchecked data buffer
Jan 23 19:03:29.325917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:03:29.428394 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 19:03:29.441149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:03:29.461661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 19:03:29.472698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 19:03:29.482101 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 19:03:29.482835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 19:03:29.484333 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:03:29.485478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:03:29.486725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:03:29.488577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 19:03:29.491711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 19:03:29.516915 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 19:03:29.517946 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:03:29.519252 disk-uuid[668]: Primary Header is updated.
Jan 23 19:03:29.519252 disk-uuid[668]: Secondary Entries is updated.
Jan 23 19:03:29.519252 disk-uuid[668]: Secondary Header is updated.
Jan 23 19:03:29.539917 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 19:03:30.542937 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 19:03:30.544842 disk-uuid[676]: The operation has completed successfully.
Jan 23 19:03:30.706127 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 19:03:30.706279 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 19:03:30.737329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 19:03:30.751714 sh[934]: Success
Jan 23 19:03:30.774962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 19:03:30.775039 kernel: device-mapper: uevent: version 1.0.3
Jan 23 19:03:30.778067 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 19:03:30.789916 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jan 23 19:03:30.910594 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 19:03:30.916032 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 19:03:30.930177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 19:03:30.951906 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (957)
Jan 23 19:03:30.957129 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 19:03:30.957195 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:03:31.037314 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 19:03:31.037395 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 19:03:31.037409 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 19:03:31.041306 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 19:03:31.043033 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:03:31.044198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 19:03:31.046065 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 19:03:31.049035 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 19:03:31.092953 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (992)
Jan 23 19:03:31.100119 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:03:31.100544 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:03:31.108984 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 19:03:31.109067 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 19:03:31.119050 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:03:31.120103 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 19:03:31.123094 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 19:03:31.185373 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:03:31.188667 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:03:31.251735 systemd-networkd[1126]: lo: Link UP
Jan 23 19:03:31.251752 systemd-networkd[1126]: lo: Gained carrier
Jan 23 19:03:31.253603 systemd-networkd[1126]: Enumeration completed
Jan 23 19:03:31.254082 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:03:31.254089 systemd-networkd[1126]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:03:31.254983 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:03:31.256454 systemd[1]: Reached target network.target - Network.
Jan 23 19:03:31.257316 systemd-networkd[1126]: eth0: Link UP
Jan 23 19:03:31.257322 systemd-networkd[1126]: eth0: Gained carrier
Jan 23 19:03:31.257340 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:03:31.270006 systemd-networkd[1126]: eth0: DHCPv4 address 172.31.18.6/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 19:03:31.509642 ignition[1049]: Ignition 2.22.0
Jan 23 19:03:31.509759 ignition[1049]: Stage: fetch-offline
Jan 23 19:03:31.509964 ignition[1049]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:31.509972 ignition[1049]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:31.510165 ignition[1049]: Ignition finished successfully
Jan 23 19:03:31.511944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:03:31.514197 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 19:03:31.551917 ignition[1136]: Ignition 2.22.0
Jan 23 19:03:31.551951 ignition[1136]: Stage: fetch
Jan 23 19:03:31.552344 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:31.552356 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:31.552466 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:31.560987 ignition[1136]: PUT result: OK
Jan 23 19:03:31.563142 ignition[1136]: parsed url from cmdline: ""
Jan 23 19:03:31.563155 ignition[1136]: no config URL provided
Jan 23 19:03:31.563162 ignition[1136]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:03:31.563172 ignition[1136]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:03:31.563191 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:31.564059 ignition[1136]: PUT result: OK
Jan 23 19:03:31.564129 ignition[1136]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 19:03:31.564916 ignition[1136]: GET result: OK
Jan 23 19:03:31.564982 ignition[1136]: parsing config with SHA512: 5c907263019c07c519b90f693610e0391dfa9c8fca8450c3c37eda92ae334868d6c7124ec729037a6991bd46e2fd8381f3b7f74542b1e9597b939e8759449a8b
Jan 23 19:03:31.571252 unknown[1136]: fetched base config from "system"
Jan 23 19:03:31.571272 unknown[1136]: fetched base config from "system"
Jan 23 19:03:31.571804 ignition[1136]: fetch: fetch complete
Jan 23 19:03:31.571279 unknown[1136]: fetched user config from "aws"
Jan 23 19:03:31.571811 ignition[1136]: fetch: fetch passed
Jan 23 19:03:31.571866 ignition[1136]: Ignition finished successfully
Jan 23 19:03:31.575540 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 19:03:31.577617 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 19:03:31.613030 ignition[1143]: Ignition 2.22.0
Jan 23 19:03:31.613042 ignition[1143]: Stage: kargs
Jan 23 19:03:31.613335 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:31.613343 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:31.613419 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:31.614503 ignition[1143]: PUT result: OK
Jan 23 19:03:31.619158 ignition[1143]: kargs: kargs passed
Jan 23 19:03:31.619222 ignition[1143]: Ignition finished successfully
Jan 23 19:03:31.621372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 19:03:31.622864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 19:03:31.654458 ignition[1150]: Ignition 2.22.0
Jan 23 19:03:31.654472 ignition[1150]: Stage: disks
Jan 23 19:03:31.654752 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:31.654773 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:31.654844 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:31.656124 ignition[1150]: PUT result: OK
Jan 23 19:03:31.658364 ignition[1150]: disks: disks passed
Jan 23 19:03:31.658422 ignition[1150]: Ignition finished successfully
Jan 23 19:03:31.660192 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 19:03:31.660641 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 19:03:31.661281 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 19:03:31.661591 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:03:31.662232 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:03:31.662801 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:03:31.664228 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 19:03:31.704574 systemd-fsck[1158]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 19:03:31.707596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 19:03:31.709976 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 19:03:31.879902 kernel: EXT4-fs (nvme0n1p9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 19:03:31.880087 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 19:03:31.881095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:03:31.883098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:03:31.884955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 19:03:31.886140 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 19:03:31.887512 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 19:03:31.887541 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 19:03:31.893348 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 19:03:31.895540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 19:03:31.912922 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1177)
Jan 23 19:03:31.918388 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:03:31.918476 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:03:31.925169 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 19:03:31.925569 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 19:03:31.928634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:03:32.203700 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 19:03:32.222140 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory
Jan 23 19:03:32.228307 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 19:03:32.234061 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 19:03:32.547695 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 19:03:32.550388 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 19:03:32.553038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 19:03:32.570194 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 19:03:32.574826 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:03:32.600643 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 19:03:32.613174 ignition[1289]: INFO : Ignition 2.22.0
Jan 23 19:03:32.613174 ignition[1289]: INFO : Stage: mount
Jan 23 19:03:32.614804 ignition[1289]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:32.614804 ignition[1289]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:32.614804 ignition[1289]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:32.614804 ignition[1289]: INFO : PUT result: OK
Jan 23 19:03:32.617489 ignition[1289]: INFO : mount: mount passed
Jan 23 19:03:32.618103 ignition[1289]: INFO : Ignition finished successfully
Jan 23 19:03:32.619550 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 19:03:32.621516 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 19:03:32.643821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:03:32.676905 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1302)
Jan 23 19:03:32.679946 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:03:32.680004 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:03:32.691913 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 19:03:32.691991 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 19:03:32.693921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:03:32.729374 ignition[1318]: INFO : Ignition 2.22.0
Jan 23 19:03:32.729374 ignition[1318]: INFO : Stage: files
Jan 23 19:03:32.730773 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:03:32.730773 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 19:03:32.730773 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 19:03:32.731947 ignition[1318]: INFO : PUT result: OK
Jan 23 19:03:32.734025 ignition[1318]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 19:03:32.734741 ignition[1318]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 19:03:32.734741 ignition[1318]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 19:03:32.740199 ignition[1318]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 19:03:32.740899 ignition[1318]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 19:03:32.740899 ignition[1318]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 19:03:32.740576 unknown[1318]: wrote ssh authorized keys file for user: core
Jan 23 19:03:32.756000 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 19:03:32.757046 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 19:03:32.832099 systemd-networkd[1126]: eth0: Gained IPv6LL
Jan 23 19:03:32.837126 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 19:03:32.979387 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 19:03:32.979387 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 19:03:32.979387 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 23 19:03:33.196905 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 19:03:33.298699 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 19:03:33.300925 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:03:33.307717 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 19:03:33.700631 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 19:03:33.967327 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:03:33.967327 ignition[1318]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 19:03:33.969249 ignition[1318]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 19:03:33.973841 ignition[1318]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 19:03:33.973841 ignition[1318]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 19:03:33.973841 ignition[1318]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 19:03:33.973841 ignition[1318]: INFO : files: op(e): [finished] setting preset to enabled for
"prepare-helm.service" Jan 23 19:03:33.979627 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:03:33.979627 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:03:33.979627 ignition[1318]: INFO : files: files passed Jan 23 19:03:33.979627 ignition[1318]: INFO : Ignition finished successfully Jan 23 19:03:33.979662 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:03:33.981735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:03:33.985383 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 19:03:33.993995 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 19:03:33.994819 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 19:03:34.002388 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:03:34.003977 initrd-setup-root-after-ignition[1353]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:03:34.004996 initrd-setup-root-after-ignition[1349]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:03:34.006086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:03:34.007600 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 19:03:34.009107 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 19:03:34.064041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 19:03:34.064183 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 19:03:34.065528 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 23 19:03:34.066339 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 19:03:34.067105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 19:03:34.067942 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 19:03:34.090569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:03:34.092255 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 19:03:34.122969 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:03:34.123668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:03:34.124742 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 19:03:34.125569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 19:03:34.125871 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:03:34.127073 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 19:03:34.127985 systemd[1]: Stopped target basic.target - Basic System. Jan 23 19:03:34.128757 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 19:03:34.129560 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:03:34.130468 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 19:03:34.131252 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:03:34.132051 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 19:03:34.132818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:03:34.133573 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 19:03:34.134860 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 23 19:03:34.135641 systemd[1]: Stopped target swap.target - Swaps. Jan 23 19:03:34.136324 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 19:03:34.136499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:03:34.137776 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:03:34.138619 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:03:34.139312 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 19:03:34.140017 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:03:34.140633 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 19:03:34.140840 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 19:03:34.142382 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 19:03:34.142628 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:03:34.143291 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 19:03:34.143431 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 19:03:34.146000 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 19:03:34.146546 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 19:03:34.146757 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:03:34.150345 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 19:03:34.152962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 19:03:34.153764 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:03:34.155243 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 19:03:34.155969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 19:03:34.164998 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 19:03:34.165945 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 19:03:34.185719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 19:03:34.189219 ignition[1373]: INFO : Ignition 2.22.0 Jan 23 19:03:34.189219 ignition[1373]: INFO : Stage: umount Jan 23 19:03:34.192078 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:03:34.192078 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 19:03:34.192078 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 19:03:34.192078 ignition[1373]: INFO : PUT result: OK Jan 23 19:03:34.202168 ignition[1373]: INFO : umount: umount passed Jan 23 19:03:34.202945 ignition[1373]: INFO : Ignition finished successfully Jan 23 19:03:34.204659 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 19:03:34.204841 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 19:03:34.206121 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 19:03:34.206206 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 19:03:34.206738 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 19:03:34.206811 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 19:03:34.207529 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 19:03:34.207587 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 19:03:34.208274 systemd[1]: Stopped target network.target - Network. Jan 23 19:03:34.208943 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 19:03:34.209005 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:03:34.210609 systemd[1]: Stopped target paths.target - Path Units. 
Jan 23 19:03:34.211255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 19:03:34.214975 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:03:34.215449 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 19:03:34.216406 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 19:03:34.217150 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 19:03:34.217206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:03:34.217932 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 19:03:34.217980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:03:34.218592 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 19:03:34.218674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 19:03:34.219302 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 19:03:34.219363 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 19:03:34.220117 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 19:03:34.220774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 19:03:34.223720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 19:03:34.224239 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 19:03:34.229567 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 19:03:34.230129 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 19:03:34.230276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 19:03:34.232769 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 19:03:34.234555 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 23 19:03:34.235488 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 19:03:34.235558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:03:34.237502 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 19:03:34.239133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 19:03:34.239210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:03:34.241051 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:03:34.241114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:03:34.244101 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 19:03:34.244173 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 19:03:34.245175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 19:03:34.245247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:03:34.246203 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:03:34.250159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 19:03:34.250261 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:03:34.267525 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 19:03:34.271831 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:03:34.274316 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 19:03:34.274769 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 19:03:34.276181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 19:03:34.276231 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 19:03:34.277162 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 19:03:34.277233 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:03:34.278658 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 19:03:34.278727 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 19:03:34.280075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 19:03:34.280145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:03:34.282608 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 19:03:34.283936 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 19:03:34.284004 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:03:34.285730 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 19:03:34.285808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:03:34.289494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:03:34.289569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:03:34.293318 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 19:03:34.295489 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 19:03:34.295552 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:03:34.296079 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 19:03:34.296240 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 19:03:34.304264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 23 19:03:34.304417 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 19:03:34.308478 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 19:03:34.308617 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 19:03:34.310271 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 19:03:34.311322 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 19:03:34.311430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 19:03:34.313345 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 19:03:34.327944 systemd[1]: Switching root. Jan 23 19:03:34.382423 systemd-journald[187]: Journal stopped Jan 23 19:03:35.967527 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 19:03:35.967589 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 19:03:35.967604 kernel: SELinux: policy capability open_perms=1 Jan 23 19:03:35.967615 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 19:03:35.967626 kernel: SELinux: policy capability always_check_network=0 Jan 23 19:03:35.967637 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 19:03:35.967648 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 19:03:35.967659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 19:03:35.967673 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 19:03:35.967686 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 19:03:35.967703 kernel: audit: type=1403 audit(1769195014.776:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 19:03:35.967715 systemd[1]: Successfully loaded SELinux policy in 78.062ms. Jan 23 19:03:35.967737 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.346ms. 
Jan 23 19:03:35.967750 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:03:35.967762 systemd[1]: Detected virtualization amazon. Jan 23 19:03:35.967775 systemd[1]: Detected architecture x86-64. Jan 23 19:03:35.967796 systemd[1]: Detected first boot. Jan 23 19:03:35.967807 systemd[1]: Initializing machine ID from VM UUID. Jan 23 19:03:35.967821 kernel: Guest personality initialized and is inactive Jan 23 19:03:35.967832 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 19:03:35.967843 kernel: Initialized host personality Jan 23 19:03:35.967854 zram_generator::config[1416]: No configuration found. Jan 23 19:03:35.967867 kernel: NET: Registered PF_VSOCK protocol family Jan 23 19:03:35.967896 systemd[1]: Populated /etc with preset unit settings. Jan 23 19:03:35.967908 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 19:03:35.967921 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 19:03:35.967936 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 19:03:35.967948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 19:03:35.967960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 19:03:35.967971 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 19:03:35.967983 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 19:03:35.967994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 19:03:35.968006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jan 23 19:03:35.968018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 19:03:35.968030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 19:03:35.968045 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 19:03:35.968057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:03:35.968068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:03:35.968084 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 19:03:35.968099 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 19:03:35.968111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 19:03:35.968123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 19:03:35.968137 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 19:03:35.968148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:03:35.968160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:03:35.968172 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 19:03:35.968183 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 19:03:35.968195 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 19:03:35.968208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 19:03:35.968223 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:03:35.968239 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 23 19:03:35.968253 systemd[1]: Reached target slices.target - Slice Units. Jan 23 19:03:35.968265 systemd[1]: Reached target swap.target - Swaps. Jan 23 19:03:35.968278 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 19:03:35.968289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 19:03:35.968301 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 19:03:35.968313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:03:35.968324 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 19:03:35.968336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:03:35.968348 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 19:03:35.968359 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 19:03:35.968374 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 19:03:35.968385 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 19:03:35.968397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:35.968408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 19:03:35.968420 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 19:03:35.968432 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 19:03:35.968444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 19:03:35.968455 systemd[1]: Reached target machines.target - Containers. Jan 23 19:03:35.968469 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 19:03:35.968481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:03:35.968493 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 19:03:35.968504 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 19:03:35.968516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:03:35.968528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:03:35.968541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:03:35.968553 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 19:03:35.968564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:03:35.968578 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 19:03:35.968590 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 19:03:35.968601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 19:03:35.968612 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 19:03:35.968624 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 19:03:35.968636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:03:35.968648 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 19:03:35.968660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 19:03:35.968674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 23 19:03:35.968686 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 19:03:35.968698 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 19:03:35.968710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:03:35.968724 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 19:03:35.968735 systemd[1]: Stopped verity-setup.service. Jan 23 19:03:35.968749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:35.968762 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 19:03:35.968774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 19:03:35.968786 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 19:03:35.968800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 19:03:35.968811 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 19:03:35.968823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 19:03:35.968835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:03:35.968847 kernel: ACPI: bus type drm_connector registered Jan 23 19:03:35.968858 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 19:03:35.968869 kernel: fuse: init (API version 7.41) Jan 23 19:03:35.968893 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 19:03:35.968905 kernel: loop: module loaded Jan 23 19:03:35.968919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:03:35.968931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:03:35.968947 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 23 19:03:35.968958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:03:35.968970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:03:35.968982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:03:35.968994 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 19:03:35.969005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 19:03:35.969017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 19:03:35.969053 systemd-journald[1502]: Collecting audit messages is disabled. Jan 23 19:03:35.969078 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:03:35.969090 systemd-journald[1502]: Journal started Jan 23 19:03:35.969114 systemd-journald[1502]: Runtime Journal (/run/log/journal/ec2e5c7b6135efbef5e7a5a4121ecf2b) is 4.7M, max 38.1M, 33.3M free. Jan 23 19:03:35.646345 systemd[1]: Queued start job for default target multi-user.target. Jan 23 19:03:35.661222 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 19:03:35.661930 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 19:03:35.972966 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:03:35.973020 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 19:03:35.974301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:03:35.975005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:03:35.975653 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 19:03:35.976325 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 19:03:35.988232 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 23 19:03:35.990975 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 19:03:35.993036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 19:03:35.993988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 19:03:35.994019 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:03:35.995817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 19:03:36.006033 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 19:03:36.007483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:03:36.012087 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 19:03:36.018772 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 19:03:36.020061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:03:36.022804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 19:03:36.024989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:03:36.026034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:03:36.028993 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 19:03:36.034125 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 19:03:36.037535 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 19:03:36.041173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 19:03:36.042109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 19:03:36.070644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 19:03:36.071933 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 19:03:36.078055 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 19:03:36.076285 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 19:03:36.078782 systemd-journald[1502]: Time spent on flushing to /var/log/journal/ec2e5c7b6135efbef5e7a5a4121ecf2b is 40.018ms for 1028 entries. Jan 23 19:03:36.078782 systemd-journald[1502]: System Journal (/var/log/journal/ec2e5c7b6135efbef5e7a5a4121ecf2b) is 8M, max 195.6M, 187.6M free. Jan 23 19:03:36.127352 systemd-journald[1502]: Received client request to flush runtime journal. Jan 23 19:03:36.096038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:03:36.096738 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 19:03:36.101251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 19:03:36.131386 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 19:03:36.150309 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 19:03:36.174839 systemd-tmpfiles[1564]: ACLs are not supported, ignoring. Jan 23 19:03:36.178158 systemd-tmpfiles[1564]: ACLs are not supported, ignoring. Jan 23 19:03:36.187262 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 19:03:36.193008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 19:03:36.205914 kernel: loop1: detected capacity change from 0 to 229808 Jan 23 19:03:36.254962 kernel: loop2: detected capacity change from 0 to 72368 Jan 23 19:03:36.294951 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 19:03:36.380921 kernel: loop4: detected capacity change from 0 to 128560 Jan 23 19:03:36.399940 kernel: loop5: detected capacity change from 0 to 229808 Jan 23 19:03:36.428916 kernel: loop6: detected capacity change from 0 to 72368 Jan 23 19:03:36.457318 kernel: loop7: detected capacity change from 0 to 110984 Jan 23 19:03:36.512840 (sd-merge)[1578]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 19:03:36.514190 (sd-merge)[1578]: Merged extensions into '/usr'. Jan 23 19:03:36.521738 systemd[1]: Reload requested from client PID 1550 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 19:03:36.522323 systemd[1]: Reloading... Jan 23 19:03:36.636905 zram_generator::config[1604]: No configuration found. Jan 23 19:03:37.033906 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 19:03:37.034652 systemd[1]: Reloading finished in 511 ms. Jan 23 19:03:37.050798 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 19:03:37.051521 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 19:03:37.063361 systemd[1]: Starting ensure-sysext.service... Jan 23 19:03:37.067034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:03:37.070994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:03:37.090329 systemd[1]: Reload requested from client PID 1656 ('systemctl') (unit ensure-sysext.service)... Jan 23 19:03:37.090421 systemd[1]: Reloading... Jan 23 19:03:37.092028 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 23 19:03:37.092502 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 19:03:37.092777 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 19:03:37.093047 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 19:03:37.093826 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 19:03:37.094088 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Jan 23 19:03:37.094146 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Jan 23 19:03:37.105997 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:03:37.106008 systemd-tmpfiles[1657]: Skipping /boot Jan 23 19:03:37.116655 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:03:37.116674 systemd-tmpfiles[1657]: Skipping /boot Jan 23 19:03:37.131991 systemd-udevd[1658]: Using default interface naming scheme 'v255'. Jan 23 19:03:37.180925 zram_generator::config[1686]: No configuration found. Jan 23 19:03:37.486032 (udev-worker)[1699]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 19:03:37.669071 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 19:03:37.671070 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 19:03:37.677968 kernel: ACPI: button: Power Button [PWRF] Jan 23 19:03:37.680376 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 23 19:03:37.685921 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 23 19:03:37.692900 kernel: ACPI: button: Sleep Button [SLPF] Jan 23 19:03:37.707947 ldconfig[1545]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 19:03:37.712827 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 19:03:37.715551 systemd[1]: Reloading finished in 624 ms. Jan 23 19:03:37.729920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:03:37.731664 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 19:03:37.734946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:03:37.781278 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:03:37.788038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 19:03:37.796301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 19:03:37.803323 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:03:37.812120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:03:37.823317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 19:03:37.834727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 23 19:03:37.835249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:03:37.838971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:03:37.853035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:03:37.864278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:03:37.865613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:03:37.865798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:03:37.867034 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:37.875635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:37.876253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:03:37.876459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:03:37.876584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:03:37.887000 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 23 19:03:37.887578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:37.898106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:37.898645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:03:37.904238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:03:37.906125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:03:37.906304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:03:37.906557 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 19:03:37.907786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:03:37.920590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:03:37.921989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:03:37.924070 systemd[1]: Finished ensure-sysext.service. Jan 23 19:03:37.939768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 19:03:37.949706 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 19:03:37.963046 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 19:03:37.973654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 19:03:37.974388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:03:37.976430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:03:37.977699 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:03:37.978245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:03:37.979962 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:03:37.980350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:03:37.984708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:03:38.010739 augenrules[1865]: No rules Jan 23 19:03:38.015132 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:03:38.015479 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:03:38.019329 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 19:03:38.021131 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 19:03:38.023865 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 19:03:38.049260 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 19:03:38.182830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 19:03:38.185098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 19:03:38.234390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 23 19:03:38.256483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:03:38.268739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:03:38.269535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:03:38.275821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:03:38.363134 systemd-networkd[1809]: lo: Link UP Jan 23 19:03:38.365931 systemd-networkd[1809]: lo: Gained carrier Jan 23 19:03:38.371692 systemd-networkd[1809]: Enumeration completed Jan 23 19:03:38.371868 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:03:38.372184 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:03:38.372190 systemd-networkd[1809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:03:38.377323 systemd-networkd[1809]: eth0: Link UP Jan 23 19:03:38.377579 systemd-networkd[1809]: eth0: Gained carrier Jan 23 19:03:38.377654 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:03:38.378607 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 19:03:38.384249 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 19:03:38.404007 systemd-networkd[1809]: eth0: DHCPv4 address 172.31.18.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 19:03:38.415134 systemd-resolved[1815]: Positive Trust Anchors: Jan 23 19:03:38.417164 systemd-resolved[1815]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:03:38.417237 systemd-resolved[1815]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:03:38.431602 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 19:03:38.434760 systemd-resolved[1815]: Defaulting to hostname 'linux'. Jan 23 19:03:38.443153 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 19:03:38.445057 systemd[1]: Reached target network.target - Network. Jan 23 19:03:38.445674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:03:38.473782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:03:38.474547 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:03:38.475227 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 19:03:38.475676 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 19:03:38.476146 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 19:03:38.476678 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 19:03:38.477350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 23 19:03:38.477778 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 19:03:38.478180 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 19:03:38.478230 systemd[1]: Reached target paths.target - Path Units. Jan 23 19:03:38.478604 systemd[1]: Reached target timers.target - Timer Units. Jan 23 19:03:38.480399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 19:03:38.482283 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 19:03:38.485002 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 19:03:38.485575 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 19:03:38.486079 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 19:03:38.488823 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 19:03:38.489753 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 19:03:38.490960 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 19:03:38.492221 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:03:38.492633 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:03:38.493106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:03:38.493147 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:03:38.494267 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 19:03:38.498028 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 19:03:38.499971 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 23 19:03:38.506991 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 19:03:38.510090 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 19:03:38.514157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 19:03:38.514790 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 19:03:38.518112 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 19:03:38.525122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 19:03:38.531284 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 19:03:38.544321 jq[1942]: false Jan 23 19:03:38.551362 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 19:03:38.567848 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 19:03:38.583034 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 19:03:38.592307 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 19:03:38.601461 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:03:38.604801 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:03:38.608143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:03:38.611067 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Refreshing passwd entry cache Jan 23 19:03:38.608110 oslogin_cache_refresh[1944]: Refreshing passwd entry cache Jan 23 19:03:38.610123 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 23 19:03:38.618092 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:03:38.620915 extend-filesystems[1943]: Found /dev/nvme0n1p6 Jan 23 19:03:38.627409 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 19:03:38.628371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:03:38.628615 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:03:38.635920 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Failure getting users, quitting Jan 23 19:03:38.635920 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:03:38.635920 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Refreshing group entry cache Jan 23 19:03:38.632359 oslogin_cache_refresh[1944]: Failure getting users, quitting Jan 23 19:03:38.632385 oslogin_cache_refresh[1944]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:03:38.632472 oslogin_cache_refresh[1944]: Refreshing group entry cache Jan 23 19:03:38.643909 extend-filesystems[1943]: Found /dev/nvme0n1p9 Jan 23 19:03:38.643640 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 19:03:38.643946 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:03:38.655842 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Failure getting groups, quitting Jan 23 19:03:38.655842 google_oslogin_nss_cache[1944]: oslogin_cache_refresh[1944]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:03:38.655827 oslogin_cache_refresh[1944]: Failure getting groups, quitting Jan 23 19:03:38.655845 oslogin_cache_refresh[1944]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 23 19:03:38.670311 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 19:03:38.671451 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:03:38.672458 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:03:38.673956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 19:03:38.683346 extend-filesystems[1943]: Checking size of /dev/nvme0n1p9 Jan 23 19:03:38.706764 jq[1963]: true Jan 23 19:03:38.730634 extend-filesystems[1943]: Resized partition /dev/nvme0n1p9 Jan 23 19:03:38.734995 update_engine[1960]: I20260123 19:03:38.734387 1960 main.cc:92] Flatcar Update Engine starting Jan 23 19:03:38.739904 tar[1967]: linux-amd64/LICENSE Jan 23 19:03:38.740216 tar[1967]: linux-amd64/helm Jan 23 19:03:38.747201 ntpd[1946]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: ---------------------------------------------------- Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: corporation. 
Support and training for ntp-4 are Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: available at https://www.nwtime.org/support Jan 23 19:03:38.750246 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: ---------------------------------------------------- Jan 23 19:03:38.747270 ntpd[1946]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:38.747280 ntpd[1946]: ---------------------------------------------------- Jan 23 19:03:38.747290 ntpd[1946]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:38.747299 ntpd[1946]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:38.747308 ntpd[1946]: corporation. Support and training for ntp-4 are Jan 23 19:03:38.747318 ntpd[1946]: available at https://www.nwtime.org/support Jan 23 19:03:38.747327 ntpd[1946]: ---------------------------------------------------- Jan 23 19:03:38.756811 extend-filesystems[1995]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:03:38.822331 kernel: ntpd[1946]: segfault at 24 ip 000055b2ea986aeb sp 00007ffc92fd5b20 error 4 in ntpd[68aeb,55b2ea924000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 19:03:38.822392 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 19:03:38.822416 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: proto: precision = 0.090 usec (-23) Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: basedate set to 2026-01-11 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: 
Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:38.822437 ntpd[1946]: 23 Jan 19:03:38 ntpd[1946]: unable to create socket on eth0 (5) for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:38.764710 (ntainerd)[1988]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:03:38.759095 ntpd[1946]: proto: precision = 0.090 usec (-23) Jan 23 19:03:38.826228 coreos-metadata[1939]: Jan 23 19:03:38.825 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 19:03:38.774747 systemd-coredump[1996]: Process 1946 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 19:03:38.762329 ntpd[1946]: basedate set to 2026-01-11 Jan 23 19:03:38.790070 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 19:03:38.833325 jq[1989]: true Jan 23 19:03:38.762350 ntpd[1946]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:38.823834 systemd[1]: Started systemd-coredump@0-1996-0.service - Process Core Dump (PID 1996/UID 0). Jan 23 19:03:38.762489 ntpd[1946]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:38.826605 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 19:03:38.853195 coreos-metadata[1939]: Jan 23 19:03:38.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 19:03:38.853195 coreos-metadata[1939]: Jan 23 19:03:38.835 INFO Fetch successful Jan 23 19:03:38.853195 coreos-metadata[1939]: Jan 23 19:03:38.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 19:03:38.853195 coreos-metadata[1939]: Jan 23 19:03:38.843 INFO Fetch successful Jan 23 19:03:38.853195 coreos-metadata[1939]: Jan 23 19:03:38.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 19:03:38.762517 ntpd[1946]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:38.840273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:03:38.764084 ntpd[1946]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:38.840340 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 19:03:38.764118 ntpd[1946]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:38.842069 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:03:38.764148 ntpd[1946]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:38.842103 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 19:03:38.764178 ntpd[1946]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:38.764197 ntpd[1946]: unable to create socket on eth0 (5) for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:38.823176 dbus-daemon[1940]: [system] SELinux support is enabled Jan 23 19:03:38.856099 coreos-metadata[1939]: Jan 23 19:03:38.855 INFO Fetch successful Jan 23 19:03:38.856099 coreos-metadata[1939]: Jan 23 19:03:38.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 19:03:38.856099 coreos-metadata[1939]: Jan 23 19:03:38.855 INFO Fetch successful Jan 23 19:03:38.856099 coreos-metadata[1939]: Jan 23 19:03:38.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 19:03:38.862551 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 19:03:38.864539 coreos-metadata[1939]: Jan 23 19:03:38.863 INFO Fetch failed with 404: resource not found Jan 23 19:03:38.864539 coreos-metadata[1939]: Jan 23 19:03:38.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 19:03:38.865200 coreos-metadata[1939]: Jan 23 19:03:38.865 INFO Fetch successful Jan 23 19:03:38.865200 coreos-metadata[1939]: Jan 23 19:03:38.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 19:03:38.866096 dbus-daemon[1940]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1809 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.871 INFO Fetch successful Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.874 INFO Fetch successful 
Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.874 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.875 INFO Fetch successful Jan 23 19:03:38.875972 coreos-metadata[1939]: Jan 23 19:03:38.875 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 19:03:38.882903 update_engine[1960]: I20260123 19:03:38.876488 1960 update_check_scheduler.cc:74] Next update check in 10m47s Jan 23 19:03:38.883001 coreos-metadata[1939]: Jan 23 19:03:38.879 INFO Fetch successful Jan 23 19:03:38.876991 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 19:03:38.878864 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:03:38.898524 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 19:03:38.951373 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 19:03:38.979079 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 19:03:38.979079 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 19:03:38.979079 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 19:03:38.986684 extend-filesystems[1943]: Resized filesystem in /dev/nvme0n1p9 Jan 23 19:03:38.981102 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:03:38.981387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 19:03:39.017566 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 19:03:39.018867 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 19:03:39.050395 systemd-logind[1959]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 19:03:39.050428 systemd-logind[1959]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 23 19:03:39.050449 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:03:39.055110 systemd-logind[1959]: New seat seat0. Jan 23 19:03:39.056082 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:03:39.057918 bash[2033]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:03:39.059213 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:03:39.073958 systemd[1]: Starting sshkeys.service... Jan 23 19:03:39.079235 sshd_keygen[1982]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:03:39.158798 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 19:03:39.165373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 19:03:39.306628 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 19:03:39.313255 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:03:39.337025 systemd-coredump[1999]: Process 1946 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1946: #0 0x000055b2ea986aeb n/a (ntpd + 0x68aeb) #1 0x000055b2ea92fcdf n/a (ntpd + 0x11cdf) #2 0x000055b2ea930575 n/a (ntpd + 0x12575) #3 0x000055b2ea92bd8a n/a (ntpd + 0xdd8a) #4 0x000055b2ea92d5d3 n/a (ntpd + 0xf5d3) #5 0x000055b2ea935fd1 n/a (ntpd + 0x17fd1) #6 0x000055b2ea926c2d n/a (ntpd + 0x8c2d) #7 0x00007f7dc279216c n/a (libc.so.6 + 0x2716c) #8 0x00007f7dc2792229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055b2ea926c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 19:03:39.347147 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:03:39.347469 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 19:03:39.352097 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:03:39.353526 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 19:03:39.353732 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 19:03:39.372992 systemd[1]: systemd-coredump@0-1996-0.service: Deactivated successfully. Jan 23 19:03:39.380628 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 19:03:39.393834 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 19:03:39.402325 dbus-daemon[1940]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 19:03:39.435303 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 19:03:39.473957 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 19:03:39.493830 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 19:03:39.521274 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 19:03:39.533157 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 23 19:03:39.543148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:03:39.544250 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 19:03:39.583492 coreos-metadata[2063]: Jan 23 19:03:39.580 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 19:03:39.588211 coreos-metadata[2063]: Jan 23 19:03:39.587 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 19:03:39.589120 coreos-metadata[2063]: Jan 23 19:03:39.588 INFO Fetch successful Jan 23 19:03:39.589120 coreos-metadata[2063]: Jan 23 19:03:39.589 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 19:03:39.594761 coreos-metadata[2063]: Jan 23 19:03:39.591 INFO Fetch successful Jan 23 19:03:39.596383 unknown[2063]: wrote ssh authorized keys file for user: core Jan 23 19:03:39.602365 locksmithd[2011]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:03:39.612039 ntpd[2132]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: ---------------------------------------------------- Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: corporation. 
Support and training for ntp-4 are Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: available at https://www.nwtime.org/support Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: ---------------------------------------------------- Jan 23 19:03:39.613283 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: proto: precision = 0.055 usec (-24) Jan 23 19:03:39.612117 ntpd[2132]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:39.612128 ntpd[2132]: ---------------------------------------------------- Jan 23 19:03:39.612138 ntpd[2132]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:39.612147 ntpd[2132]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:39.612156 ntpd[2132]: corporation. Support and training for ntp-4 are Jan 23 19:03:39.612165 ntpd[2132]: available at https://www.nwtime.org/support Jan 23 19:03:39.612175 ntpd[2132]: ---------------------------------------------------- Jan 23 19:03:39.612792 ntpd[2132]: proto: precision = 0.055 usec (-24) Jan 23 19:03:39.622289 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: basedate set to 2026-01-11 Jan 23 19:03:39.622289 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:39.622289 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:39.622289 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:39.619611 ntpd[2132]: basedate set to 2026-01-11 Jan 23 19:03:39.619635 ntpd[2132]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:39.619741 ntpd[2132]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:39.619767 ntpd[2132]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:39.631615 kernel: ntpd[2132]: segfault at 24 ip 000055ea96d9baeb sp 00007ffe99599960 error 4 in ntpd[68aeb,55ea96d39000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 19:03:39.631726 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 
51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 19:03:39.622837 ntpd[2132]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:39.631830 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:39.631830 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:39.631830 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:39.631830 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:39.631830 ntpd[2132]: 23 Jan 19:03:39 ntpd[2132]: unable to create socket on eth0 (5) for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:39.622874 ntpd[2132]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:39.622924 ntpd[2132]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:39.622954 ntpd[2132]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:39.622976 ntpd[2132]: unable to create socket on eth0 (5) for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:39.655075 systemd-coredump[2162]: Process 2132 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 19:03:39.666092 systemd[1]: Started systemd-coredump@1-2162-0.service - Process Core Dump (PID 2162/UID 0). Jan 23 19:03:39.676271 update-ssh-keys[2161]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:03:39.677937 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 19:03:39.688788 systemd[1]: Finished sshkeys.service. 
Jan 23 19:03:39.727781 containerd[1988]: time="2026-01-23T19:03:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:03:39.731776 containerd[1988]: time="2026-01-23T19:03:39.731717794Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:03:39.799490 containerd[1988]: time="2026-01-23T19:03:39.798728587Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.683µs" Jan 23 19:03:39.802002 containerd[1988]: time="2026-01-23T19:03:39.801957927Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:03:39.802091 containerd[1988]: time="2026-01-23T19:03:39.802009068Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:03:39.802234 containerd[1988]: time="2026-01-23T19:03:39.802209522Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:03:39.802286 containerd[1988]: time="2026-01-23T19:03:39.802240288Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:03:39.802322 containerd[1988]: time="2026-01-23T19:03:39.802286723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802617 containerd[1988]: time="2026-01-23T19:03:39.802359478Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802617 containerd[1988]: time="2026-01-23T19:03:39.802376908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
19:03:39.802716 containerd[1988]: time="2026-01-23T19:03:39.802641831Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802716 containerd[1988]: time="2026-01-23T19:03:39.802663535Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802716 containerd[1988]: time="2026-01-23T19:03:39.802678195Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802716 containerd[1988]: time="2026-01-23T19:03:39.802689990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:03:39.802871 containerd[1988]: time="2026-01-23T19:03:39.802794619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:03:39.803411 containerd[1988]: time="2026-01-23T19:03:39.803089400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:03:39.803411 containerd[1988]: time="2026-01-23T19:03:39.803138963Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:03:39.803411 containerd[1988]: time="2026-01-23T19:03:39.803158580Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:03:39.803411 containerd[1988]: time="2026-01-23T19:03:39.803203784Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:03:39.808291 
containerd[1988]: time="2026-01-23T19:03:39.808145604Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:03:39.808291 containerd[1988]: time="2026-01-23T19:03:39.808258813Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815556100Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815639352Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815659249Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815717966Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815736986Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815752761Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815771357Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815817730Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815834679Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: 
time="2026-01-23T19:03:39.815849911Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815865116Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.815904212Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.816052862Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:03:39.816423 containerd[1988]: time="2026-01-23T19:03:39.816079492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816100100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816118924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816135539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816151999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816167747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816182459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816198186Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816212974Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816228347Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816289315Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.816306553Z" level=info msg="Start snapshots syncer" Jan 23 19:03:39.819750 containerd[1988]: time="2026-01-23T19:03:39.817120217Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:03:39.820206 containerd[1988]: time="2026-01-23T19:03:39.817597548Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:03:39.820206 containerd[1988]: time="2026-01-23T19:03:39.817672997Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.820748365Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.820975604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821018056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821034197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821048794Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821075973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821095267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821111523Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821142224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821157148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:03:39.821388 containerd[1988]: time="2026-01-23T19:03:39.821173269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825004794Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825044288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825059316Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825081258Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825093832Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825107071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825130137Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825151992Z" level=info msg="runtime interface created" Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825159223Z" level=info msg="created NRI interface" Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825170420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825191787Z" level=info msg="Connect containerd service" Jan 23 19:03:39.826343 containerd[1988]: time="2026-01-23T19:03:39.825231558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:03:39.828296 
containerd[1988]: time="2026-01-23T19:03:39.827259777Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:03:39.901148 polkitd[2127]: Started polkitd version 126 Jan 23 19:03:39.914975 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 19:03:39.919341 polkitd[2127]: Loading rules from directory /run/polkit-1/rules.d Jan 23 19:03:39.919420 polkitd[2127]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 19:03:39.921925 polkitd[2127]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 19:03:39.921982 polkitd[2127]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 19:03:39.922032 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 19:03:39.922676 polkitd[2127]: Finished loading, compiling and executing 2 rules Jan 23 19:03:39.923110 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 19:03:39.928160 dbus-daemon[1940]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 19:03:39.928506 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 19:03:39.932459 systemd-coredump[2163]: Process 2132 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 2132: #0 0x000055ea96d9baeb n/a (ntpd + 0x68aeb) #1 0x000055ea96d44cdf n/a (ntpd + 0x11cdf) #2 0x000055ea96d45575 n/a (ntpd + 0x12575) #3 0x000055ea96d40d8a n/a (ntpd + 0xdd8a) #4 0x000055ea96d425d3 n/a (ntpd + 0xf5d3) #5 0x000055ea96d4afd1 n/a (ntpd + 0x17fd1) #6 0x000055ea96d3bc2d n/a (ntpd + 0x8c2d) #7 0x00007fefe8c9f16c n/a (libc.so.6 + 0x2716c) #8 0x00007fefe8c9f229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055ea96d3bc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 19:03:39.935320 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 19:03:39.935527 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 19:03:39.941987 systemd[1]: systemd-coredump@1-2162-0.service: Deactivated successfully. Jan 23 19:03:39.955947 systemd-hostnamed[2010]: Hostname set to (transient) Jan 23 19:03:39.956985 systemd-resolved[1815]: System hostname changed to 'ip-172-31-18-6'. Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.081924753Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082012982Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082042797Z" level=info msg="Start subscribing containerd event" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082073094Z" level=info msg="Start recovering state" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082173746Z" level=info msg="Start event monitor" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082187579Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082196709Z" level=info msg="Start streaming server" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082211857Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082221585Z" level=info msg="runtime interface starting up..." Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082230177Z" level=info msg="starting plugins..." Jan 23 19:03:40.082950 containerd[1988]: time="2026-01-23T19:03:40.082246589Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:03:40.082512 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:03:40.085858 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 23 19:03:40.089424 containerd[1988]: time="2026-01-23T19:03:40.086135136Z" level=info msg="containerd successfully booted in 0.358809s" Jan 23 19:03:40.090194 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 19:03:40.106915 tar[1967]: linux-amd64/README.md Jan 23 19:03:40.126806 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 19:03:40.131182 ntpd[2203]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:40.131250 ntpd[2203]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: ---------------------------------------------------- Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: corporation. Support and training for ntp-4 are Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: available at https://www.nwtime.org/support Jan 23 19:03:40.131586 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: ---------------------------------------------------- Jan 23 19:03:40.131262 ntpd[2203]: ---------------------------------------------------- Jan 23 19:03:40.131271 ntpd[2203]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:40.132299 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: proto: precision = 0.072 usec (-24) Jan 23 19:03:40.131280 ntpd[2203]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:40.131288 ntpd[2203]: corporation. 
Support and training for ntp-4 are Jan 23 19:03:40.132429 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: basedate set to 2026-01-11 Jan 23 19:03:40.132429 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:40.131297 ntpd[2203]: available at https://www.nwtime.org/support Jan 23 19:03:40.132550 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:40.132550 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:40.131306 ntpd[2203]: ---------------------------------------------------- Jan 23 19:03:40.132088 ntpd[2203]: proto: precision = 0.072 usec (-24) Jan 23 19:03:40.132724 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:40.132724 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:40.132724 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:40.132342 ntpd[2203]: basedate set to 2026-01-11 Jan 23 19:03:40.132971 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:40.132971 ntpd[2203]: 23 Jan 19:03:40 ntpd[2203]: unable to create socket on eth0 (5) for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:40.132356 ntpd[2203]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:40.132439 ntpd[2203]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:40.132466 ntpd[2203]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:40.132638 ntpd[2203]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:40.132665 ntpd[2203]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:40.132691 ntpd[2203]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:40.132725 ntpd[2203]: bind(21) AF_INET6 [fe80::413:62ff:fee1:2567%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 19:03:40.132744 ntpd[2203]: unable to create socket on eth0 (5) 
for [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:40.133710 kernel: ntpd[2203]: segfault at 24 ip 000056139d11faeb sp 00007ffee839b700 error 4 in ntpd[68aeb,56139d0bd000+80000] likely on CPU 1 (core 0, socket 0) Jan 23 19:03:40.135787 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 19:03:40.143498 systemd-coredump[2208]: Process 2203 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 19:03:40.148760 systemd[1]: Started systemd-coredump@2-2208-0.service - Process Core Dump (PID 2208/UID 0). Jan 23 19:03:40.228699 systemd-coredump[2209]: Process 2203 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2203: #0 0x000056139d11faeb n/a (ntpd + 0x68aeb) #1 0x000056139d0c8cdf n/a (ntpd + 0x11cdf) #2 0x000056139d0c9575 n/a (ntpd + 0x12575) #3 0x000056139d0c4d8a n/a (ntpd + 0xdd8a) #4 0x000056139d0c65d3 n/a (ntpd + 0xf5d3) #5 0x000056139d0cefd1 n/a (ntpd + 0x17fd1) #6 0x000056139d0bfc2d n/a (ntpd + 0x8c2d) #7 0x00007fa9c7d7d16c n/a (libc.so.6 + 0x2716c) #8 0x00007fa9c7d7d229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056139d0bfc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 19:03:40.230627 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 19:03:40.230824 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 19:03:40.234600 systemd[1]: systemd-coredump@2-2208-0.service: Deactivated successfully. 
Jan 23 19:03:40.320087 systemd-networkd[1809]: eth0: Gained IPv6LL Jan 23 19:03:40.323098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:03:40.324257 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:03:40.326581 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 19:03:40.332091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:03:40.338260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 19:03:40.346609 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 3. Jan 23 19:03:40.352206 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 19:03:40.412645 ntpd[2225]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting Jan 23 19:03:40.413366 ntpd[2225]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 19:03:40.413827 ntpd[2225]: ---------------------------------------------------- Jan 23 19:03:40.413838 ntpd[2225]: ntp-4 is maintained by Network Time Foundation, Jan 23 19:03:40.413848 ntpd[2225]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 19:03:40.413857 ntpd[2225]: corporation. 
Support and training for ntp-4 are Jan 23 19:03:40.413867 ntpd[2225]: available at https://www.nwtime.org/support Jan 23 19:03:40.413913 ntpd[2225]: ---------------------------------------------------- Jan 23 19:03:40.414646 ntpd[2225]: proto: precision = 0.098 usec (-23) Jan 23 19:03:40.414941 ntpd[2225]: basedate set to 2026-01-11 Jan 23 19:03:40.414955 ntpd[2225]: gps base set to 2026-01-11 (week 2401) Jan 23 19:03:40.415037 ntpd[2225]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 19:03:40.415063 ntpd[2225]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 19:03:40.416318 ntpd[2225]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 19:03:40.416468 ntpd[2225]: Listen normally on 3 eth0 172.31.18.6:123 Jan 23 19:03:40.416561 ntpd[2225]: Listen normally on 4 lo [::1]:123 Jan 23 19:03:40.416587 ntpd[2225]: Listen normally on 5 eth0 [fe80::413:62ff:fee1:2567%2]:123 Jan 23 19:03:40.416612 ntpd[2225]: Listening on routing socket on fd #22 for interface updates Jan 23 19:03:40.420751 ntpd[2225]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 19:03:40.420989 ntpd[2225]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 19:03:40.421262 systemd[1]: Finished nvidia.service - NVIDIA 
Configure Service. Jan 23 19:03:40.442458 amazon-ssm-agent[2220]: Initializing new seelog logger Jan 23 19:03:40.442861 amazon-ssm-agent[2220]: New Seelog Logger Creation Complete Jan 23 19:03:40.442861 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.442861 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.443199 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 processing appconfig overrides Jan 23 19:03:40.443552 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.443552 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.443655 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 processing appconfig overrides Jan 23 19:03:40.443909 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.443909 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.444868 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 processing appconfig overrides Jan 23 19:03:40.444868 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4434 INFO Proxy environment variables: Jan 23 19:03:40.447138 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.447138 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 19:03:40.447267 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 processing appconfig overrides Jan 23 19:03:40.544375 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4434 INFO https_proxy: Jan 23 19:03:40.643279 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4434 INFO http_proxy: Jan 23 19:03:40.741256 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4435 INFO no_proxy: Jan 23 19:03:40.830293 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.830440 amazon-ssm-agent[2220]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 19:03:40.830514 amazon-ssm-agent[2220]: 2026/01/23 19:03:40 processing appconfig overrides Jan 23 19:03:40.839863 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4436 INFO Checking if agent identity type OnPrem can be assumed Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4438 INFO Checking if agent identity type EC2 can be assumed Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4938 INFO Agent will take identity from EC2 Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4963 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4963 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4963 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4963 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4963 INFO [Registrar] Starting registrar module Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4990 INFO [EC2Identity] Checking disk for registration info Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4991 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.4991 INFO [EC2Identity] Generating registration keypair Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.7823 INFO [EC2Identity] Checking write access before registering Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.7828 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8300 INFO [EC2Identity] EC2 registration was successful. Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8301 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8301 INFO [CredentialRefresher] credentialRefresher has started Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8301 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8560 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 19:03:40.856383 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8561 INFO [CredentialRefresher] Credentials ready Jan 23 19:03:40.937213 amazon-ssm-agent[2220]: 2026-01-23 19:03:40.8563 INFO [CredentialRefresher] Next credential rotation will be in 29.999994687366666 minutes Jan 23 19:03:41.575992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:03:41.576800 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 23 19:03:41.578136 systemd[1]: Startup finished in 2.930s (kernel) + 7.061s (initrd) + 6.877s (userspace) = 16.870s. Jan 23 19:03:41.582270 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:03:41.852178 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:03:41.854085 systemd[1]: Started sshd@0-172.31.18.6:22-68.220.241.50:54580.service - OpenSSH per-connection server daemon (68.220.241.50:54580). Jan 23 19:03:41.869634 amazon-ssm-agent[2220]: 2026-01-23 19:03:41.8694 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 19:03:41.971056 amazon-ssm-agent[2220]: 2026-01-23 19:03:41.8727 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Jan 23 19:03:42.071415 amazon-ssm-agent[2220]: 2026-01-23 19:03:41.8727 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 19:03:42.336285 kubelet[2247]: E0123 19:03:42.336159 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:03:42.339158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:03:42.339298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:03:42.339636 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 268.5M memory peak. 
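Both kubelet start attempts in this log fail identically: /var/lib/kubelet/config.yaml does not exist. On a kubeadm-provisioned node that file is normally written during node bootstrap (`kubeadm join`), so the exit-and-restart cycle here reflects a node that has not yet been joined to a cluster. For illustration only, the file that eventually lands at that path is a KubeletConfiguration of roughly this shape (placeholder values, not this node's eventual config):

```yaml
# Illustrative placeholder - on this node the real file is generated by
# kubeadm during node bootstrap, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```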
Jan 23 19:03:42.361930 sshd[2257]: Accepted publickey for core from 68.220.241.50 port 54580 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:42.363735 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:42.371071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:03:42.372028 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:03:42.380549 systemd-logind[1959]: New session 1 of user core. Jan 23 19:03:42.390464 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:03:42.394200 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 19:03:42.411611 (systemd)[2278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:03:42.414570 systemd-logind[1959]: New session c1 of user core. Jan 23 19:03:42.575700 systemd[2278]: Queued start job for default target default.target. Jan 23 19:03:42.586046 systemd[2278]: Created slice app.slice - User Application Slice. Jan 23 19:03:42.586077 systemd[2278]: Reached target paths.target - Paths. Jan 23 19:03:42.586118 systemd[2278]: Reached target timers.target - Timers. Jan 23 19:03:42.587434 systemd[2278]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:03:42.598926 systemd[2278]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:03:42.599717 systemd[2278]: Reached target sockets.target - Sockets. Jan 23 19:03:42.599851 systemd[2278]: Reached target basic.target - Basic System. Jan 23 19:03:42.599931 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:03:42.600802 systemd[2278]: Reached target default.target - Main User Target. Jan 23 19:03:42.600835 systemd[2278]: Startup finished in 179ms. Jan 23 19:03:42.600953 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 23 19:03:42.960563 systemd[1]: Started sshd@1-172.31.18.6:22-68.220.241.50:54594.service - OpenSSH per-connection server daemon (68.220.241.50:54594). Jan 23 19:03:43.458000 sshd[2289]: Accepted publickey for core from 68.220.241.50 port 54594 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:43.459900 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:43.465232 systemd-logind[1959]: New session 2 of user core. Jan 23 19:03:43.471156 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:03:43.804717 sshd[2292]: Connection closed by 68.220.241.50 port 54594 Jan 23 19:03:43.806069 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:43.809850 systemd-logind[1959]: Session 2 logged out. Waiting for processes to exit. Jan 23 19:03:43.810351 systemd[1]: sshd@1-172.31.18.6:22-68.220.241.50:54594.service: Deactivated successfully. Jan 23 19:03:43.811804 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 19:03:43.813732 systemd-logind[1959]: Removed session 2. Jan 23 19:03:43.891602 systemd[1]: Started sshd@2-172.31.18.6:22-68.220.241.50:54600.service - OpenSSH per-connection server daemon (68.220.241.50:54600). Jan 23 19:03:44.386285 sshd[2298]: Accepted publickey for core from 68.220.241.50 port 54600 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:44.387642 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:44.393771 systemd-logind[1959]: New session 3 of user core. Jan 23 19:03:44.409137 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 19:03:44.731453 sshd[2301]: Connection closed by 68.220.241.50 port 54600 Jan 23 19:03:44.732137 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:44.737416 systemd[1]: sshd@2-172.31.18.6:22-68.220.241.50:54600.service: Deactivated successfully. 
Jan 23 19:03:44.739843 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:03:44.741229 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:03:44.743097 systemd-logind[1959]: Removed session 3. Jan 23 19:03:44.823708 systemd[1]: Started sshd@3-172.31.18.6:22-68.220.241.50:54608.service - OpenSSH per-connection server daemon (68.220.241.50:54608). Jan 23 19:03:45.320877 sshd[2307]: Accepted publickey for core from 68.220.241.50 port 54608 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:45.322378 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:45.327933 systemd-logind[1959]: New session 4 of user core. Jan 23 19:03:45.339142 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:03:45.674421 sshd[2310]: Connection closed by 68.220.241.50 port 54608 Jan 23 19:03:45.675060 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:45.679586 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit. Jan 23 19:03:45.680795 systemd[1]: sshd@3-172.31.18.6:22-68.220.241.50:54608.service: Deactivated successfully. Jan 23 19:03:45.682970 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 19:03:45.684811 systemd-logind[1959]: Removed session 4. Jan 23 19:03:45.761491 systemd[1]: Started sshd@4-172.31.18.6:22-68.220.241.50:54610.service - OpenSSH per-connection server daemon (68.220.241.50:54610). Jan 23 19:03:46.254934 sshd[2316]: Accepted publickey for core from 68.220.241.50 port 54610 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:46.256094 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:46.261746 systemd-logind[1959]: New session 5 of user core. Jan 23 19:03:46.270090 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 19:03:46.539753 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 19:03:46.540212 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:03:46.555105 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 23 19:03:46.631043 sshd[2319]: Connection closed by 68.220.241.50 port 54610 Jan 23 19:03:46.632158 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:46.636966 systemd[1]: sshd@4-172.31.18.6:22-68.220.241.50:54610.service: Deactivated successfully. Jan 23 19:03:46.639272 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 19:03:46.641862 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit. Jan 23 19:03:46.643139 systemd-logind[1959]: Removed session 5. Jan 23 19:03:46.721971 systemd[1]: Started sshd@5-172.31.18.6:22-68.220.241.50:54624.service - OpenSSH per-connection server daemon (68.220.241.50:54624). Jan 23 19:03:47.217102 sshd[2326]: Accepted publickey for core from 68.220.241.50 port 54624 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:47.218784 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:47.223694 systemd-logind[1959]: New session 6 of user core. Jan 23 19:03:47.234134 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:03:48.170222 systemd-resolved[1815]: Clock change detected. Flushing caches. 
Jan 23 19:03:48.244668 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 19:03:48.244980 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:03:48.250276 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 23 19:03:48.259291 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 19:03:48.259650 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:03:48.284622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:03:48.333028 augenrules[2353]: No rules Jan 23 19:03:48.333730 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:03:48.334077 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:03:48.336320 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 23 19:03:48.412383 sshd[2329]: Connection closed by 68.220.241.50 port 54624 Jan 23 19:03:48.413325 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:48.417407 systemd[1]: sshd@5-172.31.18.6:22-68.220.241.50:54624.service: Deactivated successfully. Jan 23 19:03:48.419334 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:03:48.421773 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:03:48.423074 systemd-logind[1959]: Removed session 6. Jan 23 19:03:48.499192 systemd[1]: Started sshd@6-172.31.18.6:22-68.220.241.50:54626.service - OpenSSH per-connection server daemon (68.220.241.50:54626). 
Jan 23 19:03:48.999108 sshd[2362]: Accepted publickey for core from 68.220.241.50 port 54626 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:03:48.999817 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:49.005463 systemd-logind[1959]: New session 7 of user core. Jan 23 19:03:49.010285 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 19:03:49.270349 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:03:49.270605 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:03:49.661957 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 19:03:49.680598 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 19:03:49.957623 dockerd[2385]: time="2026-01-23T19:03:49.957353296Z" level=info msg="Starting up" Jan 23 19:03:49.959756 dockerd[2385]: time="2026-01-23T19:03:49.959404185Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 19:03:49.970715 dockerd[2385]: time="2026-01-23T19:03:49.970669897Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 19:03:50.011739 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3917655258-merged.mount: Deactivated successfully. Jan 23 19:03:50.085180 dockerd[2385]: time="2026-01-23T19:03:50.084951685Z" level=info msg="Loading containers: start." Jan 23 19:03:50.098131 kernel: Initializing XFRM netlink socket Jan 23 19:03:50.329592 (udev-worker)[2406]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 19:03:50.373371 systemd-networkd[1809]: docker0: Link UP Jan 23 19:03:50.386382 dockerd[2385]: time="2026-01-23T19:03:50.386316595Z" level=info msg="Loading containers: done." Jan 23 19:03:50.410141 dockerd[2385]: time="2026-01-23T19:03:50.409933878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 19:03:50.410141 dockerd[2385]: time="2026-01-23T19:03:50.410017625Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 19:03:50.410323 dockerd[2385]: time="2026-01-23T19:03:50.410157272Z" level=info msg="Initializing buildkit" Jan 23 19:03:50.458176 dockerd[2385]: time="2026-01-23T19:03:50.458128875Z" level=info msg="Completed buildkit initialization" Jan 23 19:03:50.465930 dockerd[2385]: time="2026-01-23T19:03:50.465872512Z" level=info msg="Daemon has completed initialization" Jan 23 19:03:50.466049 dockerd[2385]: time="2026-01-23T19:03:50.465941433Z" level=info msg="API listen on /run/docker.sock" Jan 23 19:03:50.466334 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 19:03:51.422746 containerd[1988]: time="2026-01-23T19:03:51.422701989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 19:03:51.989710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569322029.mount: Deactivated successfully. Jan 23 19:03:53.345329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 19:03:53.348402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:03:53.633282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
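The containerd pull entries that follow report a compressed size and elapsed time for each image; e.g. the kube-apiserver pull a few entries down reports 30111311 bytes in 2.305772302s, which gives a rough effective throughput (a lower bound on network bandwidth, since the duration also includes registry round trips):

```shell
# Size and duration taken verbatim from the kube-apiserver "Pulled image" entry.
awk 'BEGIN { printf "%.1f MB/s\n", 30111311 / 2.305772302 / 1e6 }'
# prints: 13.1 MB/s
```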
Jan 23 19:03:53.642549 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:03:53.706002 kubelet[2663]: E0123 19:03:53.705902 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:03:53.711462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:03:53.711642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:03:53.712630 systemd[1]: kubelet.service: Consumed 219ms CPU time, 108.1M memory peak. Jan 23 19:03:53.721956 containerd[1988]: time="2026-01-23T19:03:53.721895404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:53.723198 containerd[1988]: time="2026-01-23T19:03:53.722998003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 19:03:53.724344 containerd[1988]: time="2026-01-23T19:03:53.724307164Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:53.727620 containerd[1988]: time="2026-01-23T19:03:53.727580043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:53.728830 containerd[1988]: time="2026-01-23T19:03:53.728529534Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id 
\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.305772302s" Jan 23 19:03:53.728830 containerd[1988]: time="2026-01-23T19:03:53.728567174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 19:03:53.729314 containerd[1988]: time="2026-01-23T19:03:53.729295155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 19:03:55.516971 containerd[1988]: time="2026-01-23T19:03:55.516911923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:55.519144 containerd[1988]: time="2026-01-23T19:03:55.518938127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 19:03:55.521475 containerd[1988]: time="2026-01-23T19:03:55.521443830Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:55.525128 containerd[1988]: time="2026-01-23T19:03:55.525080089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:03:55.526208 containerd[1988]: time="2026-01-23T19:03:55.525798403Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.796307881s"
Jan 23 19:03:55.526208 containerd[1988]: time="2026-01-23T19:03:55.525829336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 19:03:55.526568 containerd[1988]: time="2026-01-23T19:03:55.526551202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 23 19:03:57.062080 containerd[1988]: time="2026-01-23T19:03:57.062025700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:57.064250 containerd[1988]: time="2026-01-23T19:03:57.064193609Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Jan 23 19:03:57.066886 containerd[1988]: time="2026-01-23T19:03:57.066828091Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:57.071508 containerd[1988]: time="2026-01-23T19:03:57.070951315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:57.072237 containerd[1988]: time="2026-01-23T19:03:57.072185169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.545604589s"
Jan 23 19:03:57.072333 containerd[1988]: time="2026-01-23T19:03:57.072241421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 23 19:03:57.073021 containerd[1988]: time="2026-01-23T19:03:57.072870288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 19:03:58.184821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538860686.mount: Deactivated successfully.
Jan 23 19:03:58.841269 containerd[1988]: time="2026-01-23T19:03:58.841192881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:58.843678 containerd[1988]: time="2026-01-23T19:03:58.843536847Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Jan 23 19:03:58.846686 containerd[1988]: time="2026-01-23T19:03:58.846061166Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:58.849879 containerd[1988]: time="2026-01-23T19:03:58.849837743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:03:58.851864 containerd[1988]: time="2026-01-23T19:03:58.851825449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.778904022s"
Jan 23 19:03:58.851976 containerd[1988]: time="2026-01-23T19:03:58.851871777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 23 19:03:58.855484 containerd[1988]: time="2026-01-23T19:03:58.855452420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 19:03:59.375741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883112897.mount: Deactivated successfully.
Jan 23 19:04:00.698659 containerd[1988]: time="2026-01-23T19:04:00.698595420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:00.700748 containerd[1988]: time="2026-01-23T19:04:00.700357241Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jan 23 19:04:00.703011 containerd[1988]: time="2026-01-23T19:04:00.702960780Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:00.707300 containerd[1988]: time="2026-01-23T19:04:00.707258528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:00.708239 containerd[1988]: time="2026-01-23T19:04:00.708051752Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.852277187s"
Jan 23 19:04:00.708239 containerd[1988]: time="2026-01-23T19:04:00.708103906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 23 19:04:00.708794 containerd[1988]: time="2026-01-23T19:04:00.708523204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 19:04:01.238239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13379991.mount: Deactivated successfully.
Jan 23 19:04:01.256116 containerd[1988]: time="2026-01-23T19:04:01.255474934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:04:01.262515 containerd[1988]: time="2026-01-23T19:04:01.262441012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 23 19:04:01.271845 containerd[1988]: time="2026-01-23T19:04:01.271771514Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:04:01.282270 containerd[1988]: time="2026-01-23T19:04:01.282198611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:04:01.284921 containerd[1988]: time="2026-01-23T19:04:01.284412342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.858259ms"
Jan 23 19:04:01.284921 containerd[1988]: time="2026-01-23T19:04:01.284465113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 19:04:01.287248 containerd[1988]: time="2026-01-23T19:04:01.287210571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 19:04:02.176946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999159397.mount: Deactivated successfully.
Jan 23 19:04:03.962197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 19:04:03.965149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:04.314237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:04.327009 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:04:04.487772 kubelet[2802]: E0123 19:04:04.487676 2802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:04:04.492010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:04:04.492207 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:04:04.492693 systemd[1]: kubelet.service: Consumed 231ms CPU time, 110M memory peak.
Jan 23 19:04:05.322083 containerd[1988]: time="2026-01-23T19:04:05.322023965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:05.324199 containerd[1988]: time="2026-01-23T19:04:05.324127506Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227"
Jan 23 19:04:05.327422 containerd[1988]: time="2026-01-23T19:04:05.326582591Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:05.337621 containerd[1988]: time="2026-01-23T19:04:05.337570289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:05.339122 containerd[1988]: time="2026-01-23T19:04:05.339061240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.05180636s"
Jan 23 19:04:05.339389 containerd[1988]: time="2026-01-23T19:04:05.339358188Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 23 19:04:08.603652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:08.603887 systemd[1]: kubelet.service: Consumed 231ms CPU time, 110M memory peak.
Jan 23 19:04:08.606803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:08.648201 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-7.scope)...
Jan 23 19:04:08.648400 systemd[1]: Reloading...
Jan 23 19:04:08.794138 zram_generator::config[2886]: No configuration found.
Jan 23 19:04:09.118736 systemd[1]: Reloading finished in 469 ms.
Jan 23 19:04:09.202659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:09.207128 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 19:04:09.207431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:09.207495 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak.
Jan 23 19:04:09.209639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:09.508340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:09.519799 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 19:04:09.574467 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:04:09.574467 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 19:04:09.574467 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:04:09.576423 kubelet[2951]: I0123 19:04:09.576341 2951 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 19:04:10.289130 kubelet[2951]: I0123 19:04:10.287244 2951 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 19:04:10.289130 kubelet[2951]: I0123 19:04:10.287388 2951 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 19:04:10.289130 kubelet[2951]: I0123 19:04:10.288034 2951 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 19:04:10.370292 kubelet[2951]: I0123 19:04:10.368411 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 19:04:10.370940 kubelet[2951]: E0123 19:04:10.370708 2951 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 19:04:10.402147 kubelet[2951]: I0123 19:04:10.401797 2951 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 19:04:10.411756 kubelet[2951]: I0123 19:04:10.411717 2951 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 19:04:10.418660 kubelet[2951]: I0123 19:04:10.418588 2951 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 19:04:10.423852 kubelet[2951]: I0123 19:04:10.418662 2951 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 19:04:10.423852 kubelet[2951]: I0123 19:04:10.423831 2951 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 19:04:10.423852 kubelet[2951]: I0123 19:04:10.423848 2951 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 19:04:10.425633 kubelet[2951]: I0123 19:04:10.425577 2951 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 19:04:10.430072 kubelet[2951]: I0123 19:04:10.429836 2951 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 19:04:10.430072 kubelet[2951]: I0123 19:04:10.429888 2951 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 19:04:10.430072 kubelet[2951]: I0123 19:04:10.429913 2951 kubelet.go:386] "Adding apiserver pod source"
Jan 23 19:04:10.432788 kubelet[2951]: I0123 19:04:10.432379 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 19:04:10.443664 kubelet[2951]: E0123 19:04:10.443623 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-6&limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 19:04:10.448529 kubelet[2951]: I0123 19:04:10.448483 2951 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 19:04:10.449715 kubelet[2951]: I0123 19:04:10.449605 2951 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 19:04:10.450674 kubelet[2951]: W0123 19:04:10.450636 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 19:04:10.452174 kubelet[2951]: E0123 19:04:10.452116 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 19:04:10.456282 kubelet[2951]: I0123 19:04:10.456251 2951 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 19:04:10.456395 kubelet[2951]: I0123 19:04:10.456342 2951 server.go:1289] "Started kubelet"
Jan 23 19:04:10.464421 kubelet[2951]: I0123 19:04:10.464196 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 19:04:10.476220 kubelet[2951]: E0123 19:04:10.469211 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.6:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-6.188d718658d3704b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-6,UID:ip-172-31-18-6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-6,},FirstTimestamp:2026-01-23 19:04:10.456281163 +0000 UTC m=+0.930818428,LastTimestamp:2026-01-23 19:04:10.456281163 +0000 UTC m=+0.930818428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-6,}"
Jan 23 19:04:10.481052 kubelet[2951]: I0123 19:04:10.480990 2951 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 19:04:10.483050 kubelet[2951]: I0123 19:04:10.482946 2951 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 19:04:10.483626 kubelet[2951]: E0123 19:04:10.483601 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-6\" not found"
Jan 23 19:04:10.490875 kubelet[2951]: I0123 19:04:10.490831 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 19:04:10.496288 kubelet[2951]: E0123 19:04:10.495233 2951 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 19:04:10.496288 kubelet[2951]: I0123 19:04:10.495309 2951 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 19:04:10.496288 kubelet[2951]: I0123 19:04:10.495372 2951 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 19:04:10.497824 kubelet[2951]: I0123 19:04:10.497800 2951 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 19:04:10.513019 kubelet[2951]: I0123 19:04:10.512925 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 19:04:10.513467 kubelet[2951]: I0123 19:04:10.513447 2951 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 19:04:10.513812 kubelet[2951]: E0123 19:04:10.513787 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-6?timeout=10s\": dial tcp 172.31.18.6:6443: connect: connection refused" interval="200ms"
Jan 23 19:04:10.514517 kubelet[2951]: E0123 19:04:10.514473 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 19:04:10.515524 kubelet[2951]: I0123 19:04:10.515483 2951 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 19:04:10.522488 kubelet[2951]: I0123 19:04:10.522438 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 19:04:10.525581 kubelet[2951]: I0123 19:04:10.525481 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 19:04:10.525581 kubelet[2951]: I0123 19:04:10.525509 2951 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 19:04:10.525581 kubelet[2951]: I0123 19:04:10.525534 2951 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 19:04:10.525581 kubelet[2951]: I0123 19:04:10.525544 2951 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 19:04:10.525813 kubelet[2951]: E0123 19:04:10.525592 2951 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 19:04:10.531148 kubelet[2951]: I0123 19:04:10.530101 2951 factory.go:223] Registration of the containerd container factory successfully
Jan 23 19:04:10.531148 kubelet[2951]: I0123 19:04:10.530334 2951 factory.go:223] Registration of the systemd container factory successfully
Jan 23 19:04:10.533939 kubelet[2951]: E0123 19:04:10.533905 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 19:04:10.574130 kubelet[2951]: I0123 19:04:10.572417 2951 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 19:04:10.574745 kubelet[2951]: I0123 19:04:10.574687 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 19:04:10.576033 kubelet[2951]: I0123 19:04:10.575505 2951 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 19:04:10.581146 kubelet[2951]: I0123 19:04:10.580962 2951 policy_none.go:49] "None policy: Start"
Jan 23 19:04:10.581146 kubelet[2951]: I0123 19:04:10.580991 2951 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 19:04:10.581146 kubelet[2951]: I0123 19:04:10.581007 2951 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 19:04:10.584109 kubelet[2951]: E0123 19:04:10.584039 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-6\" not found"
Jan 23 19:04:10.592599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 19:04:10.605723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 19:04:10.609472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 19:04:10.615282 kubelet[2951]: E0123 19:04:10.615249 2951 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 19:04:10.615697 kubelet[2951]: I0123 19:04:10.615426 2951 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 19:04:10.615697 kubelet[2951]: I0123 19:04:10.615437 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 19:04:10.616292 kubelet[2951]: I0123 19:04:10.616248 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 19:04:10.617973 kubelet[2951]: E0123 19:04:10.617941 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 19:04:10.618721 kubelet[2951]: E0123 19:04:10.617991 2951 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-6\" not found"
Jan 23 19:04:10.647927 systemd[1]: Created slice kubepods-burstable-pod8fc1a2db4fdad12c6504c67fa1db16ab.slice - libcontainer container kubepods-burstable-pod8fc1a2db4fdad12c6504c67fa1db16ab.slice.
Jan 23 19:04:10.658266 kubelet[2951]: E0123 19:04:10.658233 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6"
Jan 23 19:04:10.664510 systemd[1]: Created slice kubepods-burstable-pod3b98c6a4f9554918c6b6e4eb70128810.slice - libcontainer container kubepods-burstable-pod3b98c6a4f9554918c6b6e4eb70128810.slice.
Jan 23 19:04:10.680375 kubelet[2951]: E0123 19:04:10.680004 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6"
Jan 23 19:04:10.686437 systemd[1]: Created slice kubepods-burstable-pod4ed3c5924a1237241201e712f36721ba.slice - libcontainer container kubepods-burstable-pod4ed3c5924a1237241201e712f36721ba.slice.
Jan 23 19:04:10.689080 kubelet[2951]: E0123 19:04:10.689042 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6"
Jan 23 19:04:10.703070 kubelet[2951]: I0123 19:04:10.703001 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-ca-certs\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6"
Jan 23 19:04:10.703070 kubelet[2951]: I0123 19:04:10.703066 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6"
Jan 23 19:04:10.703334 kubelet[2951]: I0123 19:04:10.703119 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6"
Jan 23 19:04:10.703334 kubelet[2951]: I0123 19:04:10.703151 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6"
Jan 23 19:04:10.703334 kubelet[2951]: I0123 19:04:10.703176 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6"
Jan 23 19:04:10.703334 kubelet[2951]: I0123 19:04:10.703199 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ed3c5924a1237241201e712f36721ba-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-6\" (UID: \"4ed3c5924a1237241201e712f36721ba\") " pod="kube-system/kube-scheduler-ip-172-31-18-6"
Jan 23 19:04:10.703334 kubelet[2951]: I0123 19:04:10.703218 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6"
Jan 23 19:04:10.704977 kubelet[2951]: I0123 19:04:10.703239 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6"
Jan 23 19:04:10.704977 kubelet[2951]: I0123 19:04:10.703262 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6"
Jan 23 19:04:10.715492 kubelet[2951]: E0123 19:04:10.715443 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-6?timeout=10s\": dial tcp 172.31.18.6:6443: connect: connection refused" interval="400ms"
Jan 23 19:04:10.721359 kubelet[2951]: I0123 19:04:10.721314 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-6"
Jan 23 19:04:10.721784 kubelet[2951]: E0123 19:04:10.721736 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.6:6443/api/v1/nodes\": dial tcp 172.31.18.6:6443: connect: connection refused" node="ip-172-31-18-6"
Jan 23 19:04:10.723706 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 19:04:10.926603 kubelet[2951]: I0123 19:04:10.926431 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-6"
Jan 23 19:04:10.933190 kubelet[2951]: E0123 19:04:10.933063 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.6:6443/api/v1/nodes\": dial tcp 172.31.18.6:6443: connect: connection refused" node="ip-172-31-18-6"
Jan 23 19:04:10.961902 containerd[1988]: time="2026-01-23T19:04:10.961803098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-6,Uid:8fc1a2db4fdad12c6504c67fa1db16ab,Namespace:kube-system,Attempt:0,}"
Jan 23 19:04:10.997743 containerd[1988]: time="2026-01-23T19:04:10.990252651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-6,Uid:3b98c6a4f9554918c6b6e4eb70128810,Namespace:kube-system,Attempt:0,}"
Jan 23 19:04:11.014393 containerd[1988]: time="2026-01-23T19:04:11.014274590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-6,Uid:4ed3c5924a1237241201e712f36721ba,Namespace:kube-system,Attempt:0,}"
Jan 23 19:04:11.119771 kubelet[2951]: E0123 19:04:11.117754 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-6?timeout=10s\": dial tcp 172.31.18.6:6443: connect: connection refused" interval="800ms"
Jan 23 19:04:11.197916 containerd[1988]: time="2026-01-23T19:04:11.197662496Z" level=info msg="connecting to shim e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb" address="unix:///run/containerd/s/dc9d85b8ea267dbbad9c541d16a9db0732c2afd9e48d54dbfb9108b5da1961fb" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:04:11.211065 containerd[1988]: time="2026-01-23T19:04:11.210997576Z" level=info msg="connecting to shim da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac" address="unix:///run/containerd/s/272a3786f5f00134c7dcdee3852659789c1fac20ab17210fa5cd71f3dff2b7f6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:04:11.216720 containerd[1988]: time="2026-01-23T19:04:11.216630235Z" level=info msg="connecting to shim ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f" address="unix:///run/containerd/s/e7be4b74e0dac4875d1f7c13d2f664f3b2a0f63649d071920d50671414848f87" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:04:11.337435 kubelet[2951]: I0123 19:04:11.337405 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-6"
Jan 23 19:04:11.338105 kubelet[2951]: E0123 19:04:11.337771 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.6:6443/api/v1/nodes\": dial tcp 172.31.18.6:6443: connect: connection refused" node="ip-172-31-18-6"
Jan 23 19:04:11.345254 systemd[1]: Started cri-containerd-da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac.scope - libcontainer container da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac.
Jan 23 19:04:11.348280 systemd[1]: Started cri-containerd-e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb.scope - libcontainer container e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb.
Jan 23 19:04:11.351242 systemd[1]: Started cri-containerd-ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f.scope - libcontainer container ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f.
Jan 23 19:04:11.363246 kubelet[2951]: E0123 19:04:11.363203 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 19:04:11.466328 containerd[1988]: time="2026-01-23T19:04:11.466059498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-6,Uid:8fc1a2db4fdad12c6504c67fa1db16ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb\""
Jan 23 19:04:11.480760 containerd[1988]: time="2026-01-23T19:04:11.480347645Z" level=info msg="CreateContainer within sandbox \"e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 19:04:11.500254 containerd[1988]: time="2026-01-23T19:04:11.500162627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-6,Uid:3b98c6a4f9554918c6b6e4eb70128810,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f\""
Jan 23 19:04:11.503126 containerd[1988]: time="2026-01-23T19:04:11.503061989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-6,Uid:4ed3c5924a1237241201e712f36721ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac\""
Jan 23 19:04:11.510763 containerd[1988]: time="2026-01-23T19:04:11.510722630Z" level=info msg="CreateContainer within sandbox \"da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 19:04:11.510927 containerd[1988]: time="2026-01-23T19:04:11.510904997Z" level=info msg="Container b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:04:11.517896 containerd[1988]: time="2026-01-23T19:04:11.517839532Z" level=info msg="CreateContainer within sandbox \"ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 19:04:11.536328 containerd[1988]: time="2026-01-23T19:04:11.536262205Z" level=info msg="Container fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:04:11.543336 containerd[1988]: time="2026-01-23T19:04:11.543167892Z" level=info msg="CreateContainer within sandbox \"e6fe8ed4e8d68f0fec7611de368808e56d71ba19daec3365256b8af5ddac47cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36\""
Jan 23 19:04:11.544719 containerd[1988]: time="2026-01-23T19:04:11.544504881Z" level=info msg="StartContainer for \"b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36\""
Jan 23 19:04:11.546260 containerd[1988]: time="2026-01-23T19:04:11.546214268Z" level=info msg="connecting to shim b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36" address="unix:///run/containerd/s/dc9d85b8ea267dbbad9c541d16a9db0732c2afd9e48d54dbfb9108b5da1961fb" protocol=ttrpc version=3
Jan 23 19:04:11.554703 containerd[1988]: time="2026-01-23T19:04:11.554645152Z" level=info msg="CreateContainer within sandbox \"da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a\""
Jan 23 19:04:11.556405 containerd[1988]: time="2026-01-23T19:04:11.555998259Z" level=info msg="StartContainer for \"fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a\""
Jan 23 19:04:11.565307 containerd[1988]: time="2026-01-23T19:04:11.565255243Z" level=info msg="Container e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:04:11.568199 containerd[1988]: time="2026-01-23T19:04:11.568155743Z" level=info msg="connecting to shim fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a" address="unix:///run/containerd/s/272a3786f5f00134c7dcdee3852659789c1fac20ab17210fa5cd71f3dff2b7f6" protocol=ttrpc version=3
Jan 23 19:04:11.585053 systemd[1]: Started cri-containerd-b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36.scope - libcontainer container b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36.
Jan 23 19:04:11.608738 containerd[1988]: time="2026-01-23T19:04:11.607976448Z" level=info msg="CreateContainer within sandbox \"ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca\"" Jan 23 19:04:11.612031 containerd[1988]: time="2026-01-23T19:04:11.611980929Z" level=info msg="StartContainer for \"e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca\"" Jan 23 19:04:11.613188 systemd[1]: Started cri-containerd-fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a.scope - libcontainer container fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a. Jan 23 19:04:11.616900 containerd[1988]: time="2026-01-23T19:04:11.616835531Z" level=info msg="connecting to shim e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca" address="unix:///run/containerd/s/e7be4b74e0dac4875d1f7c13d2f664f3b2a0f63649d071920d50671414848f87" protocol=ttrpc version=3 Jan 23 19:04:11.647002 systemd[1]: Started cri-containerd-e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca.scope - libcontainer container e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca. 
Jan 23 19:04:11.728693 containerd[1988]: time="2026-01-23T19:04:11.727614805Z" level=info msg="StartContainer for \"b30bdb8fa677f1b9d54c38acd933fb75103fd9043d415097be3d26e1fa5f9e36\" returns successfully" Jan 23 19:04:11.759867 containerd[1988]: time="2026-01-23T19:04:11.759827567Z" level=info msg="StartContainer for \"fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a\" returns successfully" Jan 23 19:04:11.764205 containerd[1988]: time="2026-01-23T19:04:11.764148250Z" level=info msg="StartContainer for \"e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca\" returns successfully" Jan 23 19:04:11.899425 kubelet[2951]: E0123 19:04:11.899382 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:04:11.918516 kubelet[2951]: E0123 19:04:11.918387 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-6?timeout=10s\": dial tcp 172.31.18.6:6443: connect: connection refused" interval="1.6s" Jan 23 19:04:11.944778 kubelet[2951]: E0123 19:04:11.944730 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:04:12.006247 kubelet[2951]: E0123 19:04:12.006130 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-6&limit=500&resourceVersion=0\": dial tcp 
172.31.18.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:04:12.141320 kubelet[2951]: I0123 19:04:12.141289 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-6" Jan 23 19:04:12.142076 kubelet[2951]: E0123 19:04:12.142043 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.6:6443/api/v1/nodes\": dial tcp 172.31.18.6:6443: connect: connection refused" node="ip-172-31-18-6" Jan 23 19:04:12.626917 kubelet[2951]: E0123 19:04:12.626883 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:12.633036 kubelet[2951]: E0123 19:04:12.633004 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:12.636157 kubelet[2951]: E0123 19:04:12.636131 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:13.639954 kubelet[2951]: E0123 19:04:13.639525 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:13.642186 kubelet[2951]: E0123 19:04:13.641742 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:13.643285 kubelet[2951]: E0123 19:04:13.642979 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:13.745464 kubelet[2951]: I0123 19:04:13.745299 2951 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-18-6" Jan 23 19:04:14.642794 kubelet[2951]: E0123 19:04:14.642761 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:14.644455 kubelet[2951]: E0123 19:04:14.643524 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:14.824822 kubelet[2951]: E0123 19:04:14.824586 2951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-6\" not found" node="ip-172-31-18-6" Jan 23 19:04:14.914621 kubelet[2951]: I0123 19:04:14.914240 2951 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-6" Jan 23 19:04:14.987873 kubelet[2951]: I0123 19:04:14.987633 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:14.995436 kubelet[2951]: E0123 19:04:14.995194 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:14.995436 kubelet[2951]: I0123 19:04:14.995220 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:14.997547 kubelet[2951]: E0123 19:04:14.997483 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:14.997547 kubelet[2951]: I0123 19:04:14.997508 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:14.999596 kubelet[2951]: E0123 
19:04:14.999557 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:15.451307 kubelet[2951]: I0123 19:04:15.451148 2951 apiserver.go:52] "Watching apiserver" Jan 23 19:04:15.495779 kubelet[2951]: I0123 19:04:15.495744 2951 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:04:16.193470 kubelet[2951]: I0123 19:04:16.193436 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:17.524151 systemd[1]: Reload requested from client PID 3234 ('systemctl') (unit session-7.scope)... Jan 23 19:04:17.524175 systemd[1]: Reloading... Jan 23 19:04:17.662133 zram_generator::config[3277]: No configuration found. Jan 23 19:04:18.020391 systemd[1]: Reloading finished in 495 ms. Jan 23 19:04:18.058700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:04:18.081735 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:04:18.082282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:04:18.082457 systemd[1]: kubelet.service: Consumed 1.378s CPU time, 129.2M memory peak. Jan 23 19:04:18.084978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:04:18.442959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:04:18.463839 (kubelet)[3338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:04:18.562309 kubelet[3338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 19:04:18.562309 kubelet[3338]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:04:18.562309 kubelet[3338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:04:18.562309 kubelet[3338]: I0123 19:04:18.561520 3338 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:04:18.571482 kubelet[3338]: I0123 19:04:18.571437 3338 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:04:18.571482 kubelet[3338]: I0123 19:04:18.571470 3338 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:04:18.571801 kubelet[3338]: I0123 19:04:18.571776 3338 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:04:18.575566 kubelet[3338]: I0123 19:04:18.575511 3338 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 19:04:18.580272 kubelet[3338]: I0123 19:04:18.579391 3338 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:04:18.599899 kubelet[3338]: I0123 19:04:18.599869 3338 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:04:18.600772 sudo[3352]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 19:04:18.601981 sudo[3352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 19:04:18.608784 kubelet[3338]: I0123 19:04:18.608697 3338 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:04:18.609712 kubelet[3338]: I0123 19:04:18.609191 3338 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:04:18.609712 kubelet[3338]: I0123 19:04:18.609230 3338 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:04:18.609712 kubelet[3338]: I0123 19:04:18.609436 3338 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
19:04:18.609712 kubelet[3338]: I0123 19:04:18.609450 3338 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:04:18.609712 kubelet[3338]: I0123 19:04:18.609509 3338 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:04:18.610029 kubelet[3338]: I0123 19:04:18.610011 3338 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:04:18.610803 kubelet[3338]: I0123 19:04:18.610784 3338 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:04:18.610948 kubelet[3338]: I0123 19:04:18.610939 3338 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:04:18.611719 kubelet[3338]: I0123 19:04:18.611703 3338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:04:18.615454 kubelet[3338]: I0123 19:04:18.615427 3338 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:04:18.616109 kubelet[3338]: I0123 19:04:18.616072 3338 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:04:18.628298 kubelet[3338]: I0123 19:04:18.628264 3338 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:04:18.628431 kubelet[3338]: I0123 19:04:18.628333 3338 server.go:1289] "Started kubelet" Jan 23 19:04:18.631972 kubelet[3338]: I0123 19:04:18.631664 3338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:04:18.646241 kubelet[3338]: I0123 19:04:18.646180 3338 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:04:18.650995 kubelet[3338]: I0123 19:04:18.649389 3338 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:04:18.657052 kubelet[3338]: I0123 19:04:18.656427 3338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:04:18.657962 kubelet[3338]: I0123 19:04:18.657939 3338 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:04:18.658814 kubelet[3338]: I0123 19:04:18.658427 3338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:04:18.663149 kubelet[3338]: I0123 19:04:18.663122 3338 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:04:18.664943 kubelet[3338]: E0123 19:04:18.663813 3338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-6\" not found" Jan 23 19:04:18.670605 kubelet[3338]: I0123 19:04:18.670456 3338 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:04:18.671239 kubelet[3338]: I0123 19:04:18.671117 3338 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:04:18.681447 kubelet[3338]: I0123 19:04:18.681324 3338 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:04:18.691300 kubelet[3338]: E0123 19:04:18.690035 3338 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:04:18.694234 kubelet[3338]: I0123 19:04:18.694139 3338 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:04:18.694957 kubelet[3338]: I0123 19:04:18.694396 3338 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:04:18.721190 kubelet[3338]: I0123 19:04:18.721147 3338 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 19:04:18.729361 kubelet[3338]: I0123 19:04:18.729328 3338 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 19:04:18.729529 kubelet[3338]: I0123 19:04:18.729518 3338 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:04:18.729624 kubelet[3338]: I0123 19:04:18.729610 3338 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:04:18.729690 kubelet[3338]: I0123 19:04:18.729682 3338 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:04:18.729847 kubelet[3338]: E0123 19:04:18.729819 3338 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:04:18.830019 kubelet[3338]: E0123 19:04:18.829987 3338 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:04:18.841280 kubelet[3338]: I0123 19:04:18.841254 3338 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841437 3338 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841463 3338 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841647 3338 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841660 3338 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841683 3338 policy_none.go:49] "None policy: Start" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841695 3338 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841708 3338 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:04:18.841940 kubelet[3338]: I0123 19:04:18.841842 3338 state_mem.go:75] "Updated machine memory state" Jan 23 19:04:18.858218 kubelet[3338]: E0123 19:04:18.857400 
3338 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:04:18.858897 kubelet[3338]: I0123 19:04:18.858851 3338 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:04:18.859263 kubelet[3338]: I0123 19:04:18.858873 3338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:04:18.860357 kubelet[3338]: I0123 19:04:18.860336 3338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:04:18.870253 kubelet[3338]: E0123 19:04:18.870196 3338 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:04:18.983182 kubelet[3338]: I0123 19:04:18.982731 3338 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-6" Jan 23 19:04:19.000711 kubelet[3338]: I0123 19:04:19.000203 3338 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-6" Jan 23 19:04:19.000711 kubelet[3338]: I0123 19:04:19.000289 3338 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-6" Jan 23 19:04:19.030958 kubelet[3338]: I0123 19:04:19.030912 3338 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:19.033921 kubelet[3338]: I0123 19:04:19.032392 3338 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.033921 kubelet[3338]: I0123 19:04:19.033739 3338 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:19.044968 kubelet[3338]: E0123 19:04:19.044072 3338 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-6\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:19.075914 kubelet[3338]: I0123 
19:04:19.075871 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:19.076066 kubelet[3338]: I0123 19:04:19.075944 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.076216 kubelet[3338]: I0123 19:04:19.076032 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.076544 kubelet[3338]: I0123 19:04:19.076456 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ed3c5924a1237241201e712f36721ba-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-6\" (UID: \"4ed3c5924a1237241201e712f36721ba\") " pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:19.076628 kubelet[3338]: I0123 19:04:19.076574 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-ca-certs\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 
19:04:19.076628 kubelet[3338]: I0123 19:04:19.076620 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fc1a2db4fdad12c6504c67fa1db16ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-6\" (UID: \"8fc1a2db4fdad12c6504c67fa1db16ab\") " pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:19.076717 kubelet[3338]: I0123 19:04:19.076652 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.076717 kubelet[3338]: I0123 19:04:19.076693 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.076808 kubelet[3338]: I0123 19:04:19.076719 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b98c6a4f9554918c6b6e4eb70128810-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-6\" (UID: \"3b98c6a4f9554918c6b6e4eb70128810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-6" Jan 23 19:04:19.210862 sudo[3352]: pam_unix(sudo:session): session closed for user root Jan 23 19:04:19.613329 kubelet[3338]: I0123 19:04:19.613231 3338 apiserver.go:52] "Watching apiserver" Jan 23 19:04:19.671805 kubelet[3338]: I0123 19:04:19.671743 3338 desired_state_of_world_populator.go:158] "Finished populating initial desired state of 
world" Jan 23 19:04:19.782430 kubelet[3338]: I0123 19:04:19.782280 3338 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:19.782932 kubelet[3338]: I0123 19:04:19.782886 3338 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:19.796977 kubelet[3338]: E0123 19:04:19.795615 3338 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-6\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-6" Jan 23 19:04:19.799285 kubelet[3338]: E0123 19:04:19.799250 3338 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-6\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-6" Jan 23 19:04:19.836394 kubelet[3338]: I0123 19:04:19.835077 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-6" podStartSLOduration=0.835056672 podStartE2EDuration="835.056672ms" podCreationTimestamp="2026-01-23 19:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:19.834262499 +0000 UTC m=+1.351628294" watchObservedRunningTime="2026-01-23 19:04:19.835056672 +0000 UTC m=+1.352422467" Jan 23 19:04:19.836394 kubelet[3338]: I0123 19:04:19.836275 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-6" podStartSLOduration=3.836248715 podStartE2EDuration="3.836248715s" podCreationTimestamp="2026-01-23 19:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:19.820311395 +0000 UTC m=+1.337677191" watchObservedRunningTime="2026-01-23 19:04:19.836248715 +0000 UTC m=+1.353614512" Jan 23 19:04:19.867210 kubelet[3338]: I0123 19:04:19.866445 3338 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-6" podStartSLOduration=0.866424997 podStartE2EDuration="866.424997ms" podCreationTimestamp="2026-01-23 19:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:19.847854569 +0000 UTC m=+1.365220364" watchObservedRunningTime="2026-01-23 19:04:19.866424997 +0000 UTC m=+1.383790793" Jan 23 19:04:21.236687 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 23 19:04:21.313082 sshd[2365]: Connection closed by 68.220.241.50 port 54626 Jan 23 19:04:21.316974 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:21.321717 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:04:21.322725 systemd[1]: sshd@6-172.31.18.6:22-68.220.241.50:54626.service: Deactivated successfully. Jan 23 19:04:21.325624 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:04:21.325871 systemd[1]: session-7.scope: Consumed 5.194s CPU time, 206.8M memory peak. Jan 23 19:04:21.328428 systemd-logind[1959]: Removed session 7. Jan 23 19:04:22.997676 kubelet[3338]: I0123 19:04:22.997522 3338 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:04:22.998458 containerd[1988]: time="2026-01-23T19:04:22.998427172Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:04:22.999827 kubelet[3338]: I0123 19:04:22.998697 3338 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:04:23.809360 systemd[1]: Created slice kubepods-besteffort-pod107ebc15_9cbc_41d2_99c0_094e794227c1.slice - libcontainer container kubepods-besteffort-pod107ebc15_9cbc_41d2_99c0_094e794227c1.slice. 
Jan 23 19:04:23.815284 systemd[1]: Created slice kubepods-burstable-pod1e522511_4190_44ff_9b14_179f7e0f284e.slice - libcontainer container kubepods-burstable-pod1e522511_4190_44ff_9b14_179f7e0f284e.slice. Jan 23 19:04:23.822667 kubelet[3338]: I0123 19:04:23.822632 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-run\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.822812 kubelet[3338]: I0123 19:04:23.822686 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/107ebc15-9cbc-41d2-99c0-094e794227c1-xtables-lock\") pod \"kube-proxy-9zgbt\" (UID: \"107ebc15-9cbc-41d2-99c0-094e794227c1\") " pod="kube-system/kube-proxy-9zgbt" Jan 23 19:04:23.822812 kubelet[3338]: I0123 19:04:23.822722 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/107ebc15-9cbc-41d2-99c0-094e794227c1-lib-modules\") pod \"kube-proxy-9zgbt\" (UID: \"107ebc15-9cbc-41d2-99c0-094e794227c1\") " pod="kube-system/kube-proxy-9zgbt" Jan 23 19:04:23.822812 kubelet[3338]: I0123 19:04:23.822748 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-hostproc\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.822812 kubelet[3338]: I0123 19:04:23.822775 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-cgroup\") pod \"cilium-2jgjj\" (UID: 
\"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.822812 kubelet[3338]: I0123 19:04:23.822800 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-etc-cni-netd\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 19:04:23.822842 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-kernel\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 19:04:23.822873 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cni-path\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 19:04:23.822903 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-lib-modules\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 19:04:23.822931 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-config-path\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 
19:04:23.822971 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-net\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.823026 kubelet[3338]: I0123 19:04:23.823003 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/107ebc15-9cbc-41d2-99c0-094e794227c1-kube-proxy\") pod \"kube-proxy-9zgbt\" (UID: \"107ebc15-9cbc-41d2-99c0-094e794227c1\") " pod="kube-system/kube-proxy-9zgbt" Jan 23 19:04:23.826441 kubelet[3338]: I0123 19:04:23.823044 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-xtables-lock\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.826441 kubelet[3338]: I0123 19:04:23.823069 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8fb4\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-kube-api-access-p8fb4\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.826441 kubelet[3338]: I0123 19:04:23.824080 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glf2x\" (UniqueName: \"kubernetes.io/projected/107ebc15-9cbc-41d2-99c0-094e794227c1-kube-api-access-glf2x\") pod \"kube-proxy-9zgbt\" (UID: \"107ebc15-9cbc-41d2-99c0-094e794227c1\") " pod="kube-system/kube-proxy-9zgbt" Jan 23 19:04:23.826441 kubelet[3338]: I0123 19:04:23.824160 3338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-bpf-maps\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.826441 kubelet[3338]: I0123 19:04:23.824186 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e522511-4190-44ff-9b14-179f7e0f284e-clustermesh-secrets\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.826703 kubelet[3338]: I0123 19:04:23.824208 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-hubble-tls\") pod \"cilium-2jgjj\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " pod="kube-system/cilium-2jgjj" Jan 23 19:04:23.915069 systemd[1]: Created slice kubepods-besteffort-pod491d016d_1bd1_4ee6_a195_336105b15bbf.slice - libcontainer container kubepods-besteffort-pod491d016d_1bd1_4ee6_a195_336105b15bbf.slice. 
Jan 23 19:04:24.025950 kubelet[3338]: I0123 19:04:24.025888 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntg4\" (UniqueName: \"kubernetes.io/projected/491d016d-1bd1-4ee6-a195-336105b15bbf-kube-api-access-bntg4\") pod \"cilium-operator-6c4d7847fc-7lv5g\" (UID: \"491d016d-1bd1-4ee6-a195-336105b15bbf\") " pod="kube-system/cilium-operator-6c4d7847fc-7lv5g" Jan 23 19:04:24.025950 kubelet[3338]: I0123 19:04:24.025937 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/491d016d-1bd1-4ee6-a195-336105b15bbf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7lv5g\" (UID: \"491d016d-1bd1-4ee6-a195-336105b15bbf\") " pod="kube-system/cilium-operator-6c4d7847fc-7lv5g" Jan 23 19:04:24.138039 containerd[1988]: time="2026-01-23T19:04:24.137908087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jgjj,Uid:1e522511-4190-44ff-9b14-179f7e0f284e,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:24.138039 containerd[1988]: time="2026-01-23T19:04:24.138367091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zgbt,Uid:107ebc15-9cbc-41d2-99c0-094e794227c1,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:24.198271 containerd[1988]: time="2026-01-23T19:04:24.198064406Z" level=info msg="connecting to shim c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab" address="unix:///run/containerd/s/1ed3c722d6871ace000fcc739cc65f2bf21c4859119d9c7a110ab0981adeb9ec" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:24.200601 containerd[1988]: time="2026-01-23T19:04:24.200557930Z" level=info msg="connecting to shim e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:24.224934 containerd[1988]: 
time="2026-01-23T19:04:24.222448971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7lv5g,Uid:491d016d-1bd1-4ee6-a195-336105b15bbf,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:24.242513 systemd[1]: Started cri-containerd-c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab.scope - libcontainer container c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab. Jan 23 19:04:24.249843 systemd[1]: Started cri-containerd-e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e.scope - libcontainer container e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e. Jan 23 19:04:24.276231 containerd[1988]: time="2026-01-23T19:04:24.275279586Z" level=info msg="connecting to shim 65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113" address="unix:///run/containerd/s/53864a75d2b17426066a765e322a954bf3d5a47fba833ff1812eefbe30168c15" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:24.319951 containerd[1988]: time="2026-01-23T19:04:24.319905922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zgbt,Uid:107ebc15-9cbc-41d2-99c0-094e794227c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab\"" Jan 23 19:04:24.333848 containerd[1988]: time="2026-01-23T19:04:24.333297841Z" level=info msg="CreateContainer within sandbox \"c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:04:24.333572 systemd[1]: Started cri-containerd-65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113.scope - libcontainer container 65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113. 
Jan 23 19:04:24.345113 containerd[1988]: time="2026-01-23T19:04:24.344681253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jgjj,Uid:1e522511-4190-44ff-9b14-179f7e0f284e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\"" Jan 23 19:04:24.349368 containerd[1988]: time="2026-01-23T19:04:24.349200457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 19:04:24.365385 containerd[1988]: time="2026-01-23T19:04:24.365338536Z" level=info msg="Container 8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:24.419671 containerd[1988]: time="2026-01-23T19:04:24.419550661Z" level=info msg="CreateContainer within sandbox \"c6e55cfc616c2aaff442f199f7b3ef83b2b0348b71320fed66bd7ade9f74ddab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb\"" Jan 23 19:04:24.423712 containerd[1988]: time="2026-01-23T19:04:24.423675306Z" level=info msg="StartContainer for \"8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb\"" Jan 23 19:04:24.427361 containerd[1988]: time="2026-01-23T19:04:24.427321631Z" level=info msg="connecting to shim 8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb" address="unix:///run/containerd/s/1ed3c722d6871ace000fcc739cc65f2bf21c4859119d9c7a110ab0981adeb9ec" protocol=ttrpc version=3 Jan 23 19:04:24.439001 containerd[1988]: time="2026-01-23T19:04:24.438961052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7lv5g,Uid:491d016d-1bd1-4ee6-a195-336105b15bbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\"" Jan 23 19:04:24.459356 systemd[1]: Started 
cri-containerd-8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb.scope - libcontainer container 8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb. Jan 23 19:04:24.463178 update_engine[1960]: I20260123 19:04:24.463122 1960 update_attempter.cc:509] Updating boot flags... Jan 23 19:04:24.570431 containerd[1988]: time="2026-01-23T19:04:24.570218798Z" level=info msg="StartContainer for \"8c47564b87f98d29001afa462cd4df39afc7b61fd44eb153c6dc5242fd4f05eb\" returns successfully" Jan 23 19:04:25.662080 kubelet[3338]: I0123 19:04:25.662016 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zgbt" podStartSLOduration=2.661981013 podStartE2EDuration="2.661981013s" podCreationTimestamp="2026-01-23 19:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:24.876945888 +0000 UTC m=+6.394311684" watchObservedRunningTime="2026-01-23 19:04:25.661981013 +0000 UTC m=+7.179346806" Jan 23 19:04:30.932529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423578694.mount: Deactivated successfully. 
Jan 23 19:04:33.508624 containerd[1988]: time="2026-01-23T19:04:33.508340359Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:33.510421 containerd[1988]: time="2026-01-23T19:04:33.510381617Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 19:04:33.512786 containerd[1988]: time="2026-01-23T19:04:33.512518969Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:33.513875 containerd[1988]: time="2026-01-23T19:04:33.513845319Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.164599279s" Jan 23 19:04:33.513983 containerd[1988]: time="2026-01-23T19:04:33.513968759Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 19:04:33.518140 containerd[1988]: time="2026-01-23T19:04:33.518113455Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 19:04:33.523836 containerd[1988]: time="2026-01-23T19:04:33.523797188Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:04:33.569622 containerd[1988]: time="2026-01-23T19:04:33.569077836Z" level=info msg="Container eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:33.582994 containerd[1988]: time="2026-01-23T19:04:33.582951574Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\"" Jan 23 19:04:33.583600 containerd[1988]: time="2026-01-23T19:04:33.583539305Z" level=info msg="StartContainer for \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\"" Jan 23 19:04:33.585737 containerd[1988]: time="2026-01-23T19:04:33.585706622Z" level=info msg="connecting to shim eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" protocol=ttrpc version=3 Jan 23 19:04:33.668310 systemd[1]: Started cri-containerd-eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b.scope - libcontainer container eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b. Jan 23 19:04:33.708075 containerd[1988]: time="2026-01-23T19:04:33.708033687Z" level=info msg="StartContainer for \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" returns successfully" Jan 23 19:04:33.734264 systemd[1]: cri-containerd-eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b.scope: Deactivated successfully. Jan 23 19:04:33.734529 systemd[1]: cri-containerd-eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b.scope: Consumed 25ms CPU time, 6.5M memory peak, 3.2M written to disk. 
Jan 23 19:04:33.747653 containerd[1988]: time="2026-01-23T19:04:33.747591349Z" level=info msg="received container exit event container_id:\"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" id:\"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" pid:3940 exited_at:{seconds:1769195073 nanos:739049293}" Jan 23 19:04:33.882282 containerd[1988]: time="2026-01-23T19:04:33.881263503Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:04:33.896229 containerd[1988]: time="2026-01-23T19:04:33.896185750Z" level=info msg="Container 302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:33.916856 containerd[1988]: time="2026-01-23T19:04:33.916791961Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\"" Jan 23 19:04:33.918030 containerd[1988]: time="2026-01-23T19:04:33.917422658Z" level=info msg="StartContainer for \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\"" Jan 23 19:04:33.918658 containerd[1988]: time="2026-01-23T19:04:33.918627786Z" level=info msg="connecting to shim 302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" protocol=ttrpc version=3 Jan 23 19:04:33.947331 systemd[1]: Started cri-containerd-302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45.scope - libcontainer container 302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45. 
Jan 23 19:04:33.993231 containerd[1988]: time="2026-01-23T19:04:33.993185894Z" level=info msg="StartContainer for \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" returns successfully" Jan 23 19:04:34.011706 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:04:34.012584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:04:34.012839 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:04:34.017179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:04:34.021907 systemd[1]: cri-containerd-302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45.scope: Deactivated successfully. Jan 23 19:04:34.025518 containerd[1988]: time="2026-01-23T19:04:34.025474846Z" level=info msg="received container exit event container_id:\"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" id:\"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" pid:3984 exited_at:{seconds:1769195074 nanos:22455507}" Jan 23 19:04:34.051578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:04:34.553788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b-rootfs.mount: Deactivated successfully. Jan 23 19:04:34.560401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573831679.mount: Deactivated successfully. 
Jan 23 19:04:34.916412 containerd[1988]: time="2026-01-23T19:04:34.916291841Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:04:34.972680 containerd[1988]: time="2026-01-23T19:04:34.972212492Z" level=info msg="Container cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:35.002488 containerd[1988]: time="2026-01-23T19:04:35.002434094Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\"" Jan 23 19:04:35.004197 containerd[1988]: time="2026-01-23T19:04:35.003343267Z" level=info msg="StartContainer for \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\"" Jan 23 19:04:35.006221 containerd[1988]: time="2026-01-23T19:04:35.006184325Z" level=info msg="connecting to shim cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" protocol=ttrpc version=3 Jan 23 19:04:35.050519 systemd[1]: Started cri-containerd-cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab.scope - libcontainer container cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab. Jan 23 19:04:35.141681 containerd[1988]: time="2026-01-23T19:04:35.141630991Z" level=info msg="StartContainer for \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" returns successfully" Jan 23 19:04:35.153635 systemd[1]: cri-containerd-cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab.scope: Deactivated successfully. 
Jan 23 19:04:35.153981 systemd[1]: cri-containerd-cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab.scope: Consumed 36ms CPU time, 6.1M memory peak, 1.1M read from disk. Jan 23 19:04:35.160750 containerd[1988]: time="2026-01-23T19:04:35.160695192Z" level=info msg="received container exit event container_id:\"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" id:\"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" pid:4038 exited_at:{seconds:1769195075 nanos:160450619}" Jan 23 19:04:35.549918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab-rootfs.mount: Deactivated successfully. Jan 23 19:04:35.775078 containerd[1988]: time="2026-01-23T19:04:35.775018057Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:35.777255 containerd[1988]: time="2026-01-23T19:04:35.777018800Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 19:04:35.779497 containerd[1988]: time="2026-01-23T19:04:35.779460336Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:35.781021 containerd[1988]: time="2026-01-23T19:04:35.780985042Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.262840454s" 
Jan 23 19:04:35.781122 containerd[1988]: time="2026-01-23T19:04:35.781024003Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 19:04:35.787034 containerd[1988]: time="2026-01-23T19:04:35.787000419Z" level=info msg="CreateContainer within sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 19:04:35.800484 containerd[1988]: time="2026-01-23T19:04:35.800214162Z" level=info msg="Container aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:35.820719 containerd[1988]: time="2026-01-23T19:04:35.820657889Z" level=info msg="CreateContainer within sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\"" Jan 23 19:04:35.821519 containerd[1988]: time="2026-01-23T19:04:35.821438559Z" level=info msg="StartContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\"" Jan 23 19:04:35.822914 containerd[1988]: time="2026-01-23T19:04:35.822858464Z" level=info msg="connecting to shim aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc" address="unix:///run/containerd/s/53864a75d2b17426066a765e322a954bf3d5a47fba833ff1812eefbe30168c15" protocol=ttrpc version=3 Jan 23 19:04:35.852353 systemd[1]: Started cri-containerd-aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc.scope - libcontainer container aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc. 
Jan 23 19:04:35.892515 containerd[1988]: time="2026-01-23T19:04:35.892473720Z" level=info msg="StartContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" returns successfully" Jan 23 19:04:35.918010 containerd[1988]: time="2026-01-23T19:04:35.917015342Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:04:35.938823 containerd[1988]: time="2026-01-23T19:04:35.936992823Z" level=info msg="Container b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:35.957980 containerd[1988]: time="2026-01-23T19:04:35.957939892Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\"" Jan 23 19:04:35.960440 containerd[1988]: time="2026-01-23T19:04:35.960406010Z" level=info msg="StartContainer for \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\"" Jan 23 19:04:35.962816 containerd[1988]: time="2026-01-23T19:04:35.962618678Z" level=info msg="connecting to shim b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" protocol=ttrpc version=3 Jan 23 19:04:36.001735 kubelet[3338]: I0123 19:04:36.001651 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7lv5g" podStartSLOduration=1.661446566 podStartE2EDuration="13.001631055s" podCreationTimestamp="2026-01-23 19:04:23 +0000 UTC" firstStartedPulling="2026-01-23 19:04:24.4415686 +0000 UTC m=+5.958934386" lastFinishedPulling="2026-01-23 19:04:35.781753103 +0000 UTC m=+17.299118875" 
observedRunningTime="2026-01-23 19:04:35.925732503 +0000 UTC m=+17.443098297" watchObservedRunningTime="2026-01-23 19:04:36.001631055 +0000 UTC m=+17.518996844" Jan 23 19:04:36.047353 systemd[1]: Started cri-containerd-b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997.scope - libcontainer container b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997. Jan 23 19:04:36.112673 containerd[1988]: time="2026-01-23T19:04:36.112561751Z" level=info msg="StartContainer for \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" returns successfully" Jan 23 19:04:36.116213 systemd[1]: cri-containerd-b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997.scope: Deactivated successfully. Jan 23 19:04:36.123446 containerd[1988]: time="2026-01-23T19:04:36.123302291Z" level=info msg="received container exit event container_id:\"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" id:\"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" pid:4120 exited_at:{seconds:1769195076 nanos:121560519}" Jan 23 19:04:36.930084 containerd[1988]: time="2026-01-23T19:04:36.930033691Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:04:36.954340 containerd[1988]: time="2026-01-23T19:04:36.954295613Z" level=info msg="Container 9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:36.972642 containerd[1988]: time="2026-01-23T19:04:36.972491094Z" level=info msg="CreateContainer within sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\"" Jan 23 19:04:36.973898 containerd[1988]: time="2026-01-23T19:04:36.973868324Z" level=info msg="StartContainer for 
\"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\"" Jan 23 19:04:36.977578 containerd[1988]: time="2026-01-23T19:04:36.977524029Z" level=info msg="connecting to shim 9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0" address="unix:///run/containerd/s/db93eb4a44dc8302d5df13580f1f0c1892fb59e72f64ec89e3db5668c8b22bbd" protocol=ttrpc version=3 Jan 23 19:04:37.012364 systemd[1]: Started cri-containerd-9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0.scope - libcontainer container 9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0. Jan 23 19:04:37.113752 containerd[1988]: time="2026-01-23T19:04:37.113694639Z" level=info msg="StartContainer for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" returns successfully" Jan 23 19:04:37.340010 kubelet[3338]: I0123 19:04:37.339288 3338 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:04:37.409835 systemd[1]: Created slice kubepods-burstable-pod7b0352b5_b97a_4c76_bb9b_815a4fb8f064.slice - libcontainer container kubepods-burstable-pod7b0352b5_b97a_4c76_bb9b_815a4fb8f064.slice. 
Jan 23 19:04:37.430373 kubelet[3338]: I0123 19:04:37.430309 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0352b5-b97a-4c76-bb9b-815a4fb8f064-config-volume\") pod \"coredns-674b8bbfcf-xhhwh\" (UID: \"7b0352b5-b97a-4c76-bb9b-815a4fb8f064\") " pod="kube-system/coredns-674b8bbfcf-xhhwh" Jan 23 19:04:37.430683 kubelet[3338]: I0123 19:04:37.430552 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxcb\" (UniqueName: \"kubernetes.io/projected/2a06aefe-1a9a-4038-9384-b8eef1badadb-kube-api-access-flxcb\") pod \"coredns-674b8bbfcf-s8brs\" (UID: \"2a06aefe-1a9a-4038-9384-b8eef1badadb\") " pod="kube-system/coredns-674b8bbfcf-s8brs" Jan 23 19:04:37.430683 kubelet[3338]: I0123 19:04:37.430627 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a06aefe-1a9a-4038-9384-b8eef1badadb-config-volume\") pod \"coredns-674b8bbfcf-s8brs\" (UID: \"2a06aefe-1a9a-4038-9384-b8eef1badadb\") " pod="kube-system/coredns-674b8bbfcf-s8brs" Jan 23 19:04:37.430683 kubelet[3338]: I0123 19:04:37.430656 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5dt\" (UniqueName: \"kubernetes.io/projected/7b0352b5-b97a-4c76-bb9b-815a4fb8f064-kube-api-access-8t5dt\") pod \"coredns-674b8bbfcf-xhhwh\" (UID: \"7b0352b5-b97a-4c76-bb9b-815a4fb8f064\") " pod="kube-system/coredns-674b8bbfcf-xhhwh" Jan 23 19:04:37.438506 systemd[1]: Created slice kubepods-burstable-pod2a06aefe_1a9a_4038_9384_b8eef1badadb.slice - libcontainer container kubepods-burstable-pod2a06aefe_1a9a_4038_9384_b8eef1badadb.slice. 
Jan 23 19:04:37.732744 containerd[1988]: time="2026-01-23T19:04:37.732650690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhhwh,Uid:7b0352b5-b97a-4c76-bb9b-815a4fb8f064,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:37.747346 containerd[1988]: time="2026-01-23T19:04:37.747279856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8brs,Uid:2a06aefe-1a9a-4038-9384-b8eef1badadb,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:39.860800 (udev-worker)[4249]: Network interface NamePolicy= disabled on kernel command line. Jan 23 19:04:39.860933 systemd-networkd[1809]: cilium_host: Link UP Jan 23 19:04:39.862795 (udev-worker)[4285]: Network interface NamePolicy= disabled on kernel command line. Jan 23 19:04:39.863918 systemd-networkd[1809]: cilium_net: Link UP Jan 23 19:04:39.864084 systemd-networkd[1809]: cilium_net: Gained carrier Jan 23 19:04:39.864249 systemd-networkd[1809]: cilium_host: Gained carrier Jan 23 19:04:39.994538 (udev-worker)[4297]: Network interface NamePolicy= disabled on kernel command line. Jan 23 19:04:40.004076 systemd-networkd[1809]: cilium_vxlan: Link UP Jan 23 19:04:40.004446 systemd-networkd[1809]: cilium_vxlan: Gained carrier Jan 23 19:04:40.219383 systemd-networkd[1809]: cilium_host: Gained IPv6LL Jan 23 19:04:40.723259 systemd-networkd[1809]: cilium_net: Gained IPv6LL Jan 23 19:04:40.837114 kernel: NET: Registered PF_ALG protocol family Jan 23 19:04:41.171420 systemd-networkd[1809]: cilium_vxlan: Gained IPv6LL Jan 23 19:04:41.614489 (udev-worker)[4295]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 19:04:41.634684 systemd-networkd[1809]: lxc_health: Link UP Jan 23 19:04:41.640217 systemd-networkd[1809]: lxc_health: Gained carrier Jan 23 19:04:41.872372 kernel: eth0: renamed from tmpfff0d Jan 23 19:04:41.873938 systemd-networkd[1809]: lxc137c70cda177: Link UP Jan 23 19:04:41.883263 systemd-networkd[1809]: lxc137c70cda177: Gained carrier Jan 23 19:04:41.885183 systemd-networkd[1809]: lxc91bb4d887590: Link UP Jan 23 19:04:41.892255 kernel: eth0: renamed from tmp8f35b Jan 23 19:04:41.897180 systemd-networkd[1809]: lxc91bb4d887590: Gained carrier Jan 23 19:04:41.903644 (udev-worker)[4296]: Network interface NamePolicy= disabled on kernel command line. Jan 23 19:04:42.168217 kubelet[3338]: I0123 19:04:42.168151 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2jgjj" podStartSLOduration=9.998819132 podStartE2EDuration="19.168132104s" podCreationTimestamp="2026-01-23 19:04:23 +0000 UTC" firstStartedPulling="2026-01-23 19:04:24.3482576 +0000 UTC m=+5.865623397" lastFinishedPulling="2026-01-23 19:04:33.517570597 +0000 UTC m=+15.034936369" observedRunningTime="2026-01-23 19:04:37.965392352 +0000 UTC m=+19.482758149" watchObservedRunningTime="2026-01-23 19:04:42.168132104 +0000 UTC m=+23.685497900" Jan 23 19:04:42.771258 systemd-networkd[1809]: lxc_health: Gained IPv6LL Jan 23 19:04:43.348296 systemd-networkd[1809]: lxc137c70cda177: Gained IPv6LL Jan 23 19:04:43.859436 systemd-networkd[1809]: lxc91bb4d887590: Gained IPv6LL Jan 23 19:04:45.473256 kubelet[3338]: I0123 19:04:45.472612 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:04:46.169695 ntpd[2225]: Listen normally on 6 cilium_host 192.168.0.15:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 6 cilium_host 192.168.0.15:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 7 cilium_net [fe80::b450:a3ff:fede:ebf2%4]:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 
Jan 19:04:46 ntpd[2225]: Listen normally on 8 cilium_host [fe80::94c7:9ff:fedf:d67a%5]:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 9 cilium_vxlan [fe80::481d:21ff:fea1:367c%6]:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 10 lxc_health [fe80::98ab:faff:fe53:6fc5%8]:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 11 lxc137c70cda177 [fe80::6460:23ff:fec1:bdd0%10]:123 Jan 23 19:04:46.170446 ntpd[2225]: 23 Jan 19:04:46 ntpd[2225]: Listen normally on 12 lxc91bb4d887590 [fe80::7863:baff:fea0:310f%12]:123 Jan 23 19:04:46.169768 ntpd[2225]: Listen normally on 7 cilium_net [fe80::b450:a3ff:fede:ebf2%4]:123 Jan 23 19:04:46.169799 ntpd[2225]: Listen normally on 8 cilium_host [fe80::94c7:9ff:fedf:d67a%5]:123 Jan 23 19:04:46.169827 ntpd[2225]: Listen normally on 9 cilium_vxlan [fe80::481d:21ff:fea1:367c%6]:123 Jan 23 19:04:46.169854 ntpd[2225]: Listen normally on 10 lxc_health [fe80::98ab:faff:fe53:6fc5%8]:123 Jan 23 19:04:46.169881 ntpd[2225]: Listen normally on 11 lxc137c70cda177 [fe80::6460:23ff:fec1:bdd0%10]:123 Jan 23 19:04:46.169907 ntpd[2225]: Listen normally on 12 lxc91bb4d887590 [fe80::7863:baff:fea0:310f%12]:123 Jan 23 19:04:46.503702 containerd[1988]: time="2026-01-23T19:04:46.501157322Z" level=info msg="connecting to shim 8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0" address="unix:///run/containerd/s/66cbd93ffe229f07ed54e531113b8c7f139009b7136050d8710ea34dd19768b7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:46.523948 containerd[1988]: time="2026-01-23T19:04:46.523898840Z" level=info msg="connecting to shim fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5" address="unix:///run/containerd/s/b98db33320fbcb8c8a3ffde5802fd7f9849b55ec6f225431f3e0368dbc83e847" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:46.577649 systemd[1]: Started 
cri-containerd-8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0.scope - libcontainer container 8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0. Jan 23 19:04:46.597186 systemd[1]: Started cri-containerd-fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5.scope - libcontainer container fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5. Jan 23 19:04:46.686712 containerd[1988]: time="2026-01-23T19:04:46.686670346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhhwh,Uid:7b0352b5-b97a-4c76-bb9b-815a4fb8f064,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0\"" Jan 23 19:04:46.691405 containerd[1988]: time="2026-01-23T19:04:46.691345239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8brs,Uid:2a06aefe-1a9a-4038-9384-b8eef1badadb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5\"" Jan 23 19:04:46.699530 containerd[1988]: time="2026-01-23T19:04:46.699426129Z" level=info msg="CreateContainer within sandbox \"8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:04:46.703429 containerd[1988]: time="2026-01-23T19:04:46.703377074Z" level=info msg="CreateContainer within sandbox \"fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:04:46.729806 containerd[1988]: time="2026-01-23T19:04:46.729760523Z" level=info msg="Container fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:46.730635 containerd[1988]: time="2026-01-23T19:04:46.730605891Z" level=info msg="Container 46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79: CDI devices from CRI Config.CDIDevices: []" Jan 23 
19:04:46.744044 containerd[1988]: time="2026-01-23T19:04:46.744005219Z" level=info msg="CreateContainer within sandbox \"8f35b5380f58f0e4de4b9cea9a3993b04856e11ab83fd05f7e061c2df36da5f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79\"" Jan 23 19:04:46.744795 containerd[1988]: time="2026-01-23T19:04:46.744680366Z" level=info msg="StartContainer for \"46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79\"" Jan 23 19:04:46.745667 containerd[1988]: time="2026-01-23T19:04:46.745574834Z" level=info msg="connecting to shim 46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79" address="unix:///run/containerd/s/66cbd93ffe229f07ed54e531113b8c7f139009b7136050d8710ea34dd19768b7" protocol=ttrpc version=3 Jan 23 19:04:46.751778 containerd[1988]: time="2026-01-23T19:04:46.751741393Z" level=info msg="CreateContainer within sandbox \"fff0d0caf4acb4b5e5dd82839934b1e9f9ac1912317d9f314d78d2c47ad861e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942\"" Jan 23 19:04:46.753112 containerd[1988]: time="2026-01-23T19:04:46.753069691Z" level=info msg="StartContainer for \"fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942\"" Jan 23 19:04:46.756449 containerd[1988]: time="2026-01-23T19:04:46.755839164Z" level=info msg="connecting to shim fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942" address="unix:///run/containerd/s/b98db33320fbcb8c8a3ffde5802fd7f9849b55ec6f225431f3e0368dbc83e847" protocol=ttrpc version=3 Jan 23 19:04:46.770295 systemd[1]: Started cri-containerd-46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79.scope - libcontainer container 46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79. 
Jan 23 19:04:46.775330 systemd[1]: Started cri-containerd-fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942.scope - libcontainer container fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942. Jan 23 19:04:46.840075 containerd[1988]: time="2026-01-23T19:04:46.840040929Z" level=info msg="StartContainer for \"46edca3e90287114430136b17ff7aa6f8e82b45a3dedcce8c10d6c0c3d5ffb79\" returns successfully" Jan 23 19:04:46.845710 containerd[1988]: time="2026-01-23T19:04:46.845670421Z" level=info msg="StartContainer for \"fbbce5e2230d8862f16984d2be49964e828d68f961242f912400257880536942\" returns successfully" Jan 23 19:04:46.986287 kubelet[3338]: I0123 19:04:46.986240 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xhhwh" podStartSLOduration=23.986224067 podStartE2EDuration="23.986224067s" podCreationTimestamp="2026-01-23 19:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:46.985365249 +0000 UTC m=+28.502731043" watchObservedRunningTime="2026-01-23 19:04:46.986224067 +0000 UTC m=+28.503589861" Jan 23 19:04:47.022360 kubelet[3338]: I0123 19:04:47.021912 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s8brs" podStartSLOduration=24.021896648 podStartE2EDuration="24.021896648s" podCreationTimestamp="2026-01-23 19:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:04:47.020749512 +0000 UTC m=+28.538115307" watchObservedRunningTime="2026-01-23 19:04:47.021896648 +0000 UTC m=+28.539262443" Jan 23 19:04:47.440698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357038108.mount: Deactivated successfully. 
Jan 23 19:05:00.830815 systemd[1]: Started sshd@7-172.31.18.6:22-68.220.241.50:49596.service - OpenSSH per-connection server daemon (68.220.241.50:49596). Jan 23 19:05:01.446146 sshd[4837]: Accepted publickey for core from 68.220.241.50 port 49596 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:01.450778 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:01.465948 systemd-logind[1959]: New session 8 of user core. Jan 23 19:05:01.471456 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 19:05:03.419532 sshd[4849]: Connection closed by 68.220.241.50 port 49596 Jan 23 19:05:03.448829 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:03.500976 systemd[1]: sshd@7-172.31.18.6:22-68.220.241.50:49596.service: Deactivated successfully. Jan 23 19:05:03.506720 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 19:05:03.508619 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit. Jan 23 19:05:03.512071 systemd-logind[1959]: Removed session 8. Jan 23 19:05:08.519556 systemd[1]: Started sshd@8-172.31.18.6:22-68.220.241.50:40624.service - OpenSSH per-connection server daemon (68.220.241.50:40624). Jan 23 19:05:09.059512 sshd[4864]: Accepted publickey for core from 68.220.241.50 port 40624 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:09.063570 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:09.085447 systemd-logind[1959]: New session 9 of user core. Jan 23 19:05:09.096742 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:05:09.678758 sshd[4867]: Connection closed by 68.220.241.50 port 40624 Jan 23 19:05:09.682383 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:09.690023 systemd[1]: sshd@8-172.31.18.6:22-68.220.241.50:40624.service: Deactivated successfully. 
Jan 23 19:05:09.693443 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:05:09.696666 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:05:09.698007 systemd-logind[1959]: Removed session 9. Jan 23 19:05:14.772453 systemd[1]: Started sshd@9-172.31.18.6:22-68.220.241.50:41318.service - OpenSSH per-connection server daemon (68.220.241.50:41318). Jan 23 19:05:15.298929 sshd[4881]: Accepted publickey for core from 68.220.241.50 port 41318 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:15.301494 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:15.308392 systemd-logind[1959]: New session 10 of user core. Jan 23 19:05:15.316537 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:05:15.734396 sshd[4884]: Connection closed by 68.220.241.50 port 41318 Jan 23 19:05:15.735494 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:15.741388 systemd[1]: sshd@9-172.31.18.6:22-68.220.241.50:41318.service: Deactivated successfully. Jan 23 19:05:15.745306 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:05:15.747052 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:05:15.748796 systemd-logind[1959]: Removed session 10. Jan 23 19:05:20.834812 systemd[1]: Started sshd@10-172.31.18.6:22-68.220.241.50:41326.service - OpenSSH per-connection server daemon (68.220.241.50:41326). Jan 23 19:05:21.344520 sshd[4899]: Accepted publickey for core from 68.220.241.50 port 41326 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:21.346741 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:21.357788 systemd-logind[1959]: New session 11 of user core. Jan 23 19:05:21.366375 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 19:05:21.764619 sshd[4902]: Connection closed by 68.220.241.50 port 41326 Jan 23 19:05:21.766303 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:21.770599 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:05:21.771338 systemd[1]: sshd@10-172.31.18.6:22-68.220.241.50:41326.service: Deactivated successfully. Jan 23 19:05:21.773856 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:05:21.775699 systemd-logind[1959]: Removed session 11. Jan 23 19:05:21.850659 systemd[1]: Started sshd@11-172.31.18.6:22-68.220.241.50:41342.service - OpenSSH per-connection server daemon (68.220.241.50:41342). Jan 23 19:05:22.356184 sshd[4916]: Accepted publickey for core from 68.220.241.50 port 41342 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:22.358081 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:22.363780 systemd-logind[1959]: New session 12 of user core. Jan 23 19:05:22.376340 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:05:22.835527 sshd[4919]: Connection closed by 68.220.241.50 port 41342 Jan 23 19:05:22.837269 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:22.842163 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:05:22.843211 systemd[1]: sshd@11-172.31.18.6:22-68.220.241.50:41342.service: Deactivated successfully. Jan 23 19:05:22.846520 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:05:22.848971 systemd-logind[1959]: Removed session 12. Jan 23 19:05:22.923040 systemd[1]: Started sshd@12-172.31.18.6:22-68.220.241.50:51830.service - OpenSSH per-connection server daemon (68.220.241.50:51830). 
Jan 23 19:05:23.417554 sshd[4930]: Accepted publickey for core from 68.220.241.50 port 51830 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:23.419619 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:23.427562 systemd-logind[1959]: New session 13 of user core. Jan 23 19:05:23.434524 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 19:05:23.836534 sshd[4933]: Connection closed by 68.220.241.50 port 51830 Jan 23 19:05:23.838196 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:23.842388 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:05:23.843514 systemd[1]: sshd@12-172.31.18.6:22-68.220.241.50:51830.service: Deactivated successfully. Jan 23 19:05:23.846660 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:05:23.848844 systemd-logind[1959]: Removed session 13. Jan 23 19:05:28.932598 systemd[1]: Started sshd@13-172.31.18.6:22-68.220.241.50:51834.service - OpenSSH per-connection server daemon (68.220.241.50:51834). Jan 23 19:05:29.439802 sshd[4948]: Accepted publickey for core from 68.220.241.50 port 51834 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:29.441178 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:29.446916 systemd-logind[1959]: New session 14 of user core. Jan 23 19:05:29.454355 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:05:29.868559 sshd[4951]: Connection closed by 68.220.241.50 port 51834 Jan 23 19:05:29.870822 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:29.875773 systemd[1]: sshd@13-172.31.18.6:22-68.220.241.50:51834.service: Deactivated successfully. Jan 23 19:05:29.879707 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:05:29.880794 systemd-logind[1959]: Session 14 logged out. 
Waiting for processes to exit. Jan 23 19:05:29.882870 systemd-logind[1959]: Removed session 14. Jan 23 19:05:34.955304 systemd[1]: Started sshd@14-172.31.18.6:22-68.220.241.50:55316.service - OpenSSH per-connection server daemon (68.220.241.50:55316). Jan 23 19:05:35.460282 sshd[4964]: Accepted publickey for core from 68.220.241.50 port 55316 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:35.461919 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:35.468371 systemd-logind[1959]: New session 15 of user core. Jan 23 19:05:35.474366 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:05:35.879581 sshd[4967]: Connection closed by 68.220.241.50 port 55316 Jan 23 19:05:35.881264 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:35.885357 systemd[1]: sshd@14-172.31.18.6:22-68.220.241.50:55316.service: Deactivated successfully. Jan 23 19:05:35.887896 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:05:35.888756 systemd-logind[1959]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:05:35.890314 systemd-logind[1959]: Removed session 15. Jan 23 19:05:35.982457 systemd[1]: Started sshd@15-172.31.18.6:22-68.220.241.50:55326.service - OpenSSH per-connection server daemon (68.220.241.50:55326). Jan 23 19:05:36.525663 sshd[4979]: Accepted publickey for core from 68.220.241.50 port 55326 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:36.527071 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:36.532162 systemd-logind[1959]: New session 16 of user core. Jan 23 19:05:36.538305 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 19:05:37.418614 sshd[4982]: Connection closed by 68.220.241.50 port 55326 Jan 23 19:05:37.420981 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:37.440924 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:05:37.441327 systemd[1]: sshd@15-172.31.18.6:22-68.220.241.50:55326.service: Deactivated successfully. Jan 23 19:05:37.443918 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:05:37.446484 systemd-logind[1959]: Removed session 16. Jan 23 19:05:37.516366 systemd[1]: Started sshd@16-172.31.18.6:22-68.220.241.50:55332.service - OpenSSH per-connection server daemon (68.220.241.50:55332). Jan 23 19:05:38.078153 sshd[4992]: Accepted publickey for core from 68.220.241.50 port 55332 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:38.079363 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:38.085844 systemd-logind[1959]: New session 17 of user core. Jan 23 19:05:38.096340 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:05:39.024658 sshd[4995]: Connection closed by 68.220.241.50 port 55332 Jan 23 19:05:39.026323 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:39.030696 systemd[1]: sshd@16-172.31.18.6:22-68.220.241.50:55332.service: Deactivated successfully. Jan 23 19:05:39.033465 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:05:39.034399 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:05:39.036502 systemd-logind[1959]: Removed session 17. Jan 23 19:05:39.121587 systemd[1]: Started sshd@17-172.31.18.6:22-68.220.241.50:55338.service - OpenSSH per-connection server daemon (68.220.241.50:55338). 
Jan 23 19:05:39.660804 sshd[5013]: Accepted publickey for core from 68.220.241.50 port 55338 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:39.661343 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:39.668207 systemd-logind[1959]: New session 18 of user core. Jan 23 19:05:39.674275 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:05:40.261664 sshd[5016]: Connection closed by 68.220.241.50 port 55338 Jan 23 19:05:40.263293 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:40.268638 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:05:40.269612 systemd[1]: sshd@17-172.31.18.6:22-68.220.241.50:55338.service: Deactivated successfully. Jan 23 19:05:40.272461 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:05:40.275171 systemd-logind[1959]: Removed session 18. Jan 23 19:05:40.354281 systemd[1]: Started sshd@18-172.31.18.6:22-68.220.241.50:55350.service - OpenSSH per-connection server daemon (68.220.241.50:55350). Jan 23 19:05:40.880455 sshd[5025]: Accepted publickey for core from 68.220.241.50 port 55350 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:40.881825 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:40.887018 systemd-logind[1959]: New session 19 of user core. Jan 23 19:05:40.889284 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:05:41.331030 sshd[5028]: Connection closed by 68.220.241.50 port 55350 Jan 23 19:05:41.331724 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:41.336594 systemd[1]: sshd@18-172.31.18.6:22-68.220.241.50:55350.service: Deactivated successfully. Jan 23 19:05:41.338801 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:05:41.342633 systemd-logind[1959]: Session 19 logged out. 
Waiting for processes to exit. Jan 23 19:05:41.344169 systemd-logind[1959]: Removed session 19. Jan 23 19:05:46.423336 systemd[1]: Started sshd@19-172.31.18.6:22-68.220.241.50:50270.service - OpenSSH per-connection server daemon (68.220.241.50:50270). Jan 23 19:05:46.911352 sshd[5044]: Accepted publickey for core from 68.220.241.50 port 50270 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:46.912972 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:46.919165 systemd-logind[1959]: New session 20 of user core. Jan 23 19:05:46.925357 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:05:47.325056 sshd[5048]: Connection closed by 68.220.241.50 port 50270 Jan 23 19:05:47.326715 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:47.330837 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:05:47.331432 systemd[1]: sshd@19-172.31.18.6:22-68.220.241.50:50270.service: Deactivated successfully. Jan 23 19:05:47.334944 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:05:47.337530 systemd-logind[1959]: Removed session 20. Jan 23 19:05:52.414591 systemd[1]: Started sshd@20-172.31.18.6:22-68.220.241.50:50276.service - OpenSSH per-connection server daemon (68.220.241.50:50276). Jan 23 19:05:52.929214 sshd[5060]: Accepted publickey for core from 68.220.241.50 port 50276 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:52.931368 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:52.936756 systemd-logind[1959]: New session 21 of user core. Jan 23 19:05:52.941275 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 19:05:53.379728 sshd[5063]: Connection closed by 68.220.241.50 port 50276 Jan 23 19:05:53.380663 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:53.385759 systemd[1]: sshd@20-172.31.18.6:22-68.220.241.50:50276.service: Deactivated successfully. Jan 23 19:05:53.389735 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:05:53.390877 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:05:53.393470 systemd-logind[1959]: Removed session 21. Jan 23 19:05:53.467053 systemd[1]: Started sshd@21-172.31.18.6:22-68.220.241.50:39332.service - OpenSSH per-connection server daemon (68.220.241.50:39332). Jan 23 19:05:53.979063 sshd[5075]: Accepted publickey for core from 68.220.241.50 port 39332 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:53.980562 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:53.986637 systemd-logind[1959]: New session 22 of user core. Jan 23 19:05:53.991284 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:05:55.643628 containerd[1988]: time="2026-01-23T19:05:55.643360665Z" level=info msg="StopContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" with timeout 30 (s)" Jan 23 19:05:55.647359 containerd[1988]: time="2026-01-23T19:05:55.647176717Z" level=info msg="Stop container \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" with signal terminated" Jan 23 19:05:55.666387 systemd[1]: cri-containerd-aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc.scope: Deactivated successfully. 
Jan 23 19:05:55.669563 containerd[1988]: time="2026-01-23T19:05:55.669382097Z" level=info msg="received container exit event container_id:\"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" id:\"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" pid:4088 exited_at:{seconds:1769195155 nanos:668020851}" Jan 23 19:05:55.701248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc-rootfs.mount: Deactivated successfully. Jan 23 19:05:55.712562 containerd[1988]: time="2026-01-23T19:05:55.712033844Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:05:55.712562 containerd[1988]: time="2026-01-23T19:05:55.712332631Z" level=info msg="StopContainer for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" with timeout 2 (s)" Jan 23 19:05:55.713381 containerd[1988]: time="2026-01-23T19:05:55.712820344Z" level=info msg="Stop container \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" with signal terminated" Jan 23 19:05:55.722927 systemd-networkd[1809]: lxc_health: Link DOWN Jan 23 19:05:55.723922 systemd-networkd[1809]: lxc_health: Lost carrier Jan 23 19:05:55.726859 containerd[1988]: time="2026-01-23T19:05:55.726696189Z" level=info msg="StopContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" returns successfully" Jan 23 19:05:55.728724 containerd[1988]: time="2026-01-23T19:05:55.728693338Z" level=info msg="StopPodSandbox for \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\"" Jan 23 19:05:55.733304 containerd[1988]: time="2026-01-23T19:05:55.733167034Z" level=info msg="Container to stop \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.748030 systemd[1]: cri-containerd-65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113.scope: Deactivated successfully. Jan 23 19:05:55.749990 systemd[1]: cri-containerd-9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0.scope: Deactivated successfully. Jan 23 19:05:55.750971 systemd[1]: cri-containerd-9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0.scope: Consumed 8.094s CPU time, 232.7M memory peak, 105.8M read from disk, 13.3M written to disk. Jan 23 19:05:55.751711 containerd[1988]: time="2026-01-23T19:05:55.751542566Z" level=info msg="received sandbox exit event container_id:\"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" id:\"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" exit_status:137 exited_at:{seconds:1769195155 nanos:749386468}" monitor_name=podsandbox Jan 23 19:05:55.753799 containerd[1988]: time="2026-01-23T19:05:55.753245140Z" level=info msg="received container exit event container_id:\"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" id:\"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" pid:4158 exited_at:{seconds:1769195155 nanos:752784628}" Jan 23 19:05:55.795121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113-rootfs.mount: Deactivated successfully. Jan 23 19:05:55.808064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0-rootfs.mount: Deactivated successfully. 
Jan 23 19:05:55.816508 containerd[1988]: time="2026-01-23T19:05:55.816462448Z" level=info msg="shim disconnected" id=65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113 namespace=k8s.io Jan 23 19:05:55.816753 containerd[1988]: time="2026-01-23T19:05:55.816511853Z" level=warning msg="cleaning up after shim disconnected" id=65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113 namespace=k8s.io Jan 23 19:05:55.833129 containerd[1988]: time="2026-01-23T19:05:55.816522396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:05:55.833457 containerd[1988]: time="2026-01-23T19:05:55.822565889Z" level=info msg="StopContainer for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" returns successfully" Jan 23 19:05:55.834423 containerd[1988]: time="2026-01-23T19:05:55.834282176Z" level=info msg="StopPodSandbox for \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\"" Jan 23 19:05:55.834606 containerd[1988]: time="2026-01-23T19:05:55.834584568Z" level=info msg="Container to stop \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.834693 containerd[1988]: time="2026-01-23T19:05:55.834678405Z" level=info msg="Container to stop \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.834765 containerd[1988]: time="2026-01-23T19:05:55.834750333Z" level=info msg="Container to stop \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.835043 containerd[1988]: time="2026-01-23T19:05:55.834829795Z" level=info msg="Container to stop \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.835043 
containerd[1988]: time="2026-01-23T19:05:55.834847676Z" level=info msg="Container to stop \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:05:55.848268 systemd[1]: cri-containerd-e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e.scope: Deactivated successfully. Jan 23 19:05:55.850625 containerd[1988]: time="2026-01-23T19:05:55.850577115Z" level=info msg="received sandbox exit event container_id:\"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" id:\"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" exit_status:137 exited_at:{seconds:1769195155 nanos:849656126}" monitor_name=podsandbox Jan 23 19:05:55.872330 containerd[1988]: time="2026-01-23T19:05:55.872250452Z" level=info msg="received sandbox container exit event sandbox_id:\"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" exit_status:137 exited_at:{seconds:1769195155 nanos:749386468}" monitor_name=criService Jan 23 19:05:55.872508 containerd[1988]: time="2026-01-23T19:05:55.872480199Z" level=info msg="TearDown network for sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" successfully" Jan 23 19:05:55.872585 containerd[1988]: time="2026-01-23T19:05:55.872512609Z" level=info msg="StopPodSandbox for \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" returns successfully" Jan 23 19:05:55.873717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113-shm.mount: Deactivated successfully. Jan 23 19:05:55.893559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e-rootfs.mount: Deactivated successfully. 
Jan 23 19:05:55.908193 containerd[1988]: time="2026-01-23T19:05:55.907878822Z" level=info msg="shim disconnected" id=e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e namespace=k8s.io Jan 23 19:05:55.908193 containerd[1988]: time="2026-01-23T19:05:55.907915445Z" level=warning msg="cleaning up after shim disconnected" id=e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e namespace=k8s.io Jan 23 19:05:55.908193 containerd[1988]: time="2026-01-23T19:05:55.907925900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:05:55.939209 containerd[1988]: time="2026-01-23T19:05:55.939135772Z" level=info msg="TearDown network for sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" successfully" Jan 23 19:05:55.940277 containerd[1988]: time="2026-01-23T19:05:55.939357280Z" level=info msg="StopPodSandbox for \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" returns successfully" Jan 23 19:05:55.940743 containerd[1988]: time="2026-01-23T19:05:55.940707865Z" level=info msg="received sandbox container exit event sandbox_id:\"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" exit_status:137 exited_at:{seconds:1769195155 nanos:849656126}" monitor_name=criService Jan 23 19:05:56.001114 kubelet[3338]: I0123 19:05:56.000818 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/491d016d-1bd1-4ee6-a195-336105b15bbf-cilium-config-path\") pod \"491d016d-1bd1-4ee6-a195-336105b15bbf\" (UID: \"491d016d-1bd1-4ee6-a195-336105b15bbf\") " Jan 23 19:05:56.001114 kubelet[3338]: I0123 19:05:56.000887 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntg4\" (UniqueName: \"kubernetes.io/projected/491d016d-1bd1-4ee6-a195-336105b15bbf-kube-api-access-bntg4\") pod \"491d016d-1bd1-4ee6-a195-336105b15bbf\" (UID: \"491d016d-1bd1-4ee6-a195-336105b15bbf\") " Jan 23 
19:05:56.007208 kubelet[3338]: I0123 19:05:56.006459 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/491d016d-1bd1-4ee6-a195-336105b15bbf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "491d016d-1bd1-4ee6-a195-336105b15bbf" (UID: "491d016d-1bd1-4ee6-a195-336105b15bbf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:05:56.011114 kubelet[3338]: I0123 19:05:56.011037 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491d016d-1bd1-4ee6-a195-336105b15bbf-kube-api-access-bntg4" (OuterVolumeSpecName: "kube-api-access-bntg4") pod "491d016d-1bd1-4ee6-a195-336105b15bbf" (UID: "491d016d-1bd1-4ee6-a195-336105b15bbf"). InnerVolumeSpecName "kube-api-access-bntg4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:05:56.102071 kubelet[3338]: I0123 19:05:56.102017 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-xtables-lock\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102071 kubelet[3338]: I0123 19:05:56.102068 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-run\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 19:05:56.102119 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-cgroup\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 
19:05:56.102139 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cni-path\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 19:05:56.102167 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8fb4\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-kube-api-access-p8fb4\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 19:05:56.102191 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-kernel\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 19:05:56.102210 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-net\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102386 kubelet[3338]: I0123 19:05:56.102232 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e522511-4190-44ff-9b14-179f7e0f284e-clustermesh-secrets\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102251 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-lib-modules\") pod 
\"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102272 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-bpf-maps\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102292 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-hostproc\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102321 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-config-path\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102342 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-etc-cni-netd\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102633 kubelet[3338]: I0123 19:05:56.102369 3338 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-hubble-tls\") pod \"1e522511-4190-44ff-9b14-179f7e0f284e\" (UID: \"1e522511-4190-44ff-9b14-179f7e0f284e\") " Jan 23 19:05:56.102978 kubelet[3338]: I0123 19:05:56.102421 3338 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bntg4\" (UniqueName: 
\"kubernetes.io/projected/491d016d-1bd1-4ee6-a195-336105b15bbf-kube-api-access-bntg4\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.102978 kubelet[3338]: I0123 19:05:56.102437 3338 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/491d016d-1bd1-4ee6-a195-336105b15bbf-cilium-config-path\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.104069 kubelet[3338]: I0123 19:05:56.104020 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.104237 kubelet[3338]: I0123 19:05:56.104102 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.104237 kubelet[3338]: I0123 19:05:56.104126 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.104237 kubelet[3338]: I0123 19:05:56.104144 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.104237 kubelet[3338]: I0123 19:05:56.104162 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.104841 kubelet[3338]: I0123 19:05:56.104784 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.105234 kubelet[3338]: I0123 19:05:56.105136 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.105234 kubelet[3338]: I0123 19:05:56.105172 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.105234 kubelet[3338]: I0123 19:05:56.105193 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.106968 kubelet[3338]: I0123 19:05:56.106928 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:05:56.110901 kubelet[3338]: I0123 19:05:56.110840 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:05:56.111664 kubelet[3338]: I0123 19:05:56.111616 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:05:56.112043 kubelet[3338]: I0123 19:05:56.112009 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-kube-api-access-p8fb4" (OuterVolumeSpecName: "kube-api-access-p8fb4") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "kube-api-access-p8fb4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:05:56.112460 kubelet[3338]: I0123 19:05:56.112433 3338 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e522511-4190-44ff-9b14-179f7e0f284e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e522511-4190-44ff-9b14-179f7e0f284e" (UID: "1e522511-4190-44ff-9b14-179f7e0f284e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203243 3338 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-xtables-lock\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203279 3338 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-run\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203290 3338 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-cgroup\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203298 3338 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-cni-path\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203307 3338 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8fb4\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-kube-api-access-p8fb4\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203317 3338 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-kernel\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203369 kubelet[3338]: I0123 19:05:56.203324 3338 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-host-proc-sys-net\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 
19:05:56.203369 kubelet[3338]: I0123 19:05:56.203333 3338 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e522511-4190-44ff-9b14-179f7e0f284e-clustermesh-secrets\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203341 3338 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-lib-modules\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203349 3338 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-bpf-maps\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203358 3338 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-hostproc\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203366 3338 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e522511-4190-44ff-9b14-179f7e0f284e-cilium-config-path\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203373 3338 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e522511-4190-44ff-9b14-179f7e0f284e-etc-cni-netd\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.203788 kubelet[3338]: I0123 19:05:56.203381 3338 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e522511-4190-44ff-9b14-179f7e0f284e-hubble-tls\") on node \"ip-172-31-18-6\" DevicePath \"\"" Jan 23 19:05:56.274992 kubelet[3338]: I0123 19:05:56.274874 3338 scope.go:117] "RemoveContainer" 
containerID="aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc" Jan 23 19:05:56.280327 containerd[1988]: time="2026-01-23T19:05:56.280216261Z" level=info msg="RemoveContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\"" Jan 23 19:05:56.285666 systemd[1]: Removed slice kubepods-besteffort-pod491d016d_1bd1_4ee6_a195_336105b15bbf.slice - libcontainer container kubepods-besteffort-pod491d016d_1bd1_4ee6_a195_336105b15bbf.slice. Jan 23 19:05:56.293839 containerd[1988]: time="2026-01-23T19:05:56.293549054Z" level=info msg="RemoveContainer for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" returns successfully" Jan 23 19:05:56.296811 kubelet[3338]: I0123 19:05:56.296679 3338 scope.go:117] "RemoveContainer" containerID="aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc" Jan 23 19:05:56.297452 containerd[1988]: time="2026-01-23T19:05:56.297405690Z" level=error msg="ContainerStatus for \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\": not found" Jan 23 19:05:56.297775 kubelet[3338]: E0123 19:05:56.297713 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\": not found" containerID="aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc" Jan 23 19:05:56.297963 kubelet[3338]: I0123 19:05:56.297751 3338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc"} err="failed to get container status \"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"aa2d95ef847ae9c5c9b90fa4fa8cdbe9dbf632cc3db8eb22203cfc864e8dffdc\": not found" Jan 23 19:05:56.297963 kubelet[3338]: I0123 19:05:56.297908 3338 scope.go:117] "RemoveContainer" containerID="9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0" Jan 23 19:05:56.298737 systemd[1]: Removed slice kubepods-burstable-pod1e522511_4190_44ff_9b14_179f7e0f284e.slice - libcontainer container kubepods-burstable-pod1e522511_4190_44ff_9b14_179f7e0f284e.slice. Jan 23 19:05:56.298881 systemd[1]: kubepods-burstable-pod1e522511_4190_44ff_9b14_179f7e0f284e.slice: Consumed 8.222s CPU time, 233.2M memory peak, 107.1M read from disk, 16.6M written to disk. Jan 23 19:05:56.327631 containerd[1988]: time="2026-01-23T19:05:56.327570349Z" level=info msg="RemoveContainer for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\"" Jan 23 19:05:56.336995 containerd[1988]: time="2026-01-23T19:05:56.336940619Z" level=info msg="RemoveContainer for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" returns successfully" Jan 23 19:05:56.337257 kubelet[3338]: I0123 19:05:56.337216 3338 scope.go:117] "RemoveContainer" containerID="b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997" Jan 23 19:05:56.340160 containerd[1988]: time="2026-01-23T19:05:56.339968680Z" level=info msg="RemoveContainer for \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\"" Jan 23 19:05:56.350271 containerd[1988]: time="2026-01-23T19:05:56.350202226Z" level=info msg="RemoveContainer for \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" returns successfully" Jan 23 19:05:56.354998 kubelet[3338]: I0123 19:05:56.354321 3338 scope.go:117] "RemoveContainer" containerID="cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab" Jan 23 19:05:56.358078 containerd[1988]: time="2026-01-23T19:05:56.358043634Z" level=info msg="RemoveContainer for \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\"" Jan 23 19:05:56.366654 
containerd[1988]: time="2026-01-23T19:05:56.366621077Z" level=info msg="RemoveContainer for \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" returns successfully" Jan 23 19:05:56.367020 kubelet[3338]: I0123 19:05:56.367000 3338 scope.go:117] "RemoveContainer" containerID="302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45" Jan 23 19:05:56.368934 containerd[1988]: time="2026-01-23T19:05:56.368856806Z" level=info msg="RemoveContainer for \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\"" Jan 23 19:05:56.377003 containerd[1988]: time="2026-01-23T19:05:56.376933590Z" level=info msg="RemoveContainer for \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" returns successfully" Jan 23 19:05:56.377781 kubelet[3338]: I0123 19:05:56.377731 3338 scope.go:117] "RemoveContainer" containerID="eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b" Jan 23 19:05:56.380154 containerd[1988]: time="2026-01-23T19:05:56.380121797Z" level=info msg="RemoveContainer for \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\"" Jan 23 19:05:56.385893 containerd[1988]: time="2026-01-23T19:05:56.385796684Z" level=info msg="RemoveContainer for \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" returns successfully" Jan 23 19:05:56.386199 kubelet[3338]: I0123 19:05:56.386172 3338 scope.go:117] "RemoveContainer" containerID="9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0" Jan 23 19:05:56.386701 containerd[1988]: time="2026-01-23T19:05:56.386662227Z" level=error msg="ContainerStatus for \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\": not found" Jan 23 19:05:56.386906 kubelet[3338]: E0123 19:05:56.386880 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\": not found" containerID="9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0" Jan 23 19:05:56.387002 kubelet[3338]: I0123 19:05:56.386984 3338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0"} err="failed to get container status \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cbbfcc00afa76023eb60ad67fdd43a52ffd8281b15f9b9044aa0c61db79dba0\": not found" Jan 23 19:05:56.387122 kubelet[3338]: I0123 19:05:56.387057 3338 scope.go:117] "RemoveContainer" containerID="b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997" Jan 23 19:05:56.387380 containerd[1988]: time="2026-01-23T19:05:56.387354145Z" level=error msg="ContainerStatus for \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\": not found" Jan 23 19:05:56.388259 kubelet[3338]: E0123 19:05:56.387862 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\": not found" containerID="b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997" Jan 23 19:05:56.388406 kubelet[3338]: I0123 19:05:56.387923 3338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997"} err="failed to get container status \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"b594f072d30200bc1314cd4f191ac6bf7a0f02c1fb913934d4d0bac1957ab997\": not found" Jan 23 19:05:56.388406 kubelet[3338]: I0123 19:05:56.388367 3338 scope.go:117] "RemoveContainer" containerID="cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab" Jan 23 19:05:56.388793 containerd[1988]: time="2026-01-23T19:05:56.388748179Z" level=error msg="ContainerStatus for \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\": not found" Jan 23 19:05:56.388922 kubelet[3338]: E0123 19:05:56.388893 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\": not found" containerID="cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab" Jan 23 19:05:56.389006 kubelet[3338]: I0123 19:05:56.388987 3338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab"} err="failed to get container status \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb131dcba2767e623a6328089115ad055582ab3f9cd1773b45875dba4de1f5ab\": not found" Jan 23 19:05:56.389104 kubelet[3338]: I0123 19:05:56.389046 3338 scope.go:117] "RemoveContainer" containerID="302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45" Jan 23 19:05:56.389283 containerd[1988]: time="2026-01-23T19:05:56.389249484Z" level=error msg="ContainerStatus for \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\": not found" Jan 23 19:05:56.389844 kubelet[3338]: E0123 19:05:56.389697 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\": not found" containerID="302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45" Jan 23 19:05:56.389844 kubelet[3338]: I0123 19:05:56.389716 3338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45"} err="failed to get container status \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\": rpc error: code = NotFound desc = an error occurred when try to find container \"302fe70120923a960cd30e1284a49985fa0bc281fb2445fc7b4d827bb326ce45\": not found" Jan 23 19:05:56.389844 kubelet[3338]: I0123 19:05:56.389731 3338 scope.go:117] "RemoveContainer" containerID="eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b" Jan 23 19:05:56.390163 containerd[1988]: time="2026-01-23T19:05:56.390122753Z" level=error msg="ContainerStatus for \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\": not found" Jan 23 19:05:56.390524 kubelet[3338]: E0123 19:05:56.390406 3338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\": not found" containerID="eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b" Jan 23 19:05:56.390743 kubelet[3338]: I0123 19:05:56.390702 3338 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b"} err="failed to get container status \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"eee9a91ee90bf8e30ec521bd824490f8df3f1a8014b02c29f51d5dacb459fc7b\": not found" Jan 23 19:05:56.697714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e-shm.mount: Deactivated successfully. Jan 23 19:05:56.697830 systemd[1]: var-lib-kubelet-pods-491d016d\x2d1bd1\x2d4ee6\x2da195\x2d336105b15bbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbntg4.mount: Deactivated successfully. Jan 23 19:05:56.697898 systemd[1]: var-lib-kubelet-pods-1e522511\x2d4190\x2d44ff\x2d9b14\x2d179f7e0f284e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp8fb4.mount: Deactivated successfully. Jan 23 19:05:56.697959 systemd[1]: var-lib-kubelet-pods-1e522511\x2d4190\x2d44ff\x2d9b14\x2d179f7e0f284e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 19:05:56.698013 systemd[1]: var-lib-kubelet-pods-1e522511\x2d4190\x2d44ff\x2d9b14\x2d179f7e0f284e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 19:05:56.744490 kubelet[3338]: I0123 19:05:56.744441 3338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e522511-4190-44ff-9b14-179f7e0f284e" path="/var/lib/kubelet/pods/1e522511-4190-44ff-9b14-179f7e0f284e/volumes" Jan 23 19:05:56.745433 kubelet[3338]: I0123 19:05:56.745264 3338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491d016d-1bd1-4ee6-a195-336105b15bbf" path="/var/lib/kubelet/pods/491d016d-1bd1-4ee6-a195-336105b15bbf/volumes" Jan 23 19:05:57.656424 sshd[5078]: Connection closed by 68.220.241.50 port 39332 Jan 23 19:05:57.656982 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:57.660530 systemd[1]: sshd@21-172.31.18.6:22-68.220.241.50:39332.service: Deactivated successfully. Jan 23 19:05:57.662793 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:05:57.665656 systemd-logind[1959]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:05:57.666752 systemd-logind[1959]: Removed session 22. Jan 23 19:05:57.745347 systemd[1]: Started sshd@22-172.31.18.6:22-68.220.241.50:39342.service - OpenSSH per-connection server daemon (68.220.241.50:39342). Jan 23 19:05:58.169538 ntpd[2225]: Deleting 10 lxc_health, [fe80::98ab:faff:fe53:6fc5%8]:123, stats: received=0, sent=0, dropped=0, active_time=72 secs Jan 23 19:05:58.169932 ntpd[2225]: 23 Jan 19:05:58 ntpd[2225]: Deleting 10 lxc_health, [fe80::98ab:faff:fe53:6fc5%8]:123, stats: received=0, sent=0, dropped=0, active_time=72 secs Jan 23 19:05:58.235430 sshd[5226]: Accepted publickey for core from 68.220.241.50 port 39342 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:58.236921 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:58.242371 systemd-logind[1959]: New session 23 of user core. Jan 23 19:05:58.256353 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 19:05:58.902315 kubelet[3338]: E0123 19:05:58.902201 3338 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:05:59.102294 systemd[1]: Created slice kubepods-burstable-pod3a067fab_f6aa_4916_af43_d6c694989efa.slice - libcontainer container kubepods-burstable-pod3a067fab_f6aa_4916_af43_d6c694989efa.slice. Jan 23 19:05:59.124041 sshd[5229]: Connection closed by 68.220.241.50 port 39342 Jan 23 19:05:59.125277 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:59.129810 systemd[1]: sshd@22-172.31.18.6:22-68.220.241.50:39342.service: Deactivated successfully. Jan 23 19:05:59.132273 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:05:59.133335 systemd-logind[1959]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:05:59.134758 systemd-logind[1959]: Removed session 23. Jan 23 19:05:59.213491 systemd[1]: Started sshd@23-172.31.18.6:22-68.220.241.50:39346.service - OpenSSH per-connection server daemon (68.220.241.50:39346). 
Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224175 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-xtables-lock\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224218 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-bpf-maps\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224238 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-lib-modules\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224259 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a067fab-f6aa-4916-af43-d6c694989efa-cilium-config-path\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224288 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-host-proc-sys-kernel\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224518 kubelet[3338]: I0123 19:05:59.224303 3338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-cilium-run\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224318 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-hostproc\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224331 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-cni-path\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224348 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a067fab-f6aa-4916-af43-d6c694989efa-clustermesh-secrets\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224362 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a067fab-f6aa-4916-af43-d6c694989efa-hubble-tls\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224377 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgf6c\" (UniqueName: 
\"kubernetes.io/projected/3a067fab-f6aa-4916-af43-d6c694989efa-kube-api-access-fgf6c\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.224827 kubelet[3338]: I0123 19:05:59.224396 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-cilium-cgroup\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.225064 kubelet[3338]: I0123 19:05:59.224412 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3a067fab-f6aa-4916-af43-d6c694989efa-cilium-ipsec-secrets\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.225064 kubelet[3338]: I0123 19:05:59.224426 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-etc-cni-netd\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.225064 kubelet[3338]: I0123 19:05:59.224439 3338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a067fab-f6aa-4916-af43-d6c694989efa-host-proc-sys-net\") pod \"cilium-68tgb\" (UID: \"3a067fab-f6aa-4916-af43-d6c694989efa\") " pod="kube-system/cilium-68tgb" Jan 23 19:05:59.408759 containerd[1988]: time="2026-01-23T19:05:59.408717157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68tgb,Uid:3a067fab-f6aa-4916-af43-d6c694989efa,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:59.436213 containerd[1988]: 
time="2026-01-23T19:05:59.436136478Z" level=info msg="connecting to shim 6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f" address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:59.465341 systemd[1]: Started cri-containerd-6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f.scope - libcontainer container 6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f. Jan 23 19:05:59.504695 containerd[1988]: time="2026-01-23T19:05:59.504647226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68tgb,Uid:3a067fab-f6aa-4916-af43-d6c694989efa,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\"" Jan 23 19:05:59.513194 containerd[1988]: time="2026-01-23T19:05:59.513153414Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:05:59.523574 containerd[1988]: time="2026-01-23T19:05:59.523398598Z" level=info msg="Container 9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:59.533769 containerd[1988]: time="2026-01-23T19:05:59.533658425Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e\"" Jan 23 19:05:59.534658 containerd[1988]: time="2026-01-23T19:05:59.534475812Z" level=info msg="StartContainer for \"9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e\"" Jan 23 19:05:59.536908 containerd[1988]: time="2026-01-23T19:05:59.536039772Z" level=info msg="connecting to shim 9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e" 
address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" protocol=ttrpc version=3 Jan 23 19:05:59.559311 systemd[1]: Started cri-containerd-9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e.scope - libcontainer container 9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e. Jan 23 19:05:59.598594 containerd[1988]: time="2026-01-23T19:05:59.598561457Z" level=info msg="StartContainer for \"9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e\" returns successfully" Jan 23 19:05:59.617473 systemd[1]: cri-containerd-9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e.scope: Deactivated successfully. Jan 23 19:05:59.617880 systemd[1]: cri-containerd-9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e.scope: Consumed 26ms CPU time, 9.6M memory peak, 3.2M read from disk. Jan 23 19:05:59.621566 containerd[1988]: time="2026-01-23T19:05:59.621530947Z" level=info msg="received container exit event container_id:\"9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e\" id:\"9fc35a975340e942702a17a6f130a6b3b8e8b01419fb740c2c7c03211666659e\" pid:5304 exited_at:{seconds:1769195159 nanos:620855408}" Jan 23 19:05:59.703797 sshd[5239]: Accepted publickey for core from 68.220.241.50 port 39346 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:05:59.705124 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:59.710634 systemd-logind[1959]: New session 24 of user core. Jan 23 19:05:59.727298 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:06:00.050216 sshd[5338]: Connection closed by 68.220.241.50 port 39346 Jan 23 19:06:00.052487 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:00.072923 systemd[1]: sshd@23-172.31.18.6:22-68.220.241.50:39346.service: Deactivated successfully. 
Jan 23 19:06:00.085788 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:06:00.087861 systemd-logind[1959]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:06:00.095151 systemd-logind[1959]: Removed session 24. Jan 23 19:06:00.141700 systemd[1]: Started sshd@24-172.31.18.6:22-68.220.241.50:39354.service - OpenSSH per-connection server daemon (68.220.241.50:39354). Jan 23 19:06:00.310929 containerd[1988]: time="2026-01-23T19:06:00.310523170Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:06:00.322668 containerd[1988]: time="2026-01-23T19:06:00.322005456Z" level=info msg="Container 5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:00.334675 containerd[1988]: time="2026-01-23T19:06:00.334629295Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642\"" Jan 23 19:06:00.336040 containerd[1988]: time="2026-01-23T19:06:00.335995705Z" level=info msg="StartContainer for \"5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642\"" Jan 23 19:06:00.336895 containerd[1988]: time="2026-01-23T19:06:00.336866975Z" level=info msg="connecting to shim 5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642" address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" protocol=ttrpc version=3 Jan 23 19:06:00.371339 systemd[1]: Started cri-containerd-5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642.scope - libcontainer container 5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642. 
Jan 23 19:06:00.419116 containerd[1988]: time="2026-01-23T19:06:00.419061655Z" level=info msg="StartContainer for \"5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642\" returns successfully" Jan 23 19:06:00.430251 systemd[1]: cri-containerd-5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642.scope: Deactivated successfully. Jan 23 19:06:00.430611 systemd[1]: cri-containerd-5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642.scope: Consumed 26ms CPU time, 7.4M memory peak, 2.1M read from disk. Jan 23 19:06:00.431787 containerd[1988]: time="2026-01-23T19:06:00.431754140Z" level=info msg="received container exit event container_id:\"5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642\" id:\"5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642\" pid:5362 exited_at:{seconds:1769195160 nanos:431032708}" Jan 23 19:06:00.461846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5863def3a4a3e70a1dafc109dede944048bb137804911840fd0ac89eaad4c642-rootfs.mount: Deactivated successfully. Jan 23 19:06:00.638889 sshd[5345]: Accepted publickey for core from 68.220.241.50 port 39354 ssh2: RSA SHA256:WvM+01aJokQXAGkSVostoE7hyFzaYBJme8gxK1ahyMI Jan 23 19:06:00.640481 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:00.645747 systemd-logind[1959]: New session 25 of user core. Jan 23 19:06:00.654305 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 19:06:00.776328 kubelet[3338]: I0123 19:06:00.776112 3338 setters.go:618] "Node became not ready" node="ip-172-31-18-6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T19:06:00Z","lastTransitionTime":"2026-01-23T19:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 19:06:01.364125 containerd[1988]: time="2026-01-23T19:06:01.356481824Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:06:01.431761 containerd[1988]: time="2026-01-23T19:06:01.431712649Z" level=info msg="Container 8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:01.474233 containerd[1988]: time="2026-01-23T19:06:01.474176990Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10\"" Jan 23 19:06:01.479596 containerd[1988]: time="2026-01-23T19:06:01.479554884Z" level=info msg="StartContainer for \"8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10\"" Jan 23 19:06:01.485216 containerd[1988]: time="2026-01-23T19:06:01.484893342Z" level=info msg="connecting to shim 8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10" address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" protocol=ttrpc version=3 Jan 23 19:06:01.567728 systemd[1]: Started cri-containerd-8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10.scope - libcontainer container 8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10. 
Jan 23 19:06:02.213525 systemd[1]: cri-containerd-8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10.scope: Deactivated successfully. Jan 23 19:06:02.221148 systemd[1]: cri-containerd-8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10.scope: Consumed 64ms CPU time, 4.3M memory peak, 1.3M read from disk. Jan 23 19:06:02.223113 containerd[1988]: time="2026-01-23T19:06:02.222975592Z" level=info msg="received container exit event container_id:\"8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10\" id:\"8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10\" pid:5411 exited_at:{seconds:1769195162 nanos:222695462}" Jan 23 19:06:02.236171 containerd[1988]: time="2026-01-23T19:06:02.236053057Z" level=info msg="StartContainer for \"8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10\" returns successfully" Jan 23 19:06:02.545255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c2ee93fa197d5cd966d53fb480756991c69be14668abcf12abe3bffc1700c10-rootfs.mount: Deactivated successfully. 
Jan 23 19:06:03.441310 containerd[1988]: time="2026-01-23T19:06:03.441259623Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:06:03.456325 containerd[1988]: time="2026-01-23T19:06:03.454356051Z" level=info msg="Container 9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:03.476029 containerd[1988]: time="2026-01-23T19:06:03.475977325Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261\"" Jan 23 19:06:03.478383 containerd[1988]: time="2026-01-23T19:06:03.478128379Z" level=info msg="StartContainer for \"9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261\"" Jan 23 19:06:03.479765 containerd[1988]: time="2026-01-23T19:06:03.479719253Z" level=info msg="connecting to shim 9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261" address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" protocol=ttrpc version=3 Jan 23 19:06:03.519335 systemd[1]: Started cri-containerd-9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261.scope - libcontainer container 9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261. Jan 23 19:06:03.559015 systemd[1]: cri-containerd-9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261.scope: Deactivated successfully. 
Jan 23 19:06:03.563082 containerd[1988]: time="2026-01-23T19:06:03.563016868Z" level=info msg="received container exit event container_id:\"9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261\" id:\"9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261\" pid:5453 exited_at:{seconds:1769195163 nanos:561589942}" Jan 23 19:06:03.573785 containerd[1988]: time="2026-01-23T19:06:03.573670079Z" level=info msg="StartContainer for \"9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261\" returns successfully" Jan 23 19:06:03.593433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9296a6a9f310cbca001efea18838932b18e8183781e005f61c5c1ca1ee4d5261-rootfs.mount: Deactivated successfully. Jan 23 19:06:03.904513 kubelet[3338]: E0123 19:06:03.904449 3338 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:06:04.447721 containerd[1988]: time="2026-01-23T19:06:04.447635602Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:06:04.476119 containerd[1988]: time="2026-01-23T19:06:04.474284905Z" level=info msg="Container b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:04.481096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962218417.mount: Deactivated successfully. 
Jan 23 19:06:04.506607 containerd[1988]: time="2026-01-23T19:06:04.506551469Z" level=info msg="CreateContainer within sandbox \"6ca927778da9a25a510600e280dbfc93a8dd0a4e5edb06e5ce78d6d32d65c01f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb\"" Jan 23 19:06:04.509419 containerd[1988]: time="2026-01-23T19:06:04.509384821Z" level=info msg="StartContainer for \"b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb\"" Jan 23 19:06:04.513365 containerd[1988]: time="2026-01-23T19:06:04.513245929Z" level=info msg="connecting to shim b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb" address="unix:///run/containerd/s/9a072953ccebba78ecf4ae38b0c0357c4acbeeeea72208beba33974d20990a97" protocol=ttrpc version=3 Jan 23 19:06:04.557230 systemd[1]: Started cri-containerd-b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb.scope - libcontainer container b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb. 
Jan 23 19:06:04.661611 containerd[1988]: time="2026-01-23T19:06:04.661578344Z" level=info msg="StartContainer for \"b977490604bb72a96b507b7488189c09bcaa9668b122aa067bd9d32342ef97bb\" returns successfully" Jan 23 19:06:05.484211 kubelet[3338]: I0123 19:06:05.481990 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-68tgb" podStartSLOduration=6.481970249 podStartE2EDuration="6.481970249s" podCreationTimestamp="2026-01-23 19:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:06:05.481653474 +0000 UTC m=+106.999019267" watchObservedRunningTime="2026-01-23 19:06:05.481970249 +0000 UTC m=+106.999336045" Jan 23 19:06:05.495140 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 19:06:08.639572 systemd-networkd[1809]: lxc_health: Link UP Jan 23 19:06:08.639794 systemd-networkd[1809]: lxc_health: Gained carrier Jan 23 19:06:08.641257 (udev-worker)[6033]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 19:06:10.515420 systemd-networkd[1809]: lxc_health: Gained IPv6LL Jan 23 19:06:13.545646 kubelet[3338]: E0123 19:06:13.545597 3338 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53910->127.0.0.1:46227: write tcp 127.0.0.1:53910->127.0.0.1:46227: write: broken pipe Jan 23 19:06:15.169574 ntpd[2225]: Listen normally on 13 lxc_health [fe80::645d:7bff:fe54:9784%14]:123 Jan 23 19:06:15.169946 ntpd[2225]: 23 Jan 19:06:15 ntpd[2225]: Listen normally on 13 lxc_health [fe80::645d:7bff:fe54:9784%14]:123 Jan 23 19:06:18.702040 containerd[1988]: time="2026-01-23T19:06:18.701934298Z" level=info msg="StopPodSandbox for \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\"" Jan 23 19:06:18.702472 containerd[1988]: time="2026-01-23T19:06:18.702073153Z" level=info msg="TearDown network for sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" successfully" Jan 23 19:06:18.702472 containerd[1988]: time="2026-01-23T19:06:18.702084558Z" level=info msg="StopPodSandbox for \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" returns successfully" Jan 23 19:06:18.702891 containerd[1988]: time="2026-01-23T19:06:18.702864936Z" level=info msg="RemovePodSandbox for \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\"" Jan 23 19:06:18.704987 containerd[1988]: time="2026-01-23T19:06:18.704944821Z" level=info msg="Forcibly stopping sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\"" Jan 23 19:06:18.705102 containerd[1988]: time="2026-01-23T19:06:18.705067983Z" level=info msg="TearDown network for sandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" successfully" Jan 23 19:06:18.706138 containerd[1988]: time="2026-01-23T19:06:18.706108054Z" level=info msg="Ensure that sandbox 65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113 in task-service has been cleanup successfully" Jan 23 19:06:18.713022 containerd[1988]: 
time="2026-01-23T19:06:18.712987517Z" level=info msg="RemovePodSandbox \"65c39da0254b84baed9178297bf7a61f9c6aef8ef070293fdf95f82cfbb5f113\" returns successfully"
Jan 23 19:06:18.713614 containerd[1988]: time="2026-01-23T19:06:18.713578168Z" level=info msg="StopPodSandbox for \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\""
Jan 23 19:06:18.713746 containerd[1988]: time="2026-01-23T19:06:18.713718253Z" level=info msg="TearDown network for sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" successfully"
Jan 23 19:06:18.713746 containerd[1988]: time="2026-01-23T19:06:18.713740366Z" level=info msg="StopPodSandbox for \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" returns successfully"
Jan 23 19:06:18.714143 containerd[1988]: time="2026-01-23T19:06:18.714117314Z" level=info msg="RemovePodSandbox for \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\""
Jan 23 19:06:18.714215 containerd[1988]: time="2026-01-23T19:06:18.714149155Z" level=info msg="Forcibly stopping sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\""
Jan 23 19:06:18.714255 containerd[1988]: time="2026-01-23T19:06:18.714238828Z" level=info msg="TearDown network for sandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" successfully"
Jan 23 19:06:18.715443 containerd[1988]: time="2026-01-23T19:06:18.715402466Z" level=info msg="Ensure that sandbox e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e in task-service has been cleanup successfully"
Jan 23 19:06:18.721961 containerd[1988]: time="2026-01-23T19:06:18.721901296Z" level=info msg="RemovePodSandbox \"e9dd9020b4a174fd7211ceff4e2c7c3bd1e53a205a7ceb7df74417a49a0e611e\" returns successfully"
Jan 23 19:06:20.150925 kubelet[3338]: E0123 19:06:20.150794 3338 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40116->127.0.0.1:46227: write tcp 127.0.0.1:40116->127.0.0.1:46227: write: connection reset by peer
Jan 23 19:06:20.429390 sshd[5393]: Connection closed by 68.220.241.50 port 39354
Jan 23 19:06:20.431192 sshd-session[5345]: pam_unix(sshd:session): session closed for user core
Jan 23 19:06:20.435951 systemd-logind[1959]: Session 25 logged out. Waiting for processes to exit.
Jan 23 19:06:20.436436 systemd[1]: sshd@24-172.31.18.6:22-68.220.241.50:39354.service: Deactivated successfully.
Jan 23 19:06:20.438496 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 19:06:20.440730 systemd-logind[1959]: Removed session 25.
Jan 23 19:06:34.545544 systemd[1]: cri-containerd-e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca.scope: Deactivated successfully.
Jan 23 19:06:34.546239 systemd[1]: cri-containerd-e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca.scope: Consumed 4.099s CPU time, 86.7M memory peak, 40.4M read from disk.
Jan 23 19:06:34.547612 containerd[1988]: time="2026-01-23T19:06:34.547478197Z" level=info msg="received container exit event container_id:\"e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca\" id:\"e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca\" pid:3181 exit_status:1 exited_at:{seconds:1769195194 nanos:547035636}"
Jan 23 19:06:34.580730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca-rootfs.mount: Deactivated successfully.
Jan 23 19:06:35.537483 kubelet[3338]: I0123 19:06:35.537365 3338 scope.go:117] "RemoveContainer" containerID="e02eb9880f773fdd86dcc2aea79465c478850930856081a153307936cc1bf6ca"
Jan 23 19:06:35.541198 containerd[1988]: time="2026-01-23T19:06:35.541157693Z" level=info msg="CreateContainer within sandbox \"ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 19:06:35.558116 containerd[1988]: time="2026-01-23T19:06:35.557715803Z" level=info msg="Container 7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:06:35.562004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806587724.mount: Deactivated successfully.
Jan 23 19:06:35.574157 containerd[1988]: time="2026-01-23T19:06:35.574083501Z" level=info msg="CreateContainer within sandbox \"ebf332a63a0a9807e7366081f10ed4467f95532fa7beb5ef26c126e41cc34c9f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e\""
Jan 23 19:06:35.574899 containerd[1988]: time="2026-01-23T19:06:35.574828362Z" level=info msg="StartContainer for \"7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e\""
Jan 23 19:06:35.576520 containerd[1988]: time="2026-01-23T19:06:35.576456711Z" level=info msg="connecting to shim 7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e" address="unix:///run/containerd/s/e7be4b74e0dac4875d1f7c13d2f664f3b2a0f63649d071920d50671414848f87" protocol=ttrpc version=3
Jan 23 19:06:35.603603 systemd[1]: Started cri-containerd-7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e.scope - libcontainer container 7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e.
Jan 23 19:06:35.667435 containerd[1988]: time="2026-01-23T19:06:35.667398934Z" level=info msg="StartContainer for \"7355f208897828a3edfd434830c7471542460a7f0feae87856d843974ccc872e\" returns successfully"
Jan 23 19:06:40.439581 systemd[1]: cri-containerd-fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a.scope: Deactivated successfully.
Jan 23 19:06:40.440592 systemd[1]: cri-containerd-fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a.scope: Consumed 2.388s CPU time, 29.8M memory peak, 12.1M read from disk.
Jan 23 19:06:40.443782 containerd[1988]: time="2026-01-23T19:06:40.443735259Z" level=info msg="received container exit event container_id:\"fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a\" id:\"fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a\" pid:3162 exit_status:1 exited_at:{seconds:1769195200 nanos:442744873}"
Jan 23 19:06:40.476632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a-rootfs.mount: Deactivated successfully.
Jan 23 19:06:40.557777 kubelet[3338]: I0123 19:06:40.557733 3338 scope.go:117] "RemoveContainer" containerID="fe3244c103ba3b8e561fcb45ee98aedeaeae6d62b3e5baf9750bc790ccca5b0a"
Jan 23 19:06:40.560485 containerd[1988]: time="2026-01-23T19:06:40.560423631Z" level=info msg="CreateContainer within sandbox \"da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 19:06:40.581549 containerd[1988]: time="2026-01-23T19:06:40.581508671Z" level=info msg="Container bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:06:40.585931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998034579.mount: Deactivated successfully.
Jan 23 19:06:40.594795 containerd[1988]: time="2026-01-23T19:06:40.594745007Z" level=info msg="CreateContainer within sandbox \"da5bd2f500408d2e14458e5a9a189d84d99a9c4c93b4d37d7f065381a63929ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463\""
Jan 23 19:06:40.595779 containerd[1988]: time="2026-01-23T19:06:40.595695174Z" level=info msg="StartContainer for \"bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463\""
Jan 23 19:06:40.597054 containerd[1988]: time="2026-01-23T19:06:40.597008166Z" level=info msg="connecting to shim bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463" address="unix:///run/containerd/s/272a3786f5f00134c7dcdee3852659789c1fac20ab17210fa5cd71f3dff2b7f6" protocol=ttrpc version=3
Jan 23 19:06:40.634432 systemd[1]: Started cri-containerd-bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463.scope - libcontainer container bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463.
Jan 23 19:06:40.703496 containerd[1988]: time="2026-01-23T19:06:40.703304211Z" level=info msg="StartContainer for \"bd8525fb7121f3115fc336b964d4e9e8ca06b3d458b50888b11c123f6f7f1463\" returns successfully"
Jan 23 19:06:41.565887 kubelet[3338]: E0123 19:06:41.565848 3338 request.go:1360] "Unexpected error when reading response body" err="context deadline exceeded"
Jan 23 19:06:41.568266 kubelet[3338]: E0123 19:06:41.568058 3338 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded"
Jan 23 19:06:51.569487 kubelet[3338]: E0123 19:06:51.569113 3338 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-6?timeout=10s\": context deadline exceeded"