Dec 16 13:13:14.892127 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:13:14.892170 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:14.892190 kernel: BIOS-provided physical RAM map:
Dec 16 13:13:14.892203 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:13:14.892215 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 13:13:14.892227 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:13:14.892242 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:13:14.892256 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:13:14.892270 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:13:14.892282 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:13:14.892296 kernel: NX (Execute Disable) protection: active
Dec 16 13:13:14.892312 kernel: APIC: Static calls initialized
Dec 16 13:13:14.892324 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 13:13:14.892338 kernel: extended physical RAM map:
Dec 16 13:13:14.892354 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:13:14.892368 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 13:13:14.892386 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 13:13:14.892399 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 13:13:14.892413 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:13:14.892428 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:13:14.892442 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:13:14.892457 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:13:14.892471 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:13:14.892486 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:13:14.892498 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 13:13:14.892509 kernel: secureboot: Secure boot disabled
Dec 16 13:13:14.892520 kernel: SMBIOS 2.7 present.
Dec 16 13:13:14.892536 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 13:13:14.892550 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:13:14.892563 kernel: Hypervisor detected: KVM
Dec 16 13:13:14.892577 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:13:14.892591 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:13:14.892606 kernel: kvm-clock: using sched offset of 5433966555 cycles
Dec 16 13:13:14.892621 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:13:14.892636 kernel: tsc: Detected 2500.004 MHz processor
Dec 16 13:13:14.892651 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:13:14.892665 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:13:14.892683 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:13:14.892697 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:13:14.892712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:13:14.892732 kernel: Using GB pages for direct mapping
Dec 16 13:13:14.892748 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:13:14.892764 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 13:13:14.892779 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 13:13:14.892799 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 13:13:14.892815 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 13:13:14.892830 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 13:13:14.892843 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 13:13:14.892856 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 13:13:14.892869 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 13:13:14.894964 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 13:13:14.894995 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 13:13:14.895015 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:13:14.895029 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:13:14.895042 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 13:13:14.895056 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 13:13:14.895070 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 13:13:14.895083 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 13:13:14.895096 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 13:13:14.895110 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 13:13:14.895127 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 13:13:14.895140 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 13:13:14.895154 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 13:13:14.895167 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 13:13:14.895180 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 13:13:14.895192 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 13:13:14.895208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 13:13:14.895222 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 13:13:14.895236 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 13:13:14.895250 kernel: Zone ranges:
Dec 16 13:13:14.895267 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:13:14.895281 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 13:13:14.895296 kernel: Normal empty
Dec 16 13:13:14.895310 kernel: Device empty
Dec 16 13:13:14.895323 kernel: Movable zone start for each node
Dec 16 13:13:14.895336 kernel: Early memory node ranges
Dec 16 13:13:14.895348 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:13:14.895360 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 13:13:14.895373 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 13:13:14.895388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 13:13:14.895413 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:13:14.895427 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:13:14.895441 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 13:13:14.895455 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 13:13:14.895469 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 13:13:14.895482 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:13:14.895494 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 13:13:14.895513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:13:14.895530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:13:14.895543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:13:14.895556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:13:14.895570 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:13:14.895584 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:13:14.895598 kernel: TSC deadline timer available
Dec 16 13:13:14.895611 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:13:14.895625 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:13:14.895639 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:13:14.895653 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:13:14.895670 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:13:14.895684 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:13:14.895699 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:13:14.895716 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:13:14.895731 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 13:13:14.895745 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:13:14.895760 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:13:14.895774 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:13:14.895790 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:13:14.895809 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:13:14.895821 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:13:14.895835 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:13:14.895850 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:13:14.895868 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:14.895883 kernel: random: crng init done
Dec 16 13:13:14.895898 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:13:14.895913 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:13:14.895979 kernel: Fallback order for Node 0: 0
Dec 16 13:13:14.895994 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 16 13:13:14.896008 kernel: Policy zone: DMA32
Dec 16 13:13:14.896032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:13:14.896049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:13:14.896066 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:13:14.896082 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:13:14.896096 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:13:14.896110 kernel: Dynamic Preempt: voluntary
Dec 16 13:13:14.896124 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:13:14.896140 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:13:14.896156 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:13:14.896175 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:13:14.896192 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:13:14.896207 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:13:14.896222 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:13:14.896236 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:13:14.896255 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:14.896270 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:14.896284 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:14.896299 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:13:14.896315 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:13:14.896330 kernel: Console: colour dummy device 80x25
Dec 16 13:13:14.896346 kernel: printk: legacy console [tty0] enabled
Dec 16 13:13:14.896361 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:13:14.896379 kernel: ACPI: Core revision 20240827
Dec 16 13:13:14.896395 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 13:13:14.896410 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:13:14.896425 kernel: x2apic enabled
Dec 16 13:13:14.896440 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:13:14.896456 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 16 13:13:14.896471 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Dec 16 13:13:14.896487 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 13:13:14.896503 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 13:13:14.896519 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:13:14.896539 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:13:14.896554 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:13:14.896570 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:13:14.896585 kernel: RETBleed: Vulnerable
Dec 16 13:13:14.896599 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:13:14.896614 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:13:14.896629 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:13:14.896644 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 13:13:14.896660 kernel: active return thunk: its_return_thunk
Dec 16 13:13:14.896675 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:13:14.896691 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:13:14.896709 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:13:14.896724 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:13:14.896738 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 13:13:14.896753 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 13:13:14.896768 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:13:14.896784 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:13:14.896800 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:13:14.896814 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:13:14.896828 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:13:14.896841 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 13:13:14.896860 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 13:13:14.896875 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 13:13:14.896889 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 13:13:14.896902 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 13:13:14.896915 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 13:13:14.900956 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 13:13:14.900993 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:13:14.901009 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:13:14.901024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:13:14.901038 kernel: landlock: Up and running.
Dec 16 13:13:14.901052 kernel: SELinux: Initializing.
Dec 16 13:13:14.901066 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:13:14.901085 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:13:14.901099 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 13:13:14.901114 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 13:13:14.901128 kernel: signal: max sigframe size: 3632
Dec 16 13:13:14.901142 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:13:14.901157 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:13:14.901172 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:13:14.901186 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:13:14.901200 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:13:14.901213 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:13:14.901230 kernel: .... node #0, CPUs: #1
Dec 16 13:13:14.901245 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 13:13:14.901260 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 13:13:14.901273 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:13:14.901287 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Dec 16 13:13:14.901301 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved)
Dec 16 13:13:14.901315 kernel: devtmpfs: initialized
Dec 16 13:13:14.901329 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:13:14.901346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 13:13:14.901360 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:13:14.901374 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:13:14.901388 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:13:14.901401 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:13:14.901415 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:13:14.901429 kernel: audit: type=2000 audit(1765890793.262:1): state=initialized audit_enabled=0 res=1
Dec 16 13:13:14.901442 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:13:14.901456 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:13:14.901473 kernel: cpuidle: using governor menu
Dec 16 13:13:14.901487 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:13:14.901500 kernel: dca service started, version 1.12.1
Dec 16 13:13:14.901514 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:13:14.901528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:13:14.901542 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:13:14.901556 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:13:14.901570 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:13:14.901584 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:13:14.901601 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:13:14.901615 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:13:14.901628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:13:14.901642 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 13:13:14.901656 kernel: ACPI: Interpreter enabled
Dec 16 13:13:14.901669 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:13:14.901682 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:13:14.901695 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:13:14.901708 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:13:14.901726 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 13:13:14.901742 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:13:14.902018 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:13:14.902162 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:13:14.902293 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:13:14.902312 kernel: acpiphp: Slot [3] registered
Dec 16 13:13:14.902329 kernel: acpiphp: Slot [4] registered
Dec 16 13:13:14.902349 kernel: acpiphp: Slot [5] registered
Dec 16 13:13:14.902364 kernel: acpiphp: Slot [6] registered
Dec 16 13:13:14.902380 kernel: acpiphp: Slot [7] registered
Dec 16 13:13:14.902397 kernel: acpiphp: Slot [8] registered
Dec 16 13:13:14.902412 kernel: acpiphp: Slot [9] registered
Dec 16 13:13:14.902428 kernel: acpiphp: Slot [10] registered
Dec 16 13:13:14.902444 kernel: acpiphp: Slot [11] registered
Dec 16 13:13:14.902460 kernel: acpiphp: Slot [12] registered
Dec 16 13:13:14.902476 kernel: acpiphp: Slot [13] registered
Dec 16 13:13:14.902495 kernel: acpiphp: Slot [14] registered
Dec 16 13:13:14.902511 kernel: acpiphp: Slot [15] registered
Dec 16 13:13:14.902526 kernel: acpiphp: Slot [16] registered
Dec 16 13:13:14.902542 kernel: acpiphp: Slot [17] registered
Dec 16 13:13:14.902558 kernel: acpiphp: Slot [18] registered
Dec 16 13:13:14.902573 kernel: acpiphp: Slot [19] registered
Dec 16 13:13:14.902589 kernel: acpiphp: Slot [20] registered
Dec 16 13:13:14.902605 kernel: acpiphp: Slot [21] registered
Dec 16 13:13:14.902621 kernel: acpiphp: Slot [22] registered
Dec 16 13:13:14.902636 kernel: acpiphp: Slot [23] registered
Dec 16 13:13:14.902655 kernel: acpiphp: Slot [24] registered
Dec 16 13:13:14.902671 kernel: acpiphp: Slot [25] registered
Dec 16 13:13:14.902687 kernel: acpiphp: Slot [26] registered
Dec 16 13:13:14.902703 kernel: acpiphp: Slot [27] registered
Dec 16 13:13:14.902719 kernel: acpiphp: Slot [28] registered
Dec 16 13:13:14.902734 kernel: acpiphp: Slot [29] registered
Dec 16 13:13:14.902750 kernel: acpiphp: Slot [30] registered
Dec 16 13:13:14.902766 kernel: acpiphp: Slot [31] registered
Dec 16 13:13:14.902782 kernel: PCI host bridge to bus 0000:00
Dec 16 13:13:14.902916 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:13:14.905163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:13:14.905305 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:13:14.905424 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 13:13:14.905551 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:13:14.905668 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:13:14.905852 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:13:14.906037 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:13:14.906193 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 13:13:14.906329 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 13:13:14.906467 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 13:13:14.906606 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 13:13:14.906747 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 13:13:14.906892 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 13:13:14.913161 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 13:13:14.913326 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 13:13:14.913467 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:13:14.913596 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 13:13:14.913726 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:13:14.913855 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:13:14.914023 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:13:14.914155 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 13:13:14.914289 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:13:14.914416 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 13:13:14.914434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:13:14.914448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:13:14.914464 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:13:14.914484 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:13:14.914500 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:13:14.914515 kernel: iommu: Default domain type: Translated
Dec 16 13:13:14.914531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:13:14.914546 kernel: efivars: Registered efivars operations
Dec 16 13:13:14.914562 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:13:14.914577 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:13:14.914593 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 13:13:14.914607 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 13:13:14.914625 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 13:13:14.914782 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 13:13:14.914917 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 13:13:14.915105 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:13:14.915124 kernel: vgaarb: loaded
Dec 16 13:13:14.915138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 13:13:14.915150 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 13:13:14.915163 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:13:14.915176 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:13:14.915194 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:13:14.915208 kernel: pnp: PnP ACPI init
Dec 16 13:13:14.915220 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:13:14.915234 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:13:14.915247 kernel: NET: Registered PF_INET protocol family
Dec 16 13:13:14.915260 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:13:14.915276 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:13:14.915289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:13:14.915303 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:13:14.915321 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:13:14.915335 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:13:14.915351 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:13:14.915364 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:13:14.915376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:13:14.915390 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:13:14.915538 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:13:14.915655 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:13:14.915775 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:13:14.915899 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 13:13:14.916036 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:13:14.916176 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:13:14.916195 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:13:14.916209 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:13:14.916225 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 16 13:13:14.916240 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:13:14.916254 kernel: Initialise system trusted keyrings
Dec 16 13:13:14.916272 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 13:13:14.916285 kernel: Key type asymmetric registered
Dec 16 13:13:14.916298 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:13:14.916312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:13:14.916327 kernel: io scheduler mq-deadline registered
Dec 16 13:13:14.916342 kernel: io scheduler kyber registered
Dec 16 13:13:14.916358 kernel: io scheduler bfq registered
Dec 16 13:13:14.916374 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:13:14.916389 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:13:14.916407 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:13:14.916422 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:13:14.916437 kernel: i8042: Warning: Keylock active
Dec 16 13:13:14.916452 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:13:14.916467 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:13:14.916629 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 13:13:14.916769 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 13:13:14.916911 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:13:14 UTC (1765890794)
Dec 16 13:13:14.921553 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 13:13:14.921619 kernel: intel_pstate: CPU model not supported
Dec 16 13:13:14.921641 kernel: efifb: probing for efifb
Dec 16 13:13:14.921659 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 13:13:14.921677 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 13:13:14.921695 kernel: efifb: scrolling: redraw
Dec 16 13:13:14.921713 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:13:14.921731 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 13:13:14.921752 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:13:14.921770 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:13:14.921788 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:13:14.921806 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:13:14.921823 kernel: Segment Routing with IPv6
Dec 16 13:13:14.921838 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:13:14.921855 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:13:14.921872 kernel: Key type dns_resolver registered
Dec 16 13:13:14.921887 kernel: IPI shorthand broadcast: enabled
Dec 16 13:13:14.921907 kernel: sched_clock: Marking stable (2639001647, 145401406)->(2853758903, -69355850)
Dec 16 13:13:14.921923 kernel: registered taskstats version 1
Dec 16 13:13:14.921962 kernel: Loading compiled-in X.509 certificates
Dec 16 13:13:14.921977 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:13:14.921991 kernel: Demotion targets for Node 0: null
Dec 16 13:13:14.922006 kernel: Key type .fscrypt registered
Dec 16 13:13:14.922021 kernel: Key type fscrypt-provisioning registered
Dec 16 13:13:14.922034 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:13:14.922048 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:13:14.922064 kernel: ima: No architecture policies found
Dec 16 13:13:14.922083 kernel: clk: Disabling unused clocks
Dec 16 13:13:14.922099 kernel: Warning: unable to open an initial console.
Dec 16 13:13:14.922114 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:13:14.922130 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:13:14.922149 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:13:14.922167 kernel: Run /init as init process
Dec 16 13:13:14.922183 kernel: with arguments:
Dec 16 13:13:14.922199 kernel: /init
Dec 16 13:13:14.922215 kernel: with environment:
Dec 16 13:13:14.922232 kernel: HOME=/
Dec 16 13:13:14.922249 kernel: TERM=linux
Dec 16 13:13:14.922269 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:13:14.922292 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:13:14.922315 systemd[1]: Detected virtualization amazon.
Dec 16 13:13:14.922333 systemd[1]: Detected architecture x86-64.
Dec 16 13:13:14.922349 systemd[1]: Running in initrd.
Dec 16 13:13:14.922365 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:13:14.922383 systemd[1]: Hostname set to .
Dec 16 13:13:14.922401 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:13:14.922419 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:13:14.922438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:14.922459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:14.922478 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:13:14.922500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:13:14.922518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:13:14.922538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:13:14.922555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:13:14.922573 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:13:14.922590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:14.922606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:14.922622 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:13:14.922638 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:13:14.922654 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:13:14.922670 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:13:14.922686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:13:14.922702 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:13:14.922721 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:13:14.922740 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:13:14.922759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:14.922778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:14.922796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:14.922814 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:13:14.922831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:13:14.922849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:13:14.922868 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:13:14.922885 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:13:14.922901 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:13:14.922917 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:13:14.922952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:14.922968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:14.922984 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:13:14.923005 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:14.923063 systemd-journald[188]: Collecting audit messages is disabled.
Dec 16 13:13:14.923102 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:13:14.923119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:13:14.923135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:14.923153 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:13:14.923171 systemd-journald[188]: Journal started
Dec 16 13:13:14.923205 systemd-journald[188]: Runtime Journal (/run/log/journal/ec23b971469665c13c8bdda7039309f0) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:13:14.925989 systemd-modules-load[189]: Inserted module 'overlay'
Dec 16 13:13:14.929628 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:13:14.946628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:13:14.955648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:13:14.965151 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:13:14.968101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:13:14.980960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:13:14.978793 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:13:14.978960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:13:14.989056 kernel: Bridge firewalling registered
Dec 16 13:13:14.987674 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 16 13:13:14.991189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:14.993037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:14.998195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:15.006028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:15.010980 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:13:15.017661 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:15.027155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:15.031138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:15.091813 systemd-resolved[247]: Positive Trust Anchors:
Dec 16 13:13:15.092875 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:13:15.092959 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:13:15.100490 systemd-resolved[247]: Defaulting to hostname 'linux'.
Dec 16 13:13:15.103239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:15.104023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:15.125963 kernel: SCSI subsystem initialized
Dec 16 13:13:15.135966 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:13:15.146959 kernel: iscsi: registered transport (tcp)
Dec 16 13:13:15.168128 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:13:15.168210 kernel: QLogic iSCSI HBA Driver
Dec 16 13:13:15.187545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:13:15.210243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:15.213025 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:13:15.260668 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:13:15.262786 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:13:15.317967 kernel: raid6: avx512x4 gen() 17515 MB/s
Dec 16 13:13:15.335956 kernel: raid6: avx512x2 gen() 18011 MB/s
Dec 16 13:13:15.353958 kernel: raid6: avx512x1 gen() 18010 MB/s
Dec 16 13:13:15.371954 kernel: raid6: avx2x4 gen() 17953 MB/s
Dec 16 13:13:15.389955 kernel: raid6: avx2x2 gen() 18005 MB/s
Dec 16 13:13:15.408209 kernel: raid6: avx2x1 gen() 13270 MB/s
Dec 16 13:13:15.408272 kernel: raid6: using algorithm avx512x2 gen() 18011 MB/s
Dec 16 13:13:15.427159 kernel: raid6: .... xor() 23711 MB/s, rmw enabled
Dec 16 13:13:15.427228 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 13:13:15.448975 kernel: xor: automatically using best checksumming function avx
Dec 16 13:13:15.618966 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:13:15.626244 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:13:15.628494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:15.657681 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 16 13:13:15.664635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:15.668741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:13:15.694491 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Dec 16 13:13:15.722413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:13:15.724449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:13:15.788795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:15.794757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:13:15.884389 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 16 13:13:15.884649 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 16 13:13:15.891970 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 13:13:15.892270 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 13:13:15.898974 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 16 13:13:15.910231 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 13:13:15.913957 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:13:15.919959 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:26:7e:e5:8c:5b
Dec 16 13:13:15.920245 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:13:15.922266 kernel: GPT:9289727 != 33554431
Dec 16 13:13:15.924703 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:13:15.924754 kernel: GPT:9289727 != 33554431
Dec 16 13:13:15.925755 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:13:15.929462 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:13:15.937901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:15.939029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:15.941440 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:15.944279 (udev-worker)[483]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:13:15.949164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:15.953181 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:13:15.952631 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:15.961481 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:13:15.977601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:15.980000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:15.994904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:16.020972 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:13:16.037670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:16.123658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 16 13:13:16.125401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 16 13:13:16.127212 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:13:16.145153 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 16 13:13:16.148879 disk-uuid[665]: Primary Header is updated.
Dec 16 13:13:16.148879 disk-uuid[665]: Secondary Entries is updated.
Dec 16 13:13:16.148879 disk-uuid[665]: Secondary Header is updated.
Dec 16 13:13:16.159953 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:13:16.160541 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:13:16.171959 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:13:16.177900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 13:13:16.181409 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:13:16.184026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:16.185806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:13:16.189097 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:13:16.224375 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:13:16.451472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 16 13:13:17.178703 disk-uuid[666]: The operation has completed successfully.
Dec 16 13:13:17.179769 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:13:17.299594 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:13:17.299727 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:13:17.328241 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:13:17.349560 sh[937]: Success
Dec 16 13:13:17.370136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:13:17.370217 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:13:17.370911 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:13:17.383092 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 16 13:13:17.461126 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:13:17.466060 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:13:17.481303 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:13:17.501013 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (960)
Dec 16 13:13:17.504108 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:13:17.504181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:17.616590 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:13:17.616659 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:13:17.616674 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:13:17.628546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:13:17.629737 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:13:17.630416 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:13:17.631558 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:13:17.635097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:13:17.670954 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (993)
Dec 16 13:13:17.677568 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:17.677644 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:17.684707 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:13:17.684799 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:13:17.691995 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:17.694460 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:13:17.698101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:13:17.750201 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:13:17.752772 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:13:17.805824 systemd-networkd[1129]: lo: Link UP
Dec 16 13:13:17.805839 systemd-networkd[1129]: lo: Gained carrier
Dec 16 13:13:17.807643 systemd-networkd[1129]: Enumeration completed
Dec 16 13:13:17.807777 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:13:17.808586 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:17.808592 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:13:17.813982 systemd[1]: Reached target network.target - Network.
Dec 16 13:13:17.815252 systemd-networkd[1129]: eth0: Link UP
Dec 16 13:13:17.815259 systemd-networkd[1129]: eth0: Gained carrier
Dec 16 13:13:17.815280 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:17.828040 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.24.237/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 13:13:18.182623 ignition[1064]: Ignition 2.22.0
Dec 16 13:13:18.182644 ignition[1064]: Stage: fetch-offline
Dec 16 13:13:18.182873 ignition[1064]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:18.182886 ignition[1064]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:18.183207 ignition[1064]: Ignition finished successfully
Dec 16 13:13:18.185376 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:13:18.187818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:13:18.218665 ignition[1138]: Ignition 2.22.0
Dec 16 13:13:18.218683 ignition[1138]: Stage: fetch
Dec 16 13:13:18.219060 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:18.219074 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:18.219196 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:18.227657 ignition[1138]: PUT result: OK
Dec 16 13:13:18.229692 ignition[1138]: parsed url from cmdline: ""
Dec 16 13:13:18.229706 ignition[1138]: no config URL provided
Dec 16 13:13:18.229715 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:13:18.229728 ignition[1138]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:13:18.229746 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:18.230321 ignition[1138]: PUT result: OK
Dec 16 13:13:18.230364 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 13:13:18.231092 ignition[1138]: GET result: OK
Dec 16 13:13:18.231159 ignition[1138]: parsing config with SHA512: 97d3df22e011b739aa1f6c68ad3eb50f77a2c3250d893103b4481bfc0c01d1ea77546764d8bab3b39a2d27e7f4e19a7737c0be7dd9b9ac75c58c9857b5c7f86a
Dec 16 13:13:18.267200 unknown[1138]: fetched base config from "system"
Dec 16 13:13:18.267635 ignition[1138]: fetch: fetch complete
Dec 16 13:13:18.267220 unknown[1138]: fetched base config from "system"
Dec 16 13:13:18.267641 ignition[1138]: fetch: fetch passed
Dec 16 13:13:18.267232 unknown[1138]: fetched user config from "aws"
Dec 16 13:13:18.267688 ignition[1138]: Ignition finished successfully
Dec 16 13:13:18.286748 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:13:18.289857 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:13:18.330967 ignition[1145]: Ignition 2.22.0
Dec 16 13:13:18.330982 ignition[1145]: Stage: kargs
Dec 16 13:13:18.331357 ignition[1145]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:18.331453 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:18.331565 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:18.332389 ignition[1145]: PUT result: OK
Dec 16 13:13:18.335141 ignition[1145]: kargs: kargs passed
Dec 16 13:13:18.335218 ignition[1145]: Ignition finished successfully
Dec 16 13:13:18.337118 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:13:18.339058 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:13:18.374153 ignition[1152]: Ignition 2.22.0
Dec 16 13:13:18.374167 ignition[1152]: Stage: disks
Dec 16 13:13:18.374556 ignition[1152]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:18.374569 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:18.374689 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:18.375682 ignition[1152]: PUT result: OK
Dec 16 13:13:18.378422 ignition[1152]: disks: disks passed
Dec 16 13:13:18.378497 ignition[1152]: Ignition finished successfully
Dec 16 13:13:18.380620 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:13:18.381289 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:13:18.381680 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:13:18.382231 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:13:18.382781 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:13:18.383511 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:13:18.385159 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:13:18.437968 systemd-fsck[1160]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:13:18.441304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:13:18.443949 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:13:18.671975 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:13:18.672860 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:13:18.673795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:13:18.676065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:13:18.679021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:13:18.680334 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:13:18.680381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:13:18.681750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:13:18.692396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:13:18.694650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:13:18.708018 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1179)
Dec 16 13:13:18.708082 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:18.710214 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:18.718878 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:13:18.718965 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:13:18.720846 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:13:19.177858 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:13:19.197222 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:13:19.202329 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:13:19.207051 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:13:19.365100 systemd-networkd[1129]: eth0: Gained IPv6LL
Dec 16 13:13:19.446436 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:13:19.448620 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:13:19.451094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:13:19.466786 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:13:19.470409 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:19.493210 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:13:19.508790 ignition[1292]: INFO : Ignition 2.22.0
Dec 16 13:13:19.508790 ignition[1292]: INFO : Stage: mount
Dec 16 13:13:19.510338 ignition[1292]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:19.510338 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:19.510338 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:19.510338 ignition[1292]: INFO : PUT result: OK
Dec 16 13:13:19.512735 ignition[1292]: INFO : mount: mount passed
Dec 16 13:13:19.513247 ignition[1292]: INFO : Ignition finished successfully
Dec 16 13:13:19.514891 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:13:19.516682 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:13:19.674135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:13:19.702010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1305)
Dec 16 13:13:19.709864 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:19.709968 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:19.716215 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:13:19.716287 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:13:19.719523 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:13:19.757212 ignition[1322]: INFO : Ignition 2.22.0
Dec 16 13:13:19.757212 ignition[1322]: INFO : Stage: files
Dec 16 13:13:19.758733 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:19.758733 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:19.758733 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:19.760218 ignition[1322]: INFO : PUT result: OK
Dec 16 13:13:19.761955 ignition[1322]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:13:19.763132 ignition[1322]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:13:19.763132 ignition[1322]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:13:19.778820 ignition[1322]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:13:19.779993 ignition[1322]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:13:19.779993 ignition[1322]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:13:19.779518 unknown[1322]: wrote ssh authorized keys file for user: core
Dec 16 13:13:19.782539 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:13:19.782539 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 13:13:19.842298 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:13:19.961069 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:13:19.961069 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:13:19.961069 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:13:20.012838 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:13:20.109144 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:20.110424 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:20.117118 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:20.117882 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:20.117882 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:13:20.122046 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:13:20.122046 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:13:20.124124 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 13:13:20.499074 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:13:20.713615 ignition[1322]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:13:20.713615 ignition[1322]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:13:20.726228 ignition[1322]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:20.733835 ignition[1322]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:20.733835 ignition[1322]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:13:20.733835 ignition[1322]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:20.737651 ignition[1322]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:20.737651 ignition[1322]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:20.737651 ignition[1322]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:20.737651 ignition[1322]: INFO : files: files passed
Dec 16 13:13:20.737651 ignition[1322]: INFO : Ignition finished successfully
Dec 16 13:13:20.738274 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:13:20.742683 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:13:20.745786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:13:20.755951 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:13:20.756082 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:13:20.765362 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:20.765362 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:20.769004 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:20.769632 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:20.770683 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:13:20.772662 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:13:20.828753 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:13:20.828945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:13:20.830193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
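Every entry in this transcript follows the same journald console layout: a microsecond timestamp, an identifier with the emitting PID in brackets, and the message (Ignition entries additionally carry a severity field inside the message). When grepping through logs like this, a small parser sketch can split entries into fields; this is a throwaway illustration assuming that fixed layout, not a reimplementation of any journald tooling:

```python
import re

# Matches e.g. "Dec 16 13:13:19.757212 ignition[1322]: INFO : Stage: files"
# Groups: timestamp, identifier, PID, and the remaining message text.
ENTRY = re.compile(
    r'^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) '
    r'(?P<ident>[\w./()-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$'
)

def parse_entry(line: str) -> dict:
    """Split one console-format journal line into its fields (hypothetical helper)."""
    m = ENTRY.match(line)
    if m is None:
        raise ValueError(f"not a recognized journal line: {line!r}")
    return m.groupdict()

entry = parse_entry(
    'Dec 16 13:13:20.738274 systemd[1]: Finished ignition-files.service - Ignition (files).'
)
```

With fields extracted this way, it becomes easy to, say, filter the Ignition `op(N)` entries or sort interleaved entries by timestamp.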
Dec 16 13:13:20.831306 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:13:20.832273 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:13:20.833490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:13:20.876567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:20.878742 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:13:20.901422 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:20.902161 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:20.903197 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:13:20.904187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:13:20.904417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:20.905532 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:13:20.906423 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:13:20.907215 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:13:20.908133 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:13:20.908898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:13:20.909661 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:13:20.910483 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:13:20.911288 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:13:20.912179 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:13:20.913221 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:13:20.914072 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:13:20.914808 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:13:20.915070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:13:20.916261 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:20.917115 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:20.917778 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:13:20.917940 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:20.918613 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:13:20.918832 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:13:20.919889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:13:20.920103 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:20.920817 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:13:20.921044 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:13:20.924075 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:13:20.927329 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:13:20.928045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:13:20.928304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:20.931284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:13:20.931523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:13:20.938446 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:13:20.941020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:13:20.965588 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:13:20.971738 ignition[1375]: INFO : Ignition 2.22.0
Dec 16 13:13:20.971738 ignition[1375]: INFO : Stage: umount
Dec 16 13:13:20.973471 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:20.973471 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:20.973471 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:20.972867 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:13:20.976336 ignition[1375]: INFO : PUT result: OK
Dec 16 13:13:20.973270 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:13:20.977537 ignition[1375]: INFO : umount: umount passed
Dec 16 13:13:20.979055 ignition[1375]: INFO : Ignition finished successfully
Dec 16 13:13:20.979540 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:13:20.979744 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:13:20.981024 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:13:20.981096 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:13:20.981561 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:13:20.981624 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:13:20.982219 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:13:20.982279 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:13:20.982865 systemd[1]: Stopped target network.target - Network.
Dec 16 13:13:20.983592 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:13:20.983659 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:13:20.984282 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:13:20.984848 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:13:20.988023 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:20.988402 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:13:20.989355 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:13:20.989999 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:13:20.990057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:13:20.990635 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:13:20.990684 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:13:20.991275 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:13:20.991359 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:13:20.991958 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:13:20.992020 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:13:20.992617 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:13:20.992681 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:13:20.993430 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:13:20.994062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:20.998734 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:13:20.998889 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:21.002460 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:13:21.002764 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:13:21.002918 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:13:21.005433 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:13:21.006314 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:13:21.007210 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:13:21.007270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:21.009110 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:13:21.009638 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:13:21.009710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:13:21.010341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:13:21.010400 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:21.011058 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:13:21.011115 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:21.012044 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:13:21.012102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:21.013070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:21.020645 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:13:21.020744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:21.029467 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:13:21.029676 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:21.031425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:13:21.031527 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:21.032683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:13:21.032736 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:21.034782 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:13:21.034860 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:13:21.036386 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:13:21.036453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:13:21.037511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:13:21.037586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:13:21.041106 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:13:21.041742 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:13:21.041857 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:21.043701 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:13:21.043772 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:21.044621 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:13:21.044681 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:13:21.045583 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:13:21.045640 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:21.046794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:21.046852 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:21.050503 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:13:21.050575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:13:21.050625 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:13:21.050678 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:21.051182 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:13:21.051310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:13:21.064303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:13:21.064450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:13:21.065922 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:13:21.067639 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:13:21.086966 systemd[1]: Switching root.
Dec 16 13:13:21.134219 systemd-journald[188]: Journal stopped
Dec 16 13:13:23.188691 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
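Unit names such as `run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount` above use systemd's unit-name escaping: a literal `-` inside a path component is written as `\x2d`, since unescaped dashes separate path elements in mount unit names. A minimal decoding sketch, assuming only `\xNN` escapes occur (the real tool for this is `systemd-escape --unescape`, which also handles `/` mapping):

```python
import re

def unescape_unit(name: str) -> str:
    """Decode systemd-style \\xNN escapes in a unit name (illustrative sketch only)."""
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)),  # e.g. \x2d -> '-'
                  name)

decoded = unescape_unit(r'run-credentials-systemd\x2dresolved.service.mount')
```

So the mount units being cleaned up here correspond to credential directories like `run/credentials/systemd-resolved.service`.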
Dec 16 13:13:23.188786 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:13:23.188825 kernel: SELinux: policy capability open_perms=1
Dec 16 13:13:23.188846 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:13:23.188866 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:13:23.188893 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:13:23.188914 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:13:23.188964 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:13:23.188983 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:13:23.189002 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:13:23.189022 kernel: audit: type=1403 audit(1765890801.831:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:13:23.189044 systemd[1]: Successfully loaded SELinux policy in 88.877ms.
Dec 16 13:13:23.189086 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.635ms.
Dec 16 13:13:23.189109 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:13:23.189132 systemd[1]: Detected virtualization amazon.
Dec 16 13:13:23.189155 systemd[1]: Detected architecture x86-64.
Dec 16 13:13:23.189181 systemd[1]: Detected first boot.
Dec 16 13:13:23.189203 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:13:23.189226 zram_generator::config[1418]: No configuration found.
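The systemd banner above encodes its compile-time options as a list of `+`/`-` prefixed tokens (`+SELINUX` means built with SELinux support, `-APPARMOR` means built without AppArmor). A quick sketch for splitting such a string into enabled and disabled feature sets, using the exact banner from this boot:

```python
# Feature string copied verbatim from the systemd 256.8 banner in this log.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(s: str):
    """Return (enabled, disabled) sets of compile-time feature names."""
    tokens = s.split()
    enabled = {t[1:] for t in tokens if t.startswith('+')}
    disabled = {t[1:] for t in tokens if t.startswith('-')}
    return enabled, disabled

enabled, disabled = split_features(FEATURES)
```

This matches what the log shows elsewhere: SELinux is active (policy loaded in 88.877 ms) while AppArmor support was compiled out.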
Dec 16 13:13:23.189250 kernel: Guest personality initialized and is inactive
Dec 16 13:13:23.189270 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:13:23.189291 kernel: Initialized host personality
Dec 16 13:13:23.189313 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:13:23.189335 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:13:23.189358 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:13:23.189385 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:13:23.189407 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:13:23.189430 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:23.189453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:13:23.189476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:13:23.189497 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:13:23.189519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:13:23.189545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:13:23.189577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:13:23.189599 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:13:23.189621 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:13:23.189643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:23.189666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:23.189687 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:13:23.189715 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:13:23.189737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:13:23.189762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:13:23.189784 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:13:23.189804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:23.189823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:23.189843 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:13:23.189865 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:13:23.189885 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:13:23.189907 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:13:23.189950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:23.189973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:13:23.189990 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:13:23.190008 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:13:23.190025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:13:23.190042 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:13:23.190060 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:13:23.190079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:23.190101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:23.190120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:23.190145 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:13:23.190166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:13:23.190183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:13:23.190203 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:13:23.190223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:23.190243 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:13:23.190263 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:13:23.190282 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:13:23.190303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:13:23.190326 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:13:23.190346 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:13:23.190366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:23.190387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:13:23.190409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:13:23.190429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:13:23.190448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:13:23.190469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:13:23.190492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:13:23.190512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:13:23.190533 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:13:23.190553 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:13:23.190573 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:13:23.190593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:13:23.190613 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:13:23.190636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:23.190660 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:13:23.190680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:23.190704 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:13:23.190724 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:13:23.190744 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:13:23.190769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:13:23.190789 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:13:23.190810 systemd[1]: Stopped verity-setup.service.
Dec 16 13:13:23.190831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:23.190854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:13:23.190875 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:13:23.190897 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:13:23.190917 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:13:23.190951 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:13:23.190972 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:13:23.190993 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:23.191013 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:13:23.191034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:13:23.191054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:13:23.191077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:13:23.191097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:13:23.191118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:13:23.191139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:23.191160 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:13:23.191181 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:13:23.191202 kernel: loop: module loaded
Dec 16 13:13:23.191222 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:13:23.191243 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:13:23.191267 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:13:23.191288 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:13:23.191311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:13:23.191337 kernel: fuse: init (API version 7.41)
Dec 16 13:13:23.191398 systemd-journald[1501]: Collecting audit messages is disabled.
Dec 16 13:13:23.191433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:23.191454 systemd-journald[1501]: Journal started
Dec 16 13:13:23.191495 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec23b971469665c13c8bdda7039309f0) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:13:22.815304 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:13:22.828343 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:13:22.828854 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:13:23.204753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:13:23.204850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:13:23.221658 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:13:23.229967 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:13:23.243955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:13:23.251663 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:13:23.255148 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:13:23.256031 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:13:23.257670 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:13:23.262138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:13:23.263305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:23.266449 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:13:23.269218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:13:23.295016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:13:23.311969 kernel: loop0: detected capacity change from 0 to 128560
Dec 16 13:13:23.304637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:13:23.315062 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:13:23.319059 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:13:23.332469 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:13:23.334419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:13:23.340293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:23.346139 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:13:23.361682 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:13:23.366685 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Dec 16 13:13:23.366714 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Dec 16 13:13:23.381036 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:13:23.387192 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:13:23.433457 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:13:23.451730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:23.463111 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec23b971469665c13c8bdda7039309f0 is 64.035ms for 1033 entries. Dec 16 13:13:23.463111 systemd-journald[1501]: System Journal (/var/log/journal/ec23b971469665c13c8bdda7039309f0) is 8M, max 195.6M, 187.6M free. Dec 16 13:13:23.555099 systemd-journald[1501]: Received client request to flush runtime journal. Dec 16 13:13:23.555156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:13:23.555194 kernel: ACPI: bus type drm_connector registered Dec 16 13:13:23.555218 kernel: loop1: detected capacity change from 0 to 110984 Dec 16 13:13:23.492013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:13:23.499249 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:13:23.499549 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:13:23.560168 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:13:23.561494 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:13:23.566118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:13:23.599169 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Dec 16 13:13:23.599551 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Dec 16 13:13:23.605447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:13:23.650960 kernel: loop2: detected capacity change from 0 to 72368 Dec 16 13:13:23.763978 kernel: loop3: detected capacity change from 0 to 224512 Dec 16 13:13:23.830845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 16 13:13:23.907995 kernel: loop4: detected capacity change from 0 to 128560 Dec 16 13:13:23.929971 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 13:13:23.954956 kernel: loop6: detected capacity change from 0 to 72368 Dec 16 13:13:23.975972 kernel: loop7: detected capacity change from 0 to 224512 Dec 16 13:13:23.998125 (sd-merge)[1579]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 16 13:13:24.000986 (sd-merge)[1579]: Merged extensions into '/usr'. Dec 16 13:13:24.008164 systemd[1]: Reload requested from client PID 1525 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:13:24.008187 systemd[1]: Reloading... Dec 16 13:13:24.104954 zram_generator::config[1604]: No configuration found. Dec 16 13:13:24.422534 systemd[1]: Reloading finished in 413 ms. Dec 16 13:13:24.438505 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:13:24.439546 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:13:24.450763 systemd[1]: Starting ensure-sysext.service... Dec 16 13:13:24.455103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:13:24.461104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:13:24.488997 systemd[1]: Reload requested from client PID 1657 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:13:24.489155 systemd[1]: Reloading... Dec 16 13:13:24.491525 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:13:24.492029 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:13:24.492433 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 16 13:13:24.493250 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:13:24.496497 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:13:24.496957 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Dec 16 13:13:24.497050 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Dec 16 13:13:24.509164 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:13:24.509179 systemd-tmpfiles[1658]: Skipping /boot Dec 16 13:13:24.523685 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:13:24.523707 systemd-tmpfiles[1658]: Skipping /boot Dec 16 13:13:24.594528 systemd-udevd[1659]: Using default interface naming scheme 'v255'. Dec 16 13:13:24.600958 zram_generator::config[1683]: No configuration found. Dec 16 13:13:24.971294 (udev-worker)[1732]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:13:25.067769 ldconfig[1521]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:13:25.101968 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 13:13:25.104963 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:13:25.106952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:13:25.119224 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:13:25.124842 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:13:25.125651 systemd[1]: Reloading finished in 635 ms. 
Dec 16 13:13:25.129570 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 16 13:13:25.129642 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 13:13:25.138909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:13:25.142561 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:13:25.143734 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:13:25.185812 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:13:25.190316 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:13:25.194193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:13:25.199190 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:13:25.210528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:13:25.225168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:13:25.235752 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:13:25.240332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.240649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:13:25.243786 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:13:25.251202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:13:25.261875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:13:25.264490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 13:13:25.264814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:25.265251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.270868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.271511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:13:25.271751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:13:25.271872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:25.272324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.296035 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:13:25.299009 systemd[1]: Finished ensure-sysext.service. Dec 16 13:13:25.300809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:13:25.302016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:13:25.307277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.308362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 13:13:25.316151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:13:25.317615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:13:25.317682 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:25.317757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:13:25.317817 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:13:25.319083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:25.326587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:13:25.330192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:13:25.351838 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:13:25.359774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:13:25.362005 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:13:25.366119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:13:25.367262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:13:25.373517 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:13:25.374803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:13:25.396455 augenrules[1872]: No rules Dec 16 13:13:25.397440 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 13:13:25.398365 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:13:25.400791 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:13:25.461219 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:13:25.462684 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:13:25.495617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:13:25.499230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:13:25.565018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:13:25.594302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:25.604723 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:13:25.614340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:13:25.616173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:25.624910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:25.786751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:25.834728 systemd-networkd[1790]: lo: Link UP Dec 16 13:13:25.835100 systemd-networkd[1790]: lo: Gained carrier Dec 16 13:13:25.837105 systemd-networkd[1790]: Enumeration completed Dec 16 13:13:25.837364 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:13:25.838603 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:13:25.840075 systemd-networkd[1790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:13:25.841812 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:13:25.845091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:13:25.846543 systemd-resolved[1793]: Positive Trust Anchors: Dec 16 13:13:25.846557 systemd-resolved[1793]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:13:25.846625 systemd-resolved[1793]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:13:25.847557 systemd-networkd[1790]: eth0: Link UP Dec 16 13:13:25.849247 systemd-networkd[1790]: eth0: Gained carrier Dec 16 13:13:25.849286 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:13:25.857858 systemd-resolved[1793]: Defaulting to hostname 'linux'. Dec 16 13:13:25.859029 systemd-networkd[1790]: eth0: DHCPv4 address 172.31.24.237/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:13:25.859696 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:13:25.860357 systemd[1]: Reached target network.target - Network. Dec 16 13:13:25.860805 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 16 13:13:25.862145 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:13:25.862771 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:13:25.863364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:13:25.863883 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:13:25.864577 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:13:25.865317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:13:25.865865 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:13:25.866401 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:13:25.866437 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:13:25.866982 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:13:25.870172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:13:25.873046 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:13:25.876578 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:13:25.877361 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:13:25.877997 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:13:25.890751 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:13:25.892272 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:13:25.893799 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 13:13:25.894439 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:13:25.896348 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:13:25.896768 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:13:25.897230 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:13:25.897264 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:13:25.898486 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:13:25.901205 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:13:25.903075 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:13:25.907118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:13:25.911041 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:13:25.913956 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:13:25.914365 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:13:25.916180 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:13:25.919156 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:13:25.923158 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:13:25.927626 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:13:25.933599 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 13:13:25.939766 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:13:25.942167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 13:13:25.953290 jq[1945]: false Dec 16 13:13:25.966895 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:13:25.969820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:13:25.971119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:13:25.972628 oslogin_cache_refresh[1947]: Refreshing passwd entry cache Dec 16 13:13:25.973288 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing passwd entry cache Dec 16 13:13:25.978084 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:13:25.988467 extend-filesystems[1946]: Found /dev/nvme0n1p6 Dec 16 13:13:25.985346 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:13:25.989752 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:13:25.991313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:13:25.991491 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:13:26.000396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:13:26.013082 update_engine[1962]: I20251216 13:13:26.009332 1962 main.cc:92] Flatcar Update Engine starting Dec 16 13:13:26.005565 oslogin_cache_refresh[1947]: Failure getting users, quitting Dec 16 13:13:26.013400 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting users, quitting Dec 16 13:13:26.013400 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 13:13:26.013400 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing group entry cache Dec 16 13:13:26.013400 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting groups, quitting Dec 16 13:13:26.013400 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:13:26.001210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:13:26.005584 oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:13:26.003591 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:13:26.005632 oslogin_cache_refresh[1947]: Refreshing group entry cache Dec 16 13:13:26.004995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:13:26.007550 oslogin_cache_refresh[1947]: Failure getting groups, quitting Dec 16 13:13:26.009835 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:13:26.007564 oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:13:26.011121 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Dec 16 13:13:26.019885 extend-filesystems[1946]: Found /dev/nvme0n1p9 Dec 16 13:13:26.038998 extend-filesystems[1946]: Checking size of /dev/nvme0n1p9 Dec 16 13:13:26.052741 jq[1964]: true Dec 16 13:13:26.076281 jq[1988]: true Dec 16 13:13:26.084229 (ntainerd)[1987]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:13:26.085852 systemd-logind[1954]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:13:26.085871 systemd-logind[1954]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 13:13:26.085888 systemd-logind[1954]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:13:26.087654 systemd-logind[1954]: New seat seat0. Dec 16 13:13:26.088626 ntpd[1949]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: ---------------------------------------------------- Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: corporation. 
Support and training for ntp-4 are Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: available at https://www.nwtime.org/support Dec 16 13:13:26.089624 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: ---------------------------------------------------- Dec 16 13:13:26.088685 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:26.088693 ntpd[1949]: ---------------------------------------------------- Dec 16 13:13:26.088699 ntpd[1949]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:26.088705 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:26.088712 ntpd[1949]: corporation. Support and training for ntp-4 are Dec 16 13:13:26.088719 ntpd[1949]: available at https://www.nwtime.org/support Dec 16 13:13:26.088726 ntpd[1949]: ---------------------------------------------------- Dec 16 13:13:26.095900 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:13:26.100425 ntpd[1949]: proto: precision = 0.056 usec (-24) Dec 16 13:13:26.101249 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: proto: precision = 0.056 usec (-24) Dec 16 13:13:26.119235 kernel: ntpd[1949]: segfault at 24 ip 000055b3e942aaeb sp 00007ffc51029b00 error 4 in ntpd[68aeb,55b3e93c8000+80000] likely on CPU 1 (core 0, socket 0) Dec 16 13:13:26.120529 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: basedate set to 2025-11-30 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Listen normally on 2 
lo 127.0.0.1:123 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Listen normally on 3 eth0 172.31.24.237:123 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:13:26.120587 ntpd[1949]: 16 Dec 13:13:26 ntpd[1949]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123 Dec 16 13:13:26.116877 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 13:13:26.109128 ntpd[1949]: basedate set to 2025-11-30 Dec 16 13:13:26.120890 coreos-metadata[1942]: Dec 16 13:13:26.117 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:13:26.120890 coreos-metadata[1942]: Dec 16 13:13:26.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 13:13:26.109145 ntpd[1949]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:26.121149 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 13:13:26.110526 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:26.110553 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:26.110718 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:26.110738 ntpd[1949]: Listen normally on 3 eth0 172.31.24.237:123 Dec 16 13:13:26.110758 ntpd[1949]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:26.110783 ntpd[1949]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:13:26.110799 ntpd[1949]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123 Dec 16 13:13:26.120987 dbus-daemon[1943]: [system] SELinux support is enabled Dec 16 13:13:26.121794 coreos-metadata[1942]: Dec 16 13:13:26.121 INFO Fetch successful Dec 16 13:13:26.121794 coreos-metadata[1942]: Dec 16 13:13:26.121 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 13:13:26.123394 coreos-metadata[1942]: Dec 16 13:13:26.123 INFO Fetch successful Dec 16 13:13:26.123394 coreos-metadata[1942]: Dec 16 13:13:26.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 13:13:26.124148 coreos-metadata[1942]: Dec 16 13:13:26.124 INFO Fetch successful Dec 16 13:13:26.124148 coreos-metadata[1942]: Dec 16 13:13:26.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 13:13:26.124157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:13:26.124182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:13:26.124671 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 16 13:13:26.125694 coreos-metadata[1942]: Dec 16 13:13:26.125 INFO Fetch successful Dec 16 13:13:26.125694 coreos-metadata[1942]: Dec 16 13:13:26.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 13:13:26.124696 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:13:26.126291 coreos-metadata[1942]: Dec 16 13:13:26.126 INFO Fetch failed with 404: resource not found Dec 16 13:13:26.126291 coreos-metadata[1942]: Dec 16 13:13:26.126 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 13:13:26.127045 coreos-metadata[1942]: Dec 16 13:13:26.126 INFO Fetch successful Dec 16 13:13:26.127045 coreos-metadata[1942]: Dec 16 13:13:26.126 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 13:13:26.127803 coreos-metadata[1942]: Dec 16 13:13:26.127 INFO Fetch successful Dec 16 13:13:26.127803 coreos-metadata[1942]: Dec 16 13:13:26.127 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 13:13:26.130021 coreos-metadata[1942]: Dec 16 13:13:26.129 INFO Fetch successful Dec 16 13:13:26.130021 coreos-metadata[1942]: Dec 16 13:13:26.129 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 13:13:26.130783 coreos-metadata[1942]: Dec 16 13:13:26.130 INFO Fetch successful Dec 16 13:13:26.130783 coreos-metadata[1942]: Dec 16 13:13:26.130 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 13:13:26.131870 coreos-metadata[1942]: Dec 16 13:13:26.131 INFO Fetch successful Dec 16 13:13:26.131786 systemd-coredump[2003]: Process 1949 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 13:13:26.133764 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. 
Dec 16 13:13:26.134122 extend-filesystems[1946]: Resized partition /dev/nvme0n1p9 Dec 16 13:13:26.140722 tar[1970]: linux-amd64/LICENSE Dec 16 13:13:26.140722 tar[1970]: linux-amd64/helm Dec 16 13:13:26.140133 systemd[1]: Started systemd-coredump@0-2003-0.service - Process Core Dump (PID 2003/UID 0). Dec 16 13:13:26.143299 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1790 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:13:26.153150 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:13:26.156919 extend-filesystems[2021]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:13:26.162948 update_engine[1962]: I20251216 13:13:26.158286 1962 update_check_scheduler.cc:74] Next update check in 3m38s Dec 16 13:13:26.158514 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:13:26.174446 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 16 13:13:26.184771 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:13:26.230407 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:13:26.231230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:13:26.280080 bash[2018]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:13:26.280676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:13:26.285788 systemd[1]: Starting sshkeys.service... Dec 16 13:13:26.321735 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 13:13:26.328607 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 16 13:13:26.325437 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:13:26.340208 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:13:26.350251 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:13:26.364604 extend-filesystems[2021]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:13:26.364604 extend-filesystems[2021]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 13:13:26.364604 extend-filesystems[2021]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 16 13:13:26.370298 extend-filesystems[1946]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:13:26.371237 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:13:26.373581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:13:26.383798 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:13:26.392202 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:13:26.480203 systemd-coredump[2016]: Process 1949 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1949: #0 0x000055b3e942aaeb n/a (ntpd + 0x68aeb) #1 0x000055b3e93d3cdf n/a (ntpd + 0x11cdf) #2 0x000055b3e93d4575 n/a (ntpd + 0x12575) #3 0x000055b3e93cfd8a n/a (ntpd + 0xdd8a) #4 0x000055b3e93d15d3 n/a (ntpd + 0xf5d3) #5 0x000055b3e93d9fd1 n/a (ntpd + 0x17fd1) #6 0x000055b3e93cac2d n/a (ntpd + 0x8c2d) #7 0x00007ff4c194a16c n/a (libc.so.6 + 0x2716c) #8 0x00007ff4c194a229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055b3e93cac55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Dec 16 13:13:26.484587 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:13:26.484788 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:13:26.500366 systemd[1]: systemd-coredump@0-2003-0.service: Deactivated successfully.
Dec 16 13:13:26.593381 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:26.603771 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:13:26.668983 coreos-metadata[2073]: Dec 16 13:13:26.658 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 16 13:13:26.668983 coreos-metadata[2073]: Dec 16 13:13:26.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 16 13:13:26.676388 coreos-metadata[2073]: Dec 16 13:13:26.676 INFO Fetch successful
Dec 16 13:13:26.676388 coreos-metadata[2073]: Dec 16 13:13:26.676 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 13:13:26.683969 coreos-metadata[2073]: Dec 16 13:13:26.680 INFO Fetch successful
Dec 16 13:13:26.687611 unknown[2073]: wrote ssh authorized keys file for user: core
Dec 16 13:13:26.726066 ntpd[2127]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: ----------------------------------------------------
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: corporation. Support and training for ntp-4 are
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: available at https://www.nwtime.org/support
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: ----------------------------------------------------
Dec 16 13:13:26.727230 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: proto: precision = 0.063 usec (-24)
Dec 16 13:13:26.726144 ntpd[2127]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:26.730363 kernel: ntpd[2127]: segfault at 24 ip 000055bfba274aeb sp 00007ffed9fcd6e0 error 4 in ntpd[68aeb,55bfba212000+80000] likely on CPU 1 (core 0, socket 0)
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: basedate set to 2025-11-30
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Listen normally on 3 eth0 172.31.24.237:123
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:26.730450 ntpd[2127]: 16 Dec 13:13:26 ntpd[2127]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123
Dec 16 13:13:26.726155 ntpd[2127]: ----------------------------------------------------
Dec 16 13:13:26.726165 ntpd[2127]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:26.726174 ntpd[2127]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:26.726183 ntpd[2127]: corporation. Support and training for ntp-4 are
Dec 16 13:13:26.734537 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:13:26.726193 ntpd[2127]: available at https://www.nwtime.org/support
Dec 16 13:13:26.726202 ntpd[2127]: ----------------------------------------------------
Dec 16 13:13:26.726991 ntpd[2127]: proto: precision = 0.063 usec (-24)
Dec 16 13:13:26.727260 ntpd[2127]: basedate set to 2025-11-30
Dec 16 13:13:26.727274 ntpd[2127]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:26.727383 ntpd[2127]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:26.727412 ntpd[2127]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:26.727608 ntpd[2127]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:26.727635 ntpd[2127]: Listen normally on 3 eth0 172.31.24.237:123
Dec 16 13:13:26.727661 ntpd[2127]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:26.727691 ntpd[2127]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:26.727711 ntpd[2127]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123
Dec 16 13:13:26.765975 update-ssh-keys[2144]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:13:26.762221 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:13:26.763370 systemd-coredump[2150]: Process 2127 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:13:26.774005 systemd[1]: Finished sshkeys.service. Dec 16 13:13:26.801380 systemd[1]: Started systemd-coredump@1-2150-0.service - Process Core Dump (PID 2150/UID 0). Dec 16 13:13:26.848081 locksmithd[2023]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:13:26.956848 containerd[1987]: time="2025-12-16T13:13:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:13:26.962156 containerd[1987]: time="2025-12-16T13:13:26.962100998Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:13:26.987403 sshd_keygen[1996]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:13:27.023169 containerd[1987]: time="2025-12-16T13:13:27.023103183Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.077µs" Dec 16 13:13:27.023169 containerd[1987]: time="2025-12-16T13:13:27.023154070Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:13:27.023322 containerd[1987]: time="2025-12-16T13:13:27.023179282Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:13:27.023399 containerd[1987]: time="2025-12-16T13:13:27.023375495Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:13:27.023444 containerd[1987]: time="2025-12-16T13:13:27.023409148Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:13:27.023482 containerd[1987]: time="2025-12-16T13:13:27.023444530Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:13:27.023544 containerd[1987]: 
time="2025-12-16T13:13:27.023524488Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:13:27.023587 containerd[1987]: time="2025-12-16T13:13:27.023546292Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:13:27.023874 containerd[1987]: time="2025-12-16T13:13:27.023834040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:13:27.023874 containerd[1987]: time="2025-12-16T13:13:27.023867273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:13:27.024003 containerd[1987]: time="2025-12-16T13:13:27.023884077Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:13:27.024003 containerd[1987]: time="2025-12-16T13:13:27.023895646Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:13:27.026951 containerd[1987]: time="2025-12-16T13:13:27.026083415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:13:27.026951 containerd[1987]: time="2025-12-16T13:13:27.026364262Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:13:27.026951 containerd[1987]: time="2025-12-16T13:13:27.026407307Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 
13:13:27.026951 containerd[1987]: time="2025-12-16T13:13:27.026424773Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:13:27.029189 containerd[1987]: time="2025-12-16T13:13:27.029147129Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:13:27.029513 containerd[1987]: time="2025-12-16T13:13:27.029487994Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:13:27.029622 containerd[1987]: time="2025-12-16T13:13:27.029599562Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:13:27.035684 polkitd[2069]: Started polkitd version 126 Dec 16 13:13:27.037589 containerd[1987]: time="2025-12-16T13:13:27.037541582Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:13:27.037945 containerd[1987]: time="2025-12-16T13:13:27.037904693Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:13:27.038005 containerd[1987]: time="2025-12-16T13:13:27.037987667Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:13:27.038058 containerd[1987]: time="2025-12-16T13:13:27.038012759Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:13:27.038058 containerd[1987]: time="2025-12-16T13:13:27.038036593Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:13:27.038058 containerd[1987]: time="2025-12-16T13:13:27.038053125Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:13:27.038161 containerd[1987]: time="2025-12-16T13:13:27.038073707Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 
16 13:13:27.038161 containerd[1987]: time="2025-12-16T13:13:27.038091313Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:13:27.038161 containerd[1987]: time="2025-12-16T13:13:27.038108867Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:13:27.038161 containerd[1987]: time="2025-12-16T13:13:27.038150892Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:13:27.038293 containerd[1987]: time="2025-12-16T13:13:27.038166457Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:13:27.038293 containerd[1987]: time="2025-12-16T13:13:27.038193594Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:13:27.038379 containerd[1987]: time="2025-12-16T13:13:27.038349444Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:13:27.038422 containerd[1987]: time="2025-12-16T13:13:27.038386195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:13:27.038422 containerd[1987]: time="2025-12-16T13:13:27.038409728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:13:27.038496 containerd[1987]: time="2025-12-16T13:13:27.038434970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:13:27.038496 containerd[1987]: time="2025-12-16T13:13:27.038452362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:13:27.038496 containerd[1987]: time="2025-12-16T13:13:27.038468144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:13:27.038496 containerd[1987]: 
time="2025-12-16T13:13:27.038485088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:13:27.038638 containerd[1987]: time="2025-12-16T13:13:27.038499842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:13:27.038638 containerd[1987]: time="2025-12-16T13:13:27.038517313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:13:27.038638 containerd[1987]: time="2025-12-16T13:13:27.038533467Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:13:27.038638 containerd[1987]: time="2025-12-16T13:13:27.038548941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:13:27.038638 containerd[1987]: time="2025-12-16T13:13:27.038618732Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:13:27.038804 containerd[1987]: time="2025-12-16T13:13:27.038642494Z" level=info msg="Start snapshots syncer" Dec 16 13:13:27.039194 containerd[1987]: time="2025-12-16T13:13:27.039163758Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:13:27.040947 containerd[1987]: time="2025-12-16T13:13:27.040014574Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:13:27.040947 containerd[1987]: time="2025-12-16T13:13:27.040100084Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:13:27.041242 containerd[1987]: time="2025-12-16T13:13:27.041215310Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:13:27.041424 containerd[1987]: time="2025-12-16T13:13:27.041398251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.041975764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042010041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042033292Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042051612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042070528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042087316Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042122743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042139168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042155664Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042762738Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042938473Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:13:27.042954 containerd[1987]: time="2025-12-16T13:13:27.042958437Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.042976389Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.042988433Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043009575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043032895Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043053644Z" level=info msg="runtime interface created" Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043061468Z" level=info msg="created NRI interface" Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043073662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043094735Z" level=info msg="Connect containerd service" Dec 16 13:13:27.043419 containerd[1987]: time="2025-12-16T13:13:27.043134203Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:13:27.046948 
containerd[1987]: time="2025-12-16T13:13:27.046390214Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:13:27.050840 polkitd[2069]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:13:27.057791 polkitd[2069]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:13:27.057874 polkitd[2069]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:13:27.059993 polkitd[2069]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:13:27.060055 polkitd[2069]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:13:27.060105 polkitd[2069]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:13:27.062906 polkitd[2069]: Finished loading, compiling and executing 2 rules Dec 16 13:13:27.063394 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 13:13:27.069462 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:13:27.070440 polkitd[2069]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:13:27.077346 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:13:27.082754 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:13:27.119553 systemd-hostnamed[2022]: Hostname set to (transient) Dec 16 13:13:27.120246 systemd-resolved[1793]: System hostname changed to 'ip-172-31-24-237'. Dec 16 13:13:27.134459 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:13:27.134993 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Dec 16 13:13:27.140490 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:13:27.141502 systemd-coredump[2156]: Process 2127 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2127: #0 0x000055bfba274aeb n/a (ntpd + 0x68aeb) #1 0x000055bfba21dcdf n/a (ntpd + 0x11cdf) #2 0x000055bfba21e575 n/a (ntpd + 0x12575) #3 0x000055bfba219d8a n/a (ntpd + 0xdd8a) #4 0x000055bfba21b5d3 n/a (ntpd + 0xf5d3) #5 0x000055bfba223fd1 n/a (ntpd + 0x17fd1) #6 0x000055bfba214c2d n/a (ntpd + 0x8c2d) #7 0x00007fb79360e16c n/a (libc.so.6 + 0x2716c) #8 0x00007fb79360e229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055bfba214c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 16 13:13:27.144748 systemd[1]: systemd-coredump@1-2150-0.service: Deactivated successfully. Dec 16 13:13:27.156149 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 13:13:27.156547 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 13:13:27.175392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:13:27.184177 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:13:27.189123 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:13:27.190354 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:13:27.313241 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Dec 16 13:13:27.317047 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 16 13:13:27.329287 tar[1970]: linux-amd64/README.md
Dec 16 13:13:27.351859 ntpd[2204]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:27.356855 kernel: ntpd[2204]: segfault at 24 ip 000055567f25eaeb sp 00007ffc4c43a3c0 error 4 in ntpd[68aeb,55567f1fc000+80000] likely on CPU 1 (core 0, socket 0)
Dec 16 13:13:27.356959 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: ----------------------------------------------------
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: corporation. Support and training for ntp-4 are
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: available at https://www.nwtime.org/support
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: ----------------------------------------------------
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: proto: precision = 0.098 usec (-23)
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: basedate set to 2025-11-30
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Listen normally on 3 eth0 172.31.24.237:123
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:27.356987 ntpd[2204]: 16 Dec 13:13:27 ntpd[2204]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123
Dec 16 13:13:27.352356 ntpd[2204]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:27.355526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:13:27.352367 ntpd[2204]: ----------------------------------------------------
Dec 16 13:13:27.352376 ntpd[2204]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:27.352386 ntpd[2204]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:27.352395 ntpd[2204]: corporation. Support and training for ntp-4 are
Dec 16 13:13:27.352406 ntpd[2204]: available at https://www.nwtime.org/support
Dec 16 13:13:27.352416 ntpd[2204]: ----------------------------------------------------
Dec 16 13:13:27.353157 ntpd[2204]: proto: precision = 0.098 usec (-23)
Dec 16 13:13:27.353420 ntpd[2204]: basedate set to 2025-11-30
Dec 16 13:13:27.353433 ntpd[2204]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:27.353526 ntpd[2204]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:27.353555 ntpd[2204]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:27.353747 ntpd[2204]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:27.353779 ntpd[2204]: Listen normally on 3 eth0 172.31.24.237:123
Dec 16 13:13:27.353808 ntpd[2204]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:27.353836 ntpd[2204]: bind(21) AF_INET6 [fe80::426:7eff:fee5:8c5b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:27.353854 ntpd[2204]: unable to create socket on eth0 (5) for [fe80::426:7eff:fee5:8c5b%2]:123
Dec 16 13:13:27.365070 systemd-networkd[1790]: eth0: Gained IPv6LL
Dec 16 13:13:27.371246 systemd-coredump[2209]: Process 2204 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:13:27.371498 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373404100Z" level=info msg="Start subscribing containerd event" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373467756Z" level=info msg="Start recovering state" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373587851Z" level=info msg="Start event monitor" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373607076Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373618483Z" level=info msg="Start streaming server" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373629216Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373638756Z" level=info msg="runtime interface starting up..." Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373648295Z" level=info msg="starting plugins..." Dec 16 13:13:27.373740 containerd[1987]: time="2025-12-16T13:13:27.373664193Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:13:27.376724 containerd[1987]: time="2025-12-16T13:13:27.376590780Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:13:27.377204 containerd[1987]: time="2025-12-16T13:13:27.376871657Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:13:27.378877 containerd[1987]: time="2025-12-16T13:13:27.377602868Z" level=info msg="containerd successfully booted in 0.421334s" Dec 16 13:13:27.378230 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:13:27.380421 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:13:27.384804 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 13:13:27.389705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:13:27.393978 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:13:27.401318 systemd[1]: Started systemd-coredump@2-2209-0.service - Process Core Dump (PID 2209/UID 0). Dec 16 13:13:27.435625 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:13:27.527643 systemd-coredump[2215]: Process 2204 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2204: #0 0x000055567f25eaeb n/a (ntpd + 0x68aeb) #1 0x000055567f207cdf n/a (ntpd + 0x11cdf) #2 0x000055567f208575 n/a (ntpd + 0x12575) #3 0x000055567f203d8a n/a (ntpd + 0xdd8a) #4 0x000055567f2055d3 n/a (ntpd + 0xf5d3) #5 0x000055567f20dfd1 n/a (ntpd + 0x17fd1) #6 0x000055567f1fec2d n/a (ntpd + 0x8c2d) #7 0x00007fe31ebff16c n/a (libc.so.6 + 0x2716c) #8 0x00007fe31ebff229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055567f1fec55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 16 13:13:27.532352 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 13:13:27.533238 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: Initializing new seelog logger Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: New Seelog Logger Creation Complete Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 processing appconfig overrides Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 processing appconfig overrides Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 processing appconfig overrides Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5573 INFO Proxy environment variables: Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.565794 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 processing appconfig overrides Dec 16 13:13:27.537147 systemd[1]: systemd-coredump@2-2209-0.service: Deactivated successfully. Dec 16 13:13:27.658245 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5573 INFO https_proxy: Dec 16 13:13:27.738015 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:27.738015 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 13:13:27.738015 amazon-ssm-agent[2212]: 2025/12/16 13:13:27 processing appconfig overrides Dec 16 13:13:27.756300 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5573 INFO http_proxy: Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5573 INFO no_proxy: Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5579 INFO Checking if agent identity type OnPrem can be assumed Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5580 INFO Checking if agent identity type EC2 can be assumed Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.5990 INFO Agent will take identity from EC2 Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6010 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6010 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6010 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6010 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6010 INFO [Registrar] Starting registrar module Dec 16 13:13:27.770113 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6022 INFO [EC2Identity] Checking disk for registration info Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6023 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6023 INFO [EC2Identity] Generating registration keypair Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6948 INFO [EC2Identity] Checking write access before registering Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.6953 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7373 INFO [EC2Identity] EC2 registration was successful. Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7373 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7374 INFO [CredentialRefresher] credentialRefresher has started Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7374 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7697 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 13:13:27.770514 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7700 INFO [CredentialRefresher] Credentials ready Dec 16 13:13:27.810363 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 3. Dec 16 13:13:27.812391 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 16 13:13:27.838900 ntpd[2241]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:27.839416 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:27.839601 ntpd[2241]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: ---------------------------------------------------- Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: corporation. Support and training for ntp-4 are Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: available at https://www.nwtime.org/support Dec 16 13:13:27.840174 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: ---------------------------------------------------- Dec 16 13:13:27.839616 ntpd[2241]: ---------------------------------------------------- Dec 16 13:13:27.839622 ntpd[2241]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:27.839629 ntpd[2241]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:27.839636 ntpd[2241]: corporation. 
Support and training for ntp-4 are Dec 16 13:13:27.839642 ntpd[2241]: available at https://www.nwtime.org/support Dec 16 13:13:27.839648 ntpd[2241]: ---------------------------------------------------- Dec 16 13:13:27.840970 ntpd[2241]: proto: precision = 0.056 usec (-24) Dec 16 13:13:27.841065 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: proto: precision = 0.056 usec (-24) Dec 16 13:13:27.841179 ntpd[2241]: basedate set to 2025-11-30 Dec 16 13:13:27.841296 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: basedate set to 2025-11-30 Dec 16 13:13:27.841296 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:27.841296 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:27.841296 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:27.841194 ntpd[2241]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:27.841261 ntpd[2241]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:27.841458 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:27.841458 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen normally on 3 eth0 172.31.24.237:123 Dec 16 13:13:27.841282 ntpd[2241]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:27.841537 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:27.841537 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listen normally on 5 eth0 [fe80::426:7eff:fee5:8c5b%2]:123 Dec 16 13:13:27.841537 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: Listening on routing socket on fd #22 for interface updates Dec 16 13:13:27.841419 ntpd[2241]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:27.841442 ntpd[2241]: Listen normally on 3 eth0 172.31.24.237:123 Dec 16 13:13:27.841462 ntpd[2241]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:27.841481 ntpd[2241]: Listen normally on 5 eth0 [fe80::426:7eff:fee5:8c5b%2]:123 Dec 16 13:13:27.841499 ntpd[2241]: Listening on routing socket on fd 
#22 for interface updates Dec 16 13:13:27.842743 ntpd[2241]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:13:27.843435 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:13:27.843435 ntpd[2241]: 16 Dec 13:13:27 ntpd[2241]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:13:27.842774 ntpd[2241]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:13:27.853502 amazon-ssm-agent[2212]: 2025-12-16 13:13:27.7702 INFO [CredentialRefresher] Next credential rotation will be in 29.99999231295 minutes Dec 16 13:13:28.205366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:13:28.207385 systemd[1]: Started sshd@0-172.31.24.237:22-139.178.68.195:42022.service - OpenSSH per-connection server daemon (139.178.68.195:42022). Dec 16 13:13:28.441370 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 42022 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:28.443894 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:28.452616 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:13:28.454794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:13:28.471377 systemd-logind[1954]: New session 1 of user core. Dec 16 13:13:28.485984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:13:28.490379 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:13:28.508169 (systemd)[2250]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:13:28.511806 systemd-logind[1954]: New session c1 of user core. Dec 16 13:13:28.694406 systemd[2250]: Queued start job for default target default.target. Dec 16 13:13:28.706254 systemd[2250]: Created slice app.slice - User Application Slice. 
Dec 16 13:13:28.706301 systemd[2250]: Reached target paths.target - Paths. Dec 16 13:13:28.706467 systemd[2250]: Reached target timers.target - Timers. Dec 16 13:13:28.707960 systemd[2250]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:13:28.721480 systemd[2250]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:13:28.721565 systemd[2250]: Reached target sockets.target - Sockets. Dec 16 13:13:28.721627 systemd[2250]: Reached target basic.target - Basic System. Dec 16 13:13:28.721683 systemd[2250]: Reached target default.target - Main User Target. Dec 16 13:13:28.721727 systemd[2250]: Startup finished in 199ms. Dec 16 13:13:28.721861 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:13:28.731535 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:13:28.784900 amazon-ssm-agent[2212]: 2025-12-16 13:13:28.7846 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 13:13:28.886964 amazon-ssm-agent[2212]: 2025-12-16 13:13:28.7889 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Dec 16 13:13:28.891175 systemd[1]: Started sshd@1-172.31.24.237:22-139.178.68.195:55932.service - OpenSSH per-connection server daemon (139.178.68.195:55932). Dec 16 13:13:28.986501 amazon-ssm-agent[2212]: 2025-12-16 13:13:28.7889 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 13:13:29.112899 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 55932 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:29.114365 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:29.120351 systemd-logind[1954]: New session 2 of user core. Dec 16 13:13:29.126164 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 13:13:29.246195 sshd[2278]: Connection closed by 139.178.68.195 port 55932 Dec 16 13:13:29.247048 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:29.251537 systemd[1]: sshd@1-172.31.24.237:22-139.178.68.195:55932.service: Deactivated successfully. Dec 16 13:13:29.253242 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:13:29.262892 systemd-logind[1954]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:13:29.264840 systemd-logind[1954]: Removed session 2. Dec 16 13:13:29.284845 systemd[1]: Started sshd@2-172.31.24.237:22-139.178.68.195:55942.service - OpenSSH per-connection server daemon (139.178.68.195:55942). Dec 16 13:13:29.473231 sshd[2284]: Accepted publickey for core from 139.178.68.195 port 55942 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:29.476639 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:29.483004 systemd-logind[1954]: New session 3 of user core. Dec 16 13:13:29.490168 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:13:29.611950 sshd[2287]: Connection closed by 139.178.68.195 port 55942 Dec 16 13:13:29.612732 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:29.617131 systemd[1]: sshd@2-172.31.24.237:22-139.178.68.195:55942.service: Deactivated successfully. Dec 16 13:13:29.619211 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:13:29.621496 systemd-logind[1954]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:13:29.623730 systemd-logind[1954]: Removed session 3. Dec 16 13:13:29.716014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:13:29.718571 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:13:29.720452 systemd[1]: Startup finished in 2.704s (kernel) + 7.160s (initrd) + 7.975s (userspace) = 17.841s. 
Dec 16 13:13:29.732037 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:13:30.904080 kubelet[2296]: E1216 13:13:30.904003 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:13:30.906811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:13:30.907037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:13:30.907757 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 266.6M memory peak. Dec 16 13:13:36.380794 systemd-resolved[1793]: Clock change detected. Flushing caches. Dec 16 13:13:41.197940 systemd[1]: Started sshd@3-172.31.24.237:22-139.178.68.195:53934.service - OpenSSH per-connection server daemon (139.178.68.195:53934). Dec 16 13:13:41.373704 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 53934 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:41.375145 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:41.381498 systemd-logind[1954]: New session 4 of user core. Dec 16 13:13:41.388067 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:13:41.508663 sshd[2312]: Connection closed by 139.178.68.195 port 53934 Dec 16 13:13:41.509536 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:41.513798 systemd[1]: sshd@3-172.31.24.237:22-139.178.68.195:53934.service: Deactivated successfully. Dec 16 13:13:41.515920 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:13:41.517013 systemd-logind[1954]: Session 4 logged out. Waiting for processes to exit. 
Dec 16 13:13:41.518362 systemd-logind[1954]: Removed session 4. Dec 16 13:13:41.545457 systemd[1]: Started sshd@4-172.31.24.237:22-139.178.68.195:53944.service - OpenSSH per-connection server daemon (139.178.68.195:53944). Dec 16 13:13:41.724020 sshd[2318]: Accepted publickey for core from 139.178.68.195 port 53944 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:41.725396 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:41.739592 systemd-logind[1954]: New session 5 of user core. Dec 16 13:13:41.761872 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:13:41.876286 sshd[2321]: Connection closed by 139.178.68.195 port 53944 Dec 16 13:13:41.876851 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:41.880605 systemd[1]: sshd@4-172.31.24.237:22-139.178.68.195:53944.service: Deactivated successfully. Dec 16 13:13:41.882377 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:13:41.883304 systemd-logind[1954]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:13:41.884702 systemd-logind[1954]: Removed session 5. Dec 16 13:13:41.909645 systemd[1]: Started sshd@5-172.31.24.237:22-139.178.68.195:53956.service - OpenSSH per-connection server daemon (139.178.68.195:53956). Dec 16 13:13:42.090578 sshd[2327]: Accepted publickey for core from 139.178.68.195 port 53956 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:42.093506 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:42.106038 systemd-logind[1954]: New session 6 of user core. Dec 16 13:13:42.110767 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 13:13:42.265309 sshd[2330]: Connection closed by 139.178.68.195 port 53956 Dec 16 13:13:42.266012 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:42.276624 systemd[1]: sshd@5-172.31.24.237:22-139.178.68.195:53956.service: Deactivated successfully. Dec 16 13:13:42.278842 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:13:42.282249 systemd-logind[1954]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:13:42.284098 systemd-logind[1954]: Removed session 6. Dec 16 13:13:42.303203 systemd[1]: Started sshd@6-172.31.24.237:22-139.178.68.195:53968.service - OpenSSH per-connection server daemon (139.178.68.195:53968). Dec 16 13:13:42.456607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:13:42.459311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:13:42.491263 sshd[2336]: Accepted publickey for core from 139.178.68.195 port 53968 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:42.493739 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:42.512382 systemd-logind[1954]: New session 7 of user core. Dec 16 13:13:42.534803 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:13:42.676598 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:13:42.676979 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:13:42.690936 sudo[2343]: pam_unix(sudo:session): session closed for user root Dec 16 13:13:42.715279 sshd[2342]: Connection closed by 139.178.68.195 port 53968 Dec 16 13:13:42.715965 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:42.721361 systemd[1]: sshd@6-172.31.24.237:22-139.178.68.195:53968.service: Deactivated successfully. Dec 16 13:13:42.721649 systemd-logind[1954]: Session 7 logged out. 
Waiting for processes to exit. Dec 16 13:13:42.725135 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:13:42.730167 systemd-logind[1954]: Removed session 7. Dec 16 13:13:42.744648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:13:42.750785 systemd[1]: Started sshd@7-172.31.24.237:22-139.178.68.195:53970.service - OpenSSH per-connection server daemon (139.178.68.195:53970). Dec 16 13:13:42.755015 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:13:42.805213 kubelet[2353]: E1216 13:13:42.805176 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:13:42.809314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:13:42.809500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:13:42.810079 systemd[1]: kubelet.service: Consumed 184ms CPU time, 110.7M memory peak. Dec 16 13:13:42.924539 sshd[2355]: Accepted publickey for core from 139.178.68.195 port 53970 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:42.926114 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:42.931591 systemd-logind[1954]: New session 8 of user core. Dec 16 13:13:42.938776 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:13:43.037451 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:13:43.037862 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:13:43.043313 sudo[2366]: pam_unix(sudo:session): session closed for user root Dec 16 13:13:43.048878 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:13:43.049239 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:13:43.059429 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:13:43.098351 augenrules[2388]: No rules Dec 16 13:13:43.098983 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:13:43.099189 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:13:43.100874 sudo[2365]: pam_unix(sudo:session): session closed for user root Dec 16 13:13:43.123916 sshd[2364]: Connection closed by 139.178.68.195 port 53970 Dec 16 13:13:43.124437 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:43.128386 systemd[1]: sshd@7-172.31.24.237:22-139.178.68.195:53970.service: Deactivated successfully. Dec 16 13:13:43.130162 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:13:43.131284 systemd-logind[1954]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:13:43.132614 systemd-logind[1954]: Removed session 8. Dec 16 13:13:43.157582 systemd[1]: Started sshd@8-172.31.24.237:22-139.178.68.195:53974.service - OpenSSH per-connection server daemon (139.178.68.195:53974). 
Dec 16 13:13:43.340219 sshd[2397]: Accepted publickey for core from 139.178.68.195 port 53974 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:43.341829 sshd-session[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:43.347730 systemd-logind[1954]: New session 9 of user core. Dec 16 13:13:43.353774 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:13:43.453250 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:13:43.453539 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:13:44.109510 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:13:44.121015 (dockerd)[2420]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:13:44.681157 dockerd[2420]: time="2025-12-16T13:13:44.680821794Z" level=info msg="Starting up" Dec 16 13:13:44.683537 dockerd[2420]: time="2025-12-16T13:13:44.683400111Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:13:44.695860 dockerd[2420]: time="2025-12-16T13:13:44.695703615Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:13:44.764409 dockerd[2420]: time="2025-12-16T13:13:44.764371682Z" level=info msg="Loading containers: start." Dec 16 13:13:44.787587 kernel: Initializing XFRM netlink socket Dec 16 13:13:45.048778 (udev-worker)[2441]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:13:45.109043 systemd-networkd[1790]: docker0: Link UP Dec 16 13:13:45.114867 dockerd[2420]: time="2025-12-16T13:13:45.114499901Z" level=info msg="Loading containers: done." 
Dec 16 13:13:45.142801 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2318205263-merged.mount: Deactivated successfully. Dec 16 13:13:45.146053 dockerd[2420]: time="2025-12-16T13:13:45.145989365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:13:45.146188 dockerd[2420]: time="2025-12-16T13:13:45.146119431Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:13:45.146363 dockerd[2420]: time="2025-12-16T13:13:45.146243706Z" level=info msg="Initializing buildkit" Dec 16 13:13:45.188003 dockerd[2420]: time="2025-12-16T13:13:45.187951319Z" level=info msg="Completed buildkit initialization" Dec 16 13:13:45.198186 dockerd[2420]: time="2025-12-16T13:13:45.198128877Z" level=info msg="Daemon has completed initialization" Dec 16 13:13:45.199614 dockerd[2420]: time="2025-12-16T13:13:45.198336208Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:13:45.198424 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:13:46.411227 containerd[1987]: time="2025-12-16T13:13:46.411073986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:13:46.978784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296763584.mount: Deactivated successfully. 
Dec 16 13:13:48.495669 containerd[1987]: time="2025-12-16T13:13:48.495614048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:48.496742 containerd[1987]: time="2025-12-16T13:13:48.496596354Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Dec 16 13:13:48.497987 containerd[1987]: time="2025-12-16T13:13:48.497947684Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:48.501507 containerd[1987]: time="2025-12-16T13:13:48.501451035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:48.502504 containerd[1987]: time="2025-12-16T13:13:48.502455827Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.091225146s" Dec 16 13:13:48.502504 containerd[1987]: time="2025-12-16T13:13:48.502498737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:13:48.503735 containerd[1987]: time="2025-12-16T13:13:48.503504298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:13:49.994850 containerd[1987]: time="2025-12-16T13:13:49.994784745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:49.996020 containerd[1987]: time="2025-12-16T13:13:49.995682463Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Dec 16 13:13:49.997061 containerd[1987]: time="2025-12-16T13:13:49.997035161Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:49.999605 containerd[1987]: time="2025-12-16T13:13:49.999570540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:50.000887 containerd[1987]: time="2025-12-16T13:13:50.000845826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.497285288s" Dec 16 13:13:50.000992 containerd[1987]: time="2025-12-16T13:13:50.000898850Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:13:50.001700 containerd[1987]: time="2025-12-16T13:13:50.001671432Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:13:51.260267 containerd[1987]: time="2025-12-16T13:13:51.259841411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:51.270072 containerd[1987]: 
time="2025-12-16T13:13:51.270022930Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Dec 16 13:13:51.271234 containerd[1987]: time="2025-12-16T13:13:51.270757315Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:51.274260 containerd[1987]: time="2025-12-16T13:13:51.274211307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:51.274992 containerd[1987]: time="2025-12-16T13:13:51.274935242Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.273128295s" Dec 16 13:13:51.274992 containerd[1987]: time="2025-12-16T13:13:51.274979653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:13:51.275535 containerd[1987]: time="2025-12-16T13:13:51.275499025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:13:52.250122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224565899.mount: Deactivated successfully. 
Dec 16 13:13:52.820803 containerd[1987]: time="2025-12-16T13:13:52.820753282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:52.821946 containerd[1987]: time="2025-12-16T13:13:52.821907534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Dec 16 13:13:52.823478 containerd[1987]: time="2025-12-16T13:13:52.823425662Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:52.829441 containerd[1987]: time="2025-12-16T13:13:52.828819233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:52.829441 containerd[1987]: time="2025-12-16T13:13:52.829269617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.552997836s" Dec 16 13:13:52.829441 containerd[1987]: time="2025-12-16T13:13:52.829299217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:13:52.829916 containerd[1987]: time="2025-12-16T13:13:52.829891699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:13:52.850865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:13:52.852833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:13:53.170896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:13:53.178981 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:13:53.227387 kubelet[2710]: E1216 13:13:53.227297 2710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:13:53.230955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:13:53.231136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:13:53.231713 systemd[1]: kubelet.service: Consumed 180ms CPU time, 108.3M memory peak. Dec 16 13:13:53.337235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238591315.mount: Deactivated successfully. 
Dec 16 13:13:54.361245 containerd[1987]: time="2025-12-16T13:13:54.361170861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:54.363344 containerd[1987]: time="2025-12-16T13:13:54.363294319Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Dec 16 13:13:54.366681 containerd[1987]: time="2025-12-16T13:13:54.366614245Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:54.371504 containerd[1987]: time="2025-12-16T13:13:54.371428770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:54.374155 containerd[1987]: time="2025-12-16T13:13:54.373552391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.543606763s" Dec 16 13:13:54.374155 containerd[1987]: time="2025-12-16T13:13:54.373602677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:13:54.374468 containerd[1987]: time="2025-12-16T13:13:54.374187528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:13:54.857052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974019694.mount: Deactivated successfully. 
Dec 16 13:13:54.863393 containerd[1987]: time="2025-12-16T13:13:54.863269061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:54.864282 containerd[1987]: time="2025-12-16T13:13:54.864178616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:13:54.866374 containerd[1987]: time="2025-12-16T13:13:54.865396305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:54.867477 containerd[1987]: time="2025-12-16T13:13:54.867444542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:54.868437 containerd[1987]: time="2025-12-16T13:13:54.868387544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.156787ms" Dec 16 13:13:54.868437 containerd[1987]: time="2025-12-16T13:13:54.868420680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:13:54.869033 containerd[1987]: time="2025-12-16T13:13:54.869012096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:13:55.448670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473408463.mount: 
Deactivated successfully. Dec 16 13:13:57.993267 containerd[1987]: time="2025-12-16T13:13:57.993188525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:57.994556 containerd[1987]: time="2025-12-16T13:13:57.994385145Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Dec 16 13:13:57.996092 containerd[1987]: time="2025-12-16T13:13:57.996036385Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:57.999345 containerd[1987]: time="2025-12-16T13:13:57.999309457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:58.000604 containerd[1987]: time="2025-12-16T13:13:58.000118792Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.131074777s" Dec 16 13:13:58.000604 containerd[1987]: time="2025-12-16T13:13:58.000160625Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:13:58.669399 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:14:01.285870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:01.286184 systemd[1]: kubelet.service: Consumed 180ms CPU time, 108.3M memory peak. Dec 16 13:14:01.299310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:14:01.375187 systemd[1]: Reload requested from client PID 2858 ('systemctl') (unit session-9.scope)... Dec 16 13:14:01.375212 systemd[1]: Reloading... Dec 16 13:14:02.230555 zram_generator::config[2902]: No configuration found. Dec 16 13:14:03.210556 systemd[1]: Reloading finished in 1833 ms. Dec 16 13:14:03.350483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:14:03.361339 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:14:03.361792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:03.361867 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.1M memory peak. Dec 16 13:14:03.367798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:04.040879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:04.062978 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:14:04.244550 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:14:04.244550 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:14:04.244550 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:14:04.245105 kubelet[2966]: I1216 13:14:04.244683 2966 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:14:04.802103 kubelet[2966]: I1216 13:14:04.802053 2966 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:14:04.802103 kubelet[2966]: I1216 13:14:04.802089 2966 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:14:04.802499 kubelet[2966]: I1216 13:14:04.802474 2966 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:14:04.888819 kubelet[2966]: E1216 13:14:04.888734 2966 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:04.889951 kubelet[2966]: I1216 13:14:04.889910 2966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:14:04.931332 kubelet[2966]: I1216 13:14:04.931295 2966 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:14:04.941262 kubelet[2966]: I1216 13:14:04.941223 2966 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:14:04.943713 kubelet[2966]: I1216 13:14:04.943639 2966 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:14:04.943994 kubelet[2966]: I1216 13:14:04.943706 2966 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-237","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:14:04.944952 kubelet[2966]: I1216 13:14:04.944924 2966 topology_manager.go:138] "Creating topology manager with none 
policy" Dec 16 13:14:04.944952 kubelet[2966]: I1216 13:14:04.944957 2966 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:14:04.946775 kubelet[2966]: I1216 13:14:04.946736 2966 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:04.955008 kubelet[2966]: I1216 13:14:04.954962 2966 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:14:04.955177 kubelet[2966]: I1216 13:14:04.955034 2966 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:14:04.956874 kubelet[2966]: I1216 13:14:04.956830 2966 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:14:04.956874 kubelet[2966]: I1216 13:14:04.956867 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:14:04.961739 kubelet[2966]: W1216 13:14:04.959936 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-237&limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:04.961739 kubelet[2966]: E1216 13:14:04.960032 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-237&limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:04.962232 kubelet[2966]: W1216 13:14:04.962178 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:04.962314 kubelet[2966]: E1216 13:14:04.962251 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.24.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:04.965773 kubelet[2966]: I1216 13:14:04.965286 2966 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:14:04.977636 kubelet[2966]: I1216 13:14:04.976225 2966 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:14:04.977636 kubelet[2966]: W1216 13:14:04.976440 2966 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:14:04.977822 kubelet[2966]: I1216 13:14:04.977464 2966 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:14:04.977822 kubelet[2966]: I1216 13:14:04.977708 2966 server.go:1287] "Started kubelet" Dec 16 13:14:04.992297 kubelet[2966]: I1216 13:14:04.992003 2966 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:14:04.997155 kubelet[2966]: I1216 13:14:04.996834 2966 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:14:05.005693 kubelet[2966]: I1216 13:14:05.005598 2966 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:14:05.007284 kubelet[2966]: I1216 13:14:05.007250 2966 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:14:05.016552 kubelet[2966]: I1216 13:14:05.013165 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:14:05.023359 kubelet[2966]: E1216 13:14:05.012564 2966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.237:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.237:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-24-237.1881b45e0f0244a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-237,UID:ip-172-31-24-237,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-237,},FirstTimestamp:2025-12-16 13:14:04.977677473 +0000 UTC m=+0.898039140,LastTimestamp:2025-12-16 13:14:04.977677473 +0000 UTC m=+0.898039140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-237,}" Dec 16 13:14:05.025152 kubelet[2966]: I1216 13:14:05.025070 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:14:05.031464 kubelet[2966]: I1216 13:14:05.027494 2966 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:14:05.037843 kubelet[2966]: E1216 13:14:05.037733 2966 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-237\" not found" Dec 16 13:14:05.039914 kubelet[2966]: I1216 13:14:05.039886 2966 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:14:05.040034 kubelet[2966]: I1216 13:14:05.039981 2966 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:14:05.046077 kubelet[2966]: W1216 13:14:05.045995 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:05.046207 kubelet[2966]: E1216 13:14:05.046091 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.24.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:05.046207 kubelet[2966]: E1216 13:14:05.046189 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-237?timeout=10s\": dial tcp 172.31.24.237:6443: connect: connection refused" interval="200ms" Dec 16 13:14:05.057060 kubelet[2966]: I1216 13:14:05.056949 2966 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:14:05.060676 kubelet[2966]: I1216 13:14:05.060590 2966 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:14:05.073931 kubelet[2966]: I1216 13:14:05.073900 2966 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:14:05.093966 kubelet[2966]: E1216 13:14:05.093933 2966 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:14:05.120848 kubelet[2966]: I1216 13:14:05.120790 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:14:05.126835 kubelet[2966]: I1216 13:14:05.126773 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:14:05.126835 kubelet[2966]: I1216 13:14:05.126803 2966 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:14:05.127197 kubelet[2966]: I1216 13:14:05.127138 2966 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:14:05.127197 kubelet[2966]: I1216 13:14:05.127154 2966 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:14:05.135938 kubelet[2966]: E1216 13:14:05.127410 2966 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:14:05.137119 kubelet[2966]: W1216 13:14:05.137084 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:05.140349 kubelet[2966]: E1216 13:14:05.137134 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:05.140617 kubelet[2966]: E1216 13:14:05.140596 2966 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-237\" not found" Dec 16 13:14:05.140964 kubelet[2966]: I1216 13:14:05.140896 2966 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:14:05.140964 kubelet[2966]: I1216 13:14:05.140911 2966 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:14:05.140964 kubelet[2966]: I1216 13:14:05.140933 2966 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:05.144732 kubelet[2966]: I1216 13:14:05.144704 2966 policy_none.go:49] "None policy: Start" Dec 16 13:14:05.144732 kubelet[2966]: I1216 13:14:05.144734 2966 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:14:05.144886 kubelet[2966]: I1216 13:14:05.144748 2966 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:14:05.154622 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:14:05.185309 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:14:05.190407 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:14:05.207904 kubelet[2966]: I1216 13:14:05.207797 2966 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:14:05.208422 kubelet[2966]: I1216 13:14:05.208351 2966 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:14:05.208422 kubelet[2966]: I1216 13:14:05.208365 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:14:05.210893 kubelet[2966]: I1216 13:14:05.210829 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:14:05.215846 kubelet[2966]: E1216 13:14:05.215802 2966 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:14:05.216142 kubelet[2966]: E1216 13:14:05.216071 2966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-237\" not found" Dec 16 13:14:05.240885 kubelet[2966]: I1216 13:14:05.240856 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:05.241308 kubelet[2966]: I1216 13:14:05.241184 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de0899d08d775b0ba3aa4c7cc38933db-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-237\" (UID: \"de0899d08d775b0ba3aa4c7cc38933db\") " pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:05.241308 kubelet[2966]: I1216 13:14:05.241247 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-ca-certs\") pod \"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:05.241308 kubelet[2966]: I1216 13:14:05.241273 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:05.242491 kubelet[2966]: I1216 13:14:05.241409 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:05.243049 kubelet[2966]: I1216 13:14:05.241440 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:05.243049 kubelet[2966]: I1216 13:14:05.242632 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:05.243049 kubelet[2966]: I1216 13:14:05.243013 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:05.243316 kubelet[2966]: I1216 13:14:05.243261 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:05.247068 
kubelet[2966]: E1216 13:14:05.246924 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-237?timeout=10s\": dial tcp 172.31.24.237:6443: connect: connection refused" interval="400ms" Dec 16 13:14:05.249494 systemd[1]: Created slice kubepods-burstable-pod2f6fd28659cfb27a60889c06458f5194.slice - libcontainer container kubepods-burstable-pod2f6fd28659cfb27a60889c06458f5194.slice. Dec 16 13:14:05.281855 kubelet[2966]: E1216 13:14:05.281822 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:05.287144 systemd[1]: Created slice kubepods-burstable-pod6b19efec98c1111e4b11df8e6ad76c5e.slice - libcontainer container kubepods-burstable-pod6b19efec98c1111e4b11df8e6ad76c5e.slice. Dec 16 13:14:05.297327 kubelet[2966]: E1216 13:14:05.296715 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:05.301381 systemd[1]: Created slice kubepods-burstable-podde0899d08d775b0ba3aa4c7cc38933db.slice - libcontainer container kubepods-burstable-podde0899d08d775b0ba3aa4c7cc38933db.slice. 
Dec 16 13:14:05.303649 kubelet[2966]: E1216 13:14:05.303553 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:05.310295 kubelet[2966]: I1216 13:14:05.310187 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:05.311821 kubelet[2966]: E1216 13:14:05.311777 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.237:6443/api/v1/nodes\": dial tcp 172.31.24.237:6443: connect: connection refused" node="ip-172-31-24-237" Dec 16 13:14:05.519141 kubelet[2966]: I1216 13:14:05.519066 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:05.521154 kubelet[2966]: E1216 13:14:05.521104 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.237:6443/api/v1/nodes\": dial tcp 172.31.24.237:6443: connect: connection refused" node="ip-172-31-24-237" Dec 16 13:14:05.592789 containerd[1987]: time="2025-12-16T13:14:05.586646059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-237,Uid:2f6fd28659cfb27a60889c06458f5194,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:05.606479 containerd[1987]: time="2025-12-16T13:14:05.606429201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-237,Uid:6b19efec98c1111e4b11df8e6ad76c5e,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:05.606921 containerd[1987]: time="2025-12-16T13:14:05.606893975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-237,Uid:de0899d08d775b0ba3aa4c7cc38933db,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:05.649041 kubelet[2966]: E1216 13:14:05.648982 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-237?timeout=10s\": dial tcp 172.31.24.237:6443: connect: connection refused" interval="800ms" Dec 16 13:14:05.761732 containerd[1987]: time="2025-12-16T13:14:05.761218545Z" level=info msg="connecting to shim 430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b" address="unix:///run/containerd/s/315342ffdb1323b88d3c9ad9d2c8e1f3d852356c935f92e0da376d2adf8b78ad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:05.835283 containerd[1987]: time="2025-12-16T13:14:05.835236042Z" level=info msg="connecting to shim 47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4" address="unix:///run/containerd/s/bdc6af578942c47e9ac2219c7828b7ac59f42a12e331c01e6436da2a7619cd0d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:05.835657 containerd[1987]: time="2025-12-16T13:14:05.835623570Z" level=info msg="connecting to shim d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9" address="unix:///run/containerd/s/11f2af7b1e63f09c368524b25d97b31c118f5212417829cc5b00e4bff0dea507" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:05.926429 kubelet[2966]: I1216 13:14:05.926384 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:05.928223 kubelet[2966]: E1216 13:14:05.926930 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.237:6443/api/v1/nodes\": dial tcp 172.31.24.237:6443: connect: connection refused" node="ip-172-31-24-237" Dec 16 13:14:05.933224 kubelet[2966]: W1216 13:14:05.933100 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:05.933224 kubelet[2966]: E1216 13:14:05.933187 2966 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:05.967807 systemd[1]: Started cri-containerd-430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b.scope - libcontainer container 430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b. Dec 16 13:14:05.969446 systemd[1]: Started cri-containerd-47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4.scope - libcontainer container 47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4. Dec 16 13:14:05.971448 systemd[1]: Started cri-containerd-d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9.scope - libcontainer container d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9. Dec 16 13:14:06.097427 containerd[1987]: time="2025-12-16T13:14:06.097298248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-237,Uid:de0899d08d775b0ba3aa4c7cc38933db,Namespace:kube-system,Attempt:0,} returns sandbox id \"47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4\"" Dec 16 13:14:06.102540 containerd[1987]: time="2025-12-16T13:14:06.102420963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-237,Uid:2f6fd28659cfb27a60889c06458f5194,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9\"" Dec 16 13:14:06.107783 containerd[1987]: time="2025-12-16T13:14:06.107513709Z" level=info msg="CreateContainer within sandbox \"47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:14:06.110017 containerd[1987]: time="2025-12-16T13:14:06.109978947Z" level=info msg="CreateContainer within sandbox 
\"d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:14:06.116171 kubelet[2966]: W1216 13:14:06.116069 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-237&limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:06.116445 kubelet[2966]: E1216 13:14:06.116394 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-237&limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:06.121678 containerd[1987]: time="2025-12-16T13:14:06.121637324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-237,Uid:6b19efec98c1111e4b11df8e6ad76c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b\"" Dec 16 13:14:06.125399 containerd[1987]: time="2025-12-16T13:14:06.125351633Z" level=info msg="CreateContainer within sandbox \"430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:14:06.142032 containerd[1987]: time="2025-12-16T13:14:06.141744362Z" level=info msg="Container b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:06.142244 containerd[1987]: time="2025-12-16T13:14:06.141773544Z" level=info msg="Container 0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:06.148078 containerd[1987]: time="2025-12-16T13:14:06.148029062Z" level=info 
msg="Container 041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:06.155961 containerd[1987]: time="2025-12-16T13:14:06.155872346Z" level=info msg="CreateContainer within sandbox \"d2b5b6acd159c6278beb31e3d8ef3ef2b1d1b385ac8bc353e211d00729efc9f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613\"" Dec 16 13:14:06.156663 containerd[1987]: time="2025-12-16T13:14:06.156632877Z" level=info msg="StartContainer for \"b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613\"" Dec 16 13:14:06.160362 containerd[1987]: time="2025-12-16T13:14:06.159069010Z" level=info msg="connecting to shim b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613" address="unix:///run/containerd/s/11f2af7b1e63f09c368524b25d97b31c118f5212417829cc5b00e4bff0dea507" protocol=ttrpc version=3 Dec 16 13:14:06.166389 containerd[1987]: time="2025-12-16T13:14:06.166346930Z" level=info msg="CreateContainer within sandbox \"430dcb8d466d86695ebd25d9cf286ea63fffcc4fdf27e5932f3a3e6e1447559b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac\"" Dec 16 13:14:06.167300 containerd[1987]: time="2025-12-16T13:14:06.166901818Z" level=info msg="StartContainer for \"041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac\"" Dec 16 13:14:06.169369 containerd[1987]: time="2025-12-16T13:14:06.169332291Z" level=info msg="connecting to shim 041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac" address="unix:///run/containerd/s/315342ffdb1323b88d3c9ad9d2c8e1f3d852356c935f92e0da376d2adf8b78ad" protocol=ttrpc version=3 Dec 16 13:14:06.172440 containerd[1987]: time="2025-12-16T13:14:06.170065468Z" level=info msg="CreateContainer within sandbox \"47a4f47b3a05e1f1a78624ee1fe26350f916e607053dc18108c706572a4ee5d4\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e\"" Dec 16 13:14:06.173241 containerd[1987]: time="2025-12-16T13:14:06.173213607Z" level=info msg="StartContainer for \"0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e\"" Dec 16 13:14:06.176217 containerd[1987]: time="2025-12-16T13:14:06.176126805Z" level=info msg="connecting to shim 0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e" address="unix:///run/containerd/s/bdc6af578942c47e9ac2219c7828b7ac59f42a12e331c01e6436da2a7619cd0d" protocol=ttrpc version=3 Dec 16 13:14:06.192093 systemd[1]: Started cri-containerd-b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613.scope - libcontainer container b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613. Dec 16 13:14:06.212310 systemd[1]: Started cri-containerd-041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac.scope - libcontainer container 041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac. Dec 16 13:14:06.230837 systemd[1]: Started cri-containerd-0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e.scope - libcontainer container 0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e. 
Dec 16 13:14:06.308976 containerd[1987]: time="2025-12-16T13:14:06.308934326Z" level=info msg="StartContainer for \"b353d058f97b464ae3443c8b8ccddf0c1a0d3deea98ae8a4df6eba63941f6613\" returns successfully" Dec 16 13:14:06.346214 kubelet[2966]: W1216 13:14:06.346139 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused Dec 16 13:14:06.347686 kubelet[2966]: E1216 13:14:06.346227 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:06.351421 containerd[1987]: time="2025-12-16T13:14:06.351355377Z" level=info msg="StartContainer for \"041f5beb6d57318378ca4dad42c2b30ccac2731acef5fdedd2b76324eda2e1ac\" returns successfully" Dec 16 13:14:06.354837 containerd[1987]: time="2025-12-16T13:14:06.354800112Z" level=info msg="StartContainer for \"0b1ed4404dc9473cffb7ef536c9ad1905ae46a07dc5d2081d5d13e9db4a09b1e\" returns successfully" Dec 16 13:14:06.450614 kubelet[2966]: E1216 13:14:06.450350 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-237?timeout=10s\": dial tcp 172.31.24.237:6443: connect: connection refused" interval="1.6s" Dec 16 13:14:06.542107 kubelet[2966]: W1216 13:14:06.541643 2966 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.237:6443: connect: connection refused 
Dec 16 13:14:06.542107 kubelet[2966]: E1216 13:14:06.541721 2966 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.237:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:14:06.733726 kubelet[2966]: I1216 13:14:06.733436 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:06.735812 kubelet[2966]: E1216 13:14:06.735758 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.237:6443/api/v1/nodes\": dial tcp 172.31.24.237:6443: connect: connection refused" node="ip-172-31-24-237" Dec 16 13:14:07.171338 kubelet[2966]: E1216 13:14:07.171302 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:07.179677 kubelet[2966]: E1216 13:14:07.179645 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:07.185435 kubelet[2966]: E1216 13:14:07.185401 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:08.190016 kubelet[2966]: E1216 13:14:08.189979 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:08.190993 kubelet[2966]: E1216 13:14:08.190967 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:08.191456 kubelet[2966]: 
E1216 13:14:08.191437 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:08.339979 kubelet[2966]: I1216 13:14:08.339948 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:09.189227 kubelet[2966]: E1216 13:14:09.189197 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:09.191481 kubelet[2966]: E1216 13:14:09.191232 2966 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:09.717886 kubelet[2966]: E1216 13:14:09.717830 2966 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-237\" not found" node="ip-172-31-24-237" Dec 16 13:14:09.719862 kubelet[2966]: I1216 13:14:09.719834 2966 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-237" Dec 16 13:14:09.719862 kubelet[2966]: E1216 13:14:09.719867 2966 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-237\": node \"ip-172-31-24-237\" not found" Dec 16 13:14:09.740785 kubelet[2966]: I1216 13:14:09.740737 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:09.804666 kubelet[2966]: E1216 13:14:09.804627 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-237\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:09.804666 kubelet[2966]: I1216 13:14:09.804666 2966 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:09.807046 kubelet[2966]: E1216 13:14:09.807006 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-237\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:09.807046 kubelet[2966]: I1216 13:14:09.807034 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:09.808896 kubelet[2966]: E1216 13:14:09.808869 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-237\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:09.965231 kubelet[2966]: I1216 13:14:09.965179 2966 apiserver.go:52] "Watching apiserver" Dec 16 13:14:10.041160 kubelet[2966]: I1216 13:14:10.041039 2966 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:14:11.502358 kubelet[2966]: I1216 13:14:11.502318 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:12.079162 systemd[1]: Reload requested from client PID 3233 ('systemctl') (unit session-9.scope)... Dec 16 13:14:12.079182 systemd[1]: Reloading... Dec 16 13:14:12.200592 zram_generator::config[3273]: No configuration found. Dec 16 13:14:12.479788 systemd[1]: Reloading finished in 400 ms. Dec 16 13:14:12.514767 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:12.526697 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:14:12.526976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:12.527049 systemd[1]: kubelet.service: Consumed 979ms CPU time, 129.6M memory peak. Dec 16 13:14:12.529170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:14:12.774150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:12.786051 (kubelet)[3337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:14:12.860264 kubelet[3337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:14:12.860264 kubelet[3337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:14:12.860264 kubelet[3337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:14:12.860765 kubelet[3337]: I1216 13:14:12.860400 3337 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:14:12.870710 kubelet[3337]: I1216 13:14:12.870683 3337 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:14:12.871330 kubelet[3337]: I1216 13:14:12.870842 3337 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:14:12.871330 kubelet[3337]: I1216 13:14:12.871111 3337 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:14:12.875347 kubelet[3337]: I1216 13:14:12.875304 3337 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 16 13:14:12.884264 kubelet[3337]: I1216 13:14:12.884232 3337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:14:12.887288 kubelet[3337]: I1216 13:14:12.887259 3337 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:14:12.892491 kubelet[3337]: I1216 13:14:12.891537 3337 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:14:12.892847 kubelet[3337]: I1216 13:14:12.892811 3337 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:14:12.893036 kubelet[3337]: I1216 13:14:12.892850 3337 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-237","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","C
PUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:14:12.893147 kubelet[3337]: I1216 13:14:12.893048 3337 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:14:12.893147 kubelet[3337]: I1216 13:14:12.893057 3337 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:14:12.893147 kubelet[3337]: I1216 13:14:12.893106 3337 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:12.893256 kubelet[3337]: I1216 13:14:12.893235 3337 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:14:12.893280 kubelet[3337]: I1216 13:14:12.893263 3337 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:14:12.893310 kubelet[3337]: I1216 13:14:12.893285 3337 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:14:12.893310 kubelet[3337]: I1216 13:14:12.893301 3337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:14:12.897629 kubelet[3337]: I1216 13:14:12.897574 3337 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:14:12.898053 kubelet[3337]: I1216 13:14:12.898007 3337 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:14:12.898484 kubelet[3337]: I1216 13:14:12.898454 3337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:14:12.898594 kubelet[3337]: I1216 13:14:12.898492 3337 server.go:1287] "Started kubelet" Dec 16 13:14:12.912364 kubelet[3337]: I1216 13:14:12.912327 3337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:14:12.916396 kubelet[3337]: I1216 
13:14:12.916370 3337 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:14:12.920253 kubelet[3337]: I1216 13:14:12.916564 3337 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:14:12.923633 kubelet[3337]: I1216 13:14:12.923594 3337 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:14:12.924614 kubelet[3337]: I1216 13:14:12.918639 3337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:14:12.929545 kubelet[3337]: I1216 13:14:12.919969 3337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:14:12.929545 kubelet[3337]: I1216 13:14:12.928814 3337 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:14:12.929545 kubelet[3337]: E1216 13:14:12.917902 3337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-237\" not found" Dec 16 13:14:12.929545 kubelet[3337]: I1216 13:14:12.925738 3337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:14:12.929545 kubelet[3337]: I1216 13:14:12.925000 3337 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:14:12.944023 kubelet[3337]: E1216 13:14:12.943995 3337 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:14:12.944138 kubelet[3337]: I1216 13:14:12.944106 3337 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:14:12.944138 kubelet[3337]: I1216 13:14:12.944114 3337 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:14:12.944234 kubelet[3337]: I1216 13:14:12.944217 3337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:14:12.958693 kubelet[3337]: I1216 13:14:12.958658 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:14:12.960245 kubelet[3337]: I1216 13:14:12.960105 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:14:12.960245 kubelet[3337]: I1216 13:14:12.960130 3337 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:14:12.960245 kubelet[3337]: I1216 13:14:12.960163 3337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:14:12.960245 kubelet[3337]: I1216 13:14:12.960171 3337 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:14:12.960437 kubelet[3337]: E1216 13:14:12.960389 3337 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:14:13.000673 kubelet[3337]: I1216 13:14:13.000641 3337 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:14:13.000673 kubelet[3337]: I1216 13:14:13.000660 3337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:14:13.000673 kubelet[3337]: I1216 13:14:13.000682 3337 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:13.000898 kubelet[3337]: I1216 13:14:13.000876 3337 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:14:13.000942 kubelet[3337]: I1216 13:14:13.000889 3337 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:14:13.000942 kubelet[3337]: I1216 13:14:13.000915 3337 policy_none.go:49] "None policy: Start" Dec 16 13:14:13.000942 kubelet[3337]: I1216 13:14:13.000928 3337 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:14:13.000942 kubelet[3337]: I1216 13:14:13.000940 3337 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:14:13.001106 kubelet[3337]: I1216 13:14:13.001083 3337 state_mem.go:75] "Updated machine memory state" Dec 16 13:14:13.006976 kubelet[3337]: I1216 13:14:13.006942 3337 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:14:13.007150 kubelet[3337]: I1216 13:14:13.007129 3337 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:14:13.007218 kubelet[3337]: I1216 13:14:13.007146 3337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:14:13.007854 kubelet[3337]: I1216 13:14:13.007790 3337 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:14:13.010609 kubelet[3337]: E1216 13:14:13.010212 3337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:14:13.062049 kubelet[3337]: I1216 13:14:13.061703 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-237" Dec 16 13:14:13.067115 kubelet[3337]: I1216 13:14:13.066197 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:13.067115 kubelet[3337]: I1216 13:14:13.066417 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-237" Dec 16 13:14:13.080928 kubelet[3337]: E1216 13:14:13.080900 3337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-237\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-237" Dec 16 13:14:13.094065 sudo[3370]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:14:13.095315 sudo[3370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:14:13.111135 kubelet[3337]: I1216 13:14:13.111099 3337 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-237" Dec 16 13:14:13.121918 kubelet[3337]: I1216 13:14:13.121796 3337 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-237" Dec 16 13:14:13.122030 kubelet[3337]: I1216 13:14:13.122003 3337 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-237" Dec 16 13:14:13.130156 kubelet[3337]: I1216 13:14:13.129993 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-ca-certs\") pod 
\"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237"
Dec 16 13:14:13.130156 kubelet[3337]: I1216 13:14:13.130028 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237"
Dec 16 13:14:13.130156 kubelet[3337]: I1216 13:14:13.130047 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237"
Dec 16 13:14:13.130156 kubelet[3337]: I1216 13:14:13.130067 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de0899d08d775b0ba3aa4c7cc38933db-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-237\" (UID: \"de0899d08d775b0ba3aa4c7cc38933db\") " pod="kube-system/kube-scheduler-ip-172-31-24-237"
Dec 16 13:14:13.130156 kubelet[3337]: I1216 13:14:13.130083 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237"
Dec 16 13:14:13.130760 kubelet[3337]: I1216 13:14:13.130102 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237"
Dec 16 13:14:13.130760 kubelet[3337]: I1216 13:14:13.130118 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f6fd28659cfb27a60889c06458f5194-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-237\" (UID: \"2f6fd28659cfb27a60889c06458f5194\") " pod="kube-system/kube-apiserver-ip-172-31-24-237"
Dec 16 13:14:13.130760 kubelet[3337]: I1216 13:14:13.130133 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237"
Dec 16 13:14:13.130760 kubelet[3337]: I1216 13:14:13.130148 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b19efec98c1111e4b11df8e6ad76c5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-237\" (UID: \"6b19efec98c1111e4b11df8e6ad76c5e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-237"
Dec 16 13:14:13.419675 update_engine[1962]: I20251216 13:14:13.419605 1962 update_attempter.cc:509] Updating boot flags...
Dec 16 13:14:13.725487 sudo[3370]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:13.914429 kubelet[3337]: I1216 13:14:13.914306 3337 apiserver.go:52] "Watching apiserver"
Dec 16 13:14:13.931419 kubelet[3337]: I1216 13:14:13.931329 3337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:14:13.982172 kubelet[3337]: I1216 13:14:13.981829 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-237"
Dec 16 13:14:13.985005 kubelet[3337]: I1216 13:14:13.984878 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-237"
Dec 16 13:14:14.019943 kubelet[3337]: E1216 13:14:14.019885 3337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-237\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-237"
Dec 16 13:14:14.020209 kubelet[3337]: E1216 13:14:14.020189 3337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-237\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-237"
Dec 16 13:14:14.073675 kubelet[3337]: I1216 13:14:14.072316 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-237" podStartSLOduration=3.072297886 podStartE2EDuration="3.072297886s" podCreationTimestamp="2025-12-16 13:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:14.072208824 +0000 UTC m=+1.277982385" watchObservedRunningTime="2025-12-16 13:14:14.072297886 +0000 UTC m=+1.278071448"
Dec 16 13:14:14.125884 kubelet[3337]: I1216 13:14:14.125816 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-237" podStartSLOduration=1.125793409 podStartE2EDuration="1.125793409s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:14.10377343 +0000 UTC m=+1.309546973" watchObservedRunningTime="2025-12-16 13:14:14.125793409 +0000 UTC m=+1.331566955"
Dec 16 13:14:16.218286 sudo[2401]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:16.241469 sshd[2400]: Connection closed by 139.178.68.195 port 53974
Dec 16 13:14:16.242489 sshd-session[2397]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:16.248609 systemd-logind[1954]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:14:16.249066 systemd[1]: sshd@8-172.31.24.237:22-139.178.68.195:53974.service: Deactivated successfully.
Dec 16 13:14:16.252810 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:14:16.253073 systemd[1]: session-9.scope: Consumed 5.210s CPU time, 207.1M memory peak.
Dec 16 13:14:16.255986 systemd-logind[1954]: Removed session 9.
Dec 16 13:14:17.887743 kubelet[3337]: I1216 13:14:17.887710 3337 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:14:17.888632 kubelet[3337]: I1216 13:14:17.888301 3337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:14:17.888682 containerd[1987]: time="2025-12-16T13:14:17.888092725Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:14:18.834889 kubelet[3337]: I1216 13:14:18.834824 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-237" podStartSLOduration=5.834798603 podStartE2EDuration="5.834798603s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:14.126192288 +0000 UTC m=+1.331965828" watchObservedRunningTime="2025-12-16 13:14:18.834798603 +0000 UTC m=+6.040572147"
Dec 16 13:14:18.849493 systemd[1]: Created slice kubepods-besteffort-pod834497bb_d378_46e2_96c4_017727ad3c61.slice - libcontainer container kubepods-besteffort-pod834497bb_d378_46e2_96c4_017727ad3c61.slice.
Dec 16 13:14:18.880004 kubelet[3337]: I1216 13:14:18.879956 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/834497bb-d378-46e2-96c4-017727ad3c61-kube-proxy\") pod \"kube-proxy-x9pc2\" (UID: \"834497bb-d378-46e2-96c4-017727ad3c61\") " pod="kube-system/kube-proxy-x9pc2"
Dec 16 13:14:18.880240 kubelet[3337]: I1216 13:14:18.880015 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxm8\" (UniqueName: \"kubernetes.io/projected/834497bb-d378-46e2-96c4-017727ad3c61-kube-api-access-8rxm8\") pod \"kube-proxy-x9pc2\" (UID: \"834497bb-d378-46e2-96c4-017727ad3c61\") " pod="kube-system/kube-proxy-x9pc2"
Dec 16 13:14:18.880240 kubelet[3337]: I1216 13:14:18.880048 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/834497bb-d378-46e2-96c4-017727ad3c61-xtables-lock\") pod \"kube-proxy-x9pc2\" (UID: \"834497bb-d378-46e2-96c4-017727ad3c61\") " pod="kube-system/kube-proxy-x9pc2"
Dec 16 13:14:18.880240 kubelet[3337]: I1216 13:14:18.880068 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/834497bb-d378-46e2-96c4-017727ad3c61-lib-modules\") pod \"kube-proxy-x9pc2\" (UID: \"834497bb-d378-46e2-96c4-017727ad3c61\") " pod="kube-system/kube-proxy-x9pc2"
Dec 16 13:14:18.894826 systemd[1]: Created slice kubepods-burstable-pod521ff325_3dcd_4225_ac50_ac4f7f660cc3.slice - libcontainer container kubepods-burstable-pod521ff325_3dcd_4225_ac50_ac4f7f660cc3.slice.
Dec 16 13:14:18.980912 kubelet[3337]: I1216 13:14:18.980863 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hostproc\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982413 kubelet[3337]: I1216 13:14:18.982353 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-kernel\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982751 kubelet[3337]: I1216 13:14:18.982443 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-net\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982751 kubelet[3337]: I1216 13:14:18.982467 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-xtables-lock\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982751 kubelet[3337]: I1216 13:14:18.982562 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hubble-tls\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982751 kubelet[3337]: I1216 13:14:18.982592 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-bpf-maps\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.982751 kubelet[3337]: I1216 13:14:18.982628 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-etc-cni-netd\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.983640 kubelet[3337]: I1216 13:14:18.982929 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/521ff325-3dcd-4225-ac50-ac4f7f660cc3-clustermesh-secrets\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.983640 kubelet[3337]: I1216 13:14:18.982982 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-config-path\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.983640 kubelet[3337]: I1216 13:14:18.983008 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-lib-modules\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.983640 kubelet[3337]: I1216 13:14:18.983159 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxm5\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-kube-api-access-vfxm5\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.983640 kubelet[3337]: I1216 13:14:18.983410 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-cgroup\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.984545 kubelet[3337]: I1216 13:14:18.984193 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-run\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:18.984545 kubelet[3337]: I1216 13:14:18.984249 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cni-path\") pod \"cilium-nsddt\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") " pod="kube-system/cilium-nsddt"
Dec 16 13:14:19.048113 systemd[1]: Created slice kubepods-besteffort-poda88922da_0504_40b6_8104_e89bd508d9f9.slice - libcontainer container kubepods-besteffort-poda88922da_0504_40b6_8104_e89bd508d9f9.slice.
Dec 16 13:14:19.085423 kubelet[3337]: I1216 13:14:19.084885 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a88922da-0504-40b6-8104-e89bd508d9f9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7h8hs\" (UID: \"a88922da-0504-40b6-8104-e89bd508d9f9\") " pod="kube-system/cilium-operator-6c4d7847fc-7h8hs"
Dec 16 13:14:19.085423 kubelet[3337]: I1216 13:14:19.085037 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8lk\" (UniqueName: \"kubernetes.io/projected/a88922da-0504-40b6-8104-e89bd508d9f9-kube-api-access-tc8lk\") pod \"cilium-operator-6c4d7847fc-7h8hs\" (UID: \"a88922da-0504-40b6-8104-e89bd508d9f9\") " pod="kube-system/cilium-operator-6c4d7847fc-7h8hs"
Dec 16 13:14:19.167084 containerd[1987]: time="2025-12-16T13:14:19.167015313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9pc2,Uid:834497bb-d378-46e2-96c4-017727ad3c61,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:19.204445 containerd[1987]: time="2025-12-16T13:14:19.204388113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nsddt,Uid:521ff325-3dcd-4225-ac50-ac4f7f660cc3,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:19.217778 containerd[1987]: time="2025-12-16T13:14:19.217104153Z" level=info msg="connecting to shim 08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806" address="unix:///run/containerd/s/8b2fb12c231059abaf7e7bbf2a4f498c291b13ec99353f855e1a80f73e185047" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:19.248926 systemd[1]: Started cri-containerd-08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806.scope - libcontainer container 08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806.
Dec 16 13:14:19.251962 containerd[1987]: time="2025-12-16T13:14:19.251280934Z" level=info msg="connecting to shim e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:19.303779 systemd[1]: Started cri-containerd-e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032.scope - libcontainer container e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032.
Dec 16 13:14:19.317538 containerd[1987]: time="2025-12-16T13:14:19.317439907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9pc2,Uid:834497bb-d378-46e2-96c4-017727ad3c61,Namespace:kube-system,Attempt:0,} returns sandbox id \"08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806\""
Dec 16 13:14:19.326650 containerd[1987]: time="2025-12-16T13:14:19.326491866Z" level=info msg="CreateContainer within sandbox \"08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 13:14:19.355331 containerd[1987]: time="2025-12-16T13:14:19.355240543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7h8hs,Uid:a88922da-0504-40b6-8104-e89bd508d9f9,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:19.370175 containerd[1987]: time="2025-12-16T13:14:19.369985788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nsddt,Uid:521ff325-3dcd-4225-ac50-ac4f7f660cc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\""
Dec 16 13:14:19.373627 containerd[1987]: time="2025-12-16T13:14:19.372496393Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 13:14:19.378373 containerd[1987]: time="2025-12-16T13:14:19.378311350Z" level=info msg="Container 95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:19.395836 containerd[1987]: time="2025-12-16T13:14:19.395668785Z" level=info msg="CreateContainer within sandbox \"08b64228a236a03965278f548207bad504ca578e676a10945b629c24f51fd806\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d\""
Dec 16 13:14:19.398196 containerd[1987]: time="2025-12-16T13:14:19.398145178Z" level=info msg="StartContainer for \"95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d\""
Dec 16 13:14:19.400422 containerd[1987]: time="2025-12-16T13:14:19.400372598Z" level=info msg="connecting to shim 95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d" address="unix:///run/containerd/s/8b2fb12c231059abaf7e7bbf2a4f498c291b13ec99353f855e1a80f73e185047" protocol=ttrpc version=3
Dec 16 13:14:19.406238 containerd[1987]: time="2025-12-16T13:14:19.406189829Z" level=info msg="connecting to shim a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f" address="unix:///run/containerd/s/f94322553ea6b382f342ad766ad32d0b280facf31db4060a5c0d9c563817a0d2" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:19.432796 systemd[1]: Started cri-containerd-95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d.scope - libcontainer container 95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d.
Dec 16 13:14:19.446838 systemd[1]: Started cri-containerd-a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f.scope - libcontainer container a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f.
Dec 16 13:14:19.529329 containerd[1987]: time="2025-12-16T13:14:19.529288027Z" level=info msg="StartContainer for \"95ed56a1100222f2a8bcec71e3dea0d2977320f493c0ceaf9671d25ceedf7b9d\" returns successfully"
Dec 16 13:14:19.532662 containerd[1987]: time="2025-12-16T13:14:19.532618337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7h8hs,Uid:a88922da-0504-40b6-8104-e89bd508d9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\""
Dec 16 13:14:20.300790 kubelet[3337]: I1216 13:14:20.300713 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x9pc2" podStartSLOduration=2.300694066 podStartE2EDuration="2.300694066s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:20.05767303 +0000 UTC m=+7.263446569" watchObservedRunningTime="2025-12-16 13:14:20.300694066 +0000 UTC m=+7.506467607"
Dec 16 13:14:26.290836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107867999.mount: Deactivated successfully.
Dec 16 13:14:28.872299 containerd[1987]: time="2025-12-16T13:14:28.872125924Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:28.875704 containerd[1987]: time="2025-12-16T13:14:28.875652701Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Dec 16 13:14:28.883949 containerd[1987]: time="2025-12-16T13:14:28.883886745Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:28.887163 containerd[1987]: time="2025-12-16T13:14:28.886955715Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.514402293s"
Dec 16 13:14:28.887163 containerd[1987]: time="2025-12-16T13:14:28.887139733Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 16 13:14:28.890976 containerd[1987]: time="2025-12-16T13:14:28.888498748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 13:14:28.892954 containerd[1987]: time="2025-12-16T13:14:28.892903779Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:14:28.937406 containerd[1987]: time="2025-12-16T13:14:28.936071804Z" level=info msg="Container 255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:28.947126 containerd[1987]: time="2025-12-16T13:14:28.947024010Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\""
Dec 16 13:14:28.948087 containerd[1987]: time="2025-12-16T13:14:28.948050691Z" level=info msg="StartContainer for \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\""
Dec 16 13:14:28.949486 containerd[1987]: time="2025-12-16T13:14:28.949375057Z" level=info msg="connecting to shim 255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" protocol=ttrpc version=3
Dec 16 13:14:29.025796 systemd[1]: Started cri-containerd-255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b.scope - libcontainer container 255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b.
Dec 16 13:14:29.064272 containerd[1987]: time="2025-12-16T13:14:29.064225815Z" level=info msg="StartContainer for \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" returns successfully"
Dec 16 13:14:29.079836 systemd[1]: cri-containerd-255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b.scope: Deactivated successfully.
Dec 16 13:14:29.106802 containerd[1987]: time="2025-12-16T13:14:29.106744394Z" level=info msg="received container exit event container_id:\"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" id:\"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" pid:3935 exited_at:{seconds:1765890869 nanos:84206312}"
Dec 16 13:14:29.142771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b-rootfs.mount: Deactivated successfully.
Dec 16 13:14:30.046830 containerd[1987]: time="2025-12-16T13:14:30.046749381Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:14:30.093456 containerd[1987]: time="2025-12-16T13:14:30.092838762Z" level=info msg="Container c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:30.100928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513971123.mount: Deactivated successfully.
Dec 16 13:14:30.110441 containerd[1987]: time="2025-12-16T13:14:30.109732487Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\""
Dec 16 13:14:30.115222 containerd[1987]: time="2025-12-16T13:14:30.114679983Z" level=info msg="StartContainer for \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\""
Dec 16 13:14:30.116406 containerd[1987]: time="2025-12-16T13:14:30.116289617Z" level=info msg="connecting to shim c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" protocol=ttrpc version=3
Dec 16 13:14:30.161231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304320731.mount: Deactivated successfully.
Dec 16 13:14:30.171810 systemd[1]: Started cri-containerd-c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0.scope - libcontainer container c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0.
Dec 16 13:14:30.245373 containerd[1987]: time="2025-12-16T13:14:30.245262097Z" level=info msg="StartContainer for \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" returns successfully"
Dec 16 13:14:30.266856 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:14:30.267236 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:14:30.268312 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:14:30.271921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:14:30.278631 systemd[1]: cri-containerd-c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0.scope: Deactivated successfully.
Dec 16 13:14:30.280116 containerd[1987]: time="2025-12-16T13:14:30.279115216Z" level=info msg="received container exit event container_id:\"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" id:\"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" pid:3985 exited_at:{seconds:1765890870 nanos:278713990}"
Dec 16 13:14:30.337396 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:14:31.030445 containerd[1987]: time="2025-12-16T13:14:31.030374074Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:31.034314 containerd[1987]: time="2025-12-16T13:14:31.034222613Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Dec 16 13:14:31.039128 containerd[1987]: time="2025-12-16T13:14:31.039055397Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:31.042625 containerd[1987]: time="2025-12-16T13:14:31.042547924Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.151591272s"
Dec 16 13:14:31.042951 containerd[1987]: time="2025-12-16T13:14:31.042598089Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 16 13:14:31.047470 containerd[1987]: time="2025-12-16T13:14:31.047428510Z" level=info msg="CreateContainer within sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 16 13:14:31.059559 containerd[1987]: time="2025-12-16T13:14:31.058951255Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:14:31.078536 containerd[1987]: time="2025-12-16T13:14:31.077652667Z" level=info msg="Container e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:31.084163 containerd[1987]: time="2025-12-16T13:14:31.083679792Z" level=info msg="Container ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:31.088154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0-rootfs.mount: Deactivated successfully.
Dec 16 13:14:31.096159 containerd[1987]: time="2025-12-16T13:14:31.096108552Z" level=info msg="CreateContainer within sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\""
Dec 16 13:14:31.098210 containerd[1987]: time="2025-12-16T13:14:31.097052557Z" level=info msg="StartContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\""
Dec 16 13:14:31.098806 containerd[1987]: time="2025-12-16T13:14:31.098713220Z" level=info msg="connecting to shim e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4" address="unix:///run/containerd/s/f94322553ea6b382f342ad766ad32d0b280facf31db4060a5c0d9c563817a0d2" protocol=ttrpc version=3
Dec 16 13:14:31.117092 containerd[1987]: time="2025-12-16T13:14:31.117007840Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\""
Dec 16 13:14:31.118054 containerd[1987]: time="2025-12-16T13:14:31.118014293Z" level=info msg="StartContainer for \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\""
Dec 16 13:14:31.120502 containerd[1987]: time="2025-12-16T13:14:31.120307879Z" level=info msg="connecting to shim ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" protocol=ttrpc version=3
Dec 16 13:14:31.145797 systemd[1]: Started cri-containerd-e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4.scope - libcontainer container e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4.
Dec 16 13:14:31.165742 systemd[1]: Started cri-containerd-ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda.scope - libcontainer container ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda.
Dec 16 13:14:31.216726 containerd[1987]: time="2025-12-16T13:14:31.216018168Z" level=info msg="StartContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" returns successfully"
Dec 16 13:14:31.275820 systemd[1]: cri-containerd-ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda.scope: Deactivated successfully.
Dec 16 13:14:31.276184 systemd[1]: cri-containerd-ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda.scope: Consumed 41ms CPU time, 4.3M memory peak, 1M read from disk.
Dec 16 13:14:31.281051 containerd[1987]: time="2025-12-16T13:14:31.280937395Z" level=info msg="StartContainer for \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" returns successfully"
Dec 16 13:14:31.283618 containerd[1987]: time="2025-12-16T13:14:31.281671960Z" level=info msg="received container exit event container_id:\"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" id:\"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" pid:4062 exited_at:{seconds:1765890871 nanos:280853016}"
Dec 16 13:14:31.323562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda-rootfs.mount: Deactivated successfully.
Dec 16 13:14:32.079868 containerd[1987]: time="2025-12-16T13:14:32.079806493Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:14:32.101780 containerd[1987]: time="2025-12-16T13:14:32.101731195Z" level=info msg="Container 7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:32.109330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329622792.mount: Deactivated successfully.
Dec 16 13:14:32.118483 containerd[1987]: time="2025-12-16T13:14:32.118409870Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\""
Dec 16 13:14:32.119243 containerd[1987]: time="2025-12-16T13:14:32.119210654Z" level=info msg="StartContainer for \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\""
Dec 16 13:14:32.121005 containerd[1987]: time="2025-12-16T13:14:32.120963869Z" level=info msg="connecting to shim 7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" protocol=ttrpc version=3
Dec 16 13:14:32.167761 systemd[1]: Started cri-containerd-7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2.scope - libcontainer container 7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2.
Dec 16 13:14:32.269798 containerd[1987]: time="2025-12-16T13:14:32.269754661Z" level=info msg="StartContainer for \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" returns successfully"
Dec 16 13:14:32.273159 systemd[1]: cri-containerd-7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2.scope: Deactivated successfully.
Dec 16 13:14:32.275175 containerd[1987]: time="2025-12-16T13:14:32.275121626Z" level=info msg="received container exit event container_id:\"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" id:\"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" pid:4116 exited_at:{seconds:1765890872 nanos:274471124}"
Dec 16 13:14:32.335545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2-rootfs.mount: Deactivated successfully.
Dec 16 13:14:32.337330 kubelet[3337]: I1216 13:14:32.337259    3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7h8hs" podStartSLOduration=2.82746134 podStartE2EDuration="14.337232346s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="2025-12-16 13:14:19.534138567 +0000 UTC m=+6.739912097" lastFinishedPulling="2025-12-16 13:14:31.043909583 +0000 UTC m=+18.249683103" observedRunningTime="2025-12-16 13:14:32.185392619 +0000 UTC m=+19.391166165" watchObservedRunningTime="2025-12-16 13:14:32.337232346 +0000 UTC m=+19.543005887"
Dec 16 13:14:33.109546 containerd[1987]: time="2025-12-16T13:14:33.108806199Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:14:33.137458 containerd[1987]: time="2025-12-16T13:14:33.135093027Z" level=info msg="Container 52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:33.165038 containerd[1987]: time="2025-12-16T13:14:33.164979969Z" level=info msg="CreateContainer within sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\""
Dec 16 13:14:33.165701 containerd[1987]: time="2025-12-16T13:14:33.165671660Z" level=info msg="StartContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\""
Dec 16 13:14:33.168669 containerd[1987]: time="2025-12-16T13:14:33.167648803Z" level=info msg="connecting to shim 52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20" address="unix:///run/containerd/s/5b116a5cd1bd9c1d64b2da4db379f15ed0783dbb7ced8d91643af049cece3282" protocol=ttrpc version=3
Dec 16 13:14:33.196818 systemd[1]: Started cri-containerd-52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20.scope - libcontainer container 52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20.
Dec 16 13:14:33.266153 containerd[1987]: time="2025-12-16T13:14:33.266106290Z" level=info msg="StartContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" returns successfully"
Dec 16 13:14:33.488412 kubelet[3337]: I1216 13:14:33.488314    3337 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 13:14:33.573477 systemd[1]: Created slice kubepods-burstable-pod44385c2d_42c0_4161_a198_4b71ecff77d6.slice - libcontainer container kubepods-burstable-pod44385c2d_42c0_4161_a198_4b71ecff77d6.slice.
Dec 16 13:14:33.583075 systemd[1]: Created slice kubepods-burstable-pode3117b8a_01fc_44cf_a6c8_fb02a187636b.slice - libcontainer container kubepods-burstable-pode3117b8a_01fc_44cf_a6c8_fb02a187636b.slice.
Dec 16 13:14:33.727090 kubelet[3337]: I1216 13:14:33.726986    3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3117b8a-01fc-44cf-a6c8-fb02a187636b-config-volume\") pod \"coredns-668d6bf9bc-9z68m\" (UID: \"e3117b8a-01fc-44cf-a6c8-fb02a187636b\") " pod="kube-system/coredns-668d6bf9bc-9z68m"
Dec 16 13:14:33.727090 kubelet[3337]: I1216 13:14:33.727033    3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v46c\" (UniqueName: \"kubernetes.io/projected/e3117b8a-01fc-44cf-a6c8-fb02a187636b-kube-api-access-5v46c\") pod \"coredns-668d6bf9bc-9z68m\" (UID: \"e3117b8a-01fc-44cf-a6c8-fb02a187636b\") " pod="kube-system/coredns-668d6bf9bc-9z68m"
Dec 16 13:14:33.727090 kubelet[3337]: I1216 13:14:33.727065    3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqlr\" (UniqueName: \"kubernetes.io/projected/44385c2d-42c0-4161-a198-4b71ecff77d6-kube-api-access-8sqlr\") pod \"coredns-668d6bf9bc-z49cb\" (UID: \"44385c2d-42c0-4161-a198-4b71ecff77d6\") " pod="kube-system/coredns-668d6bf9bc-z49cb"
Dec 16 13:14:33.727090 kubelet[3337]: I1216 13:14:33.727082    3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44385c2d-42c0-4161-a198-4b71ecff77d6-config-volume\") pod \"coredns-668d6bf9bc-z49cb\" (UID: \"44385c2d-42c0-4161-a198-4b71ecff77d6\") " pod="kube-system/coredns-668d6bf9bc-z49cb"
Dec 16 13:14:33.881109 containerd[1987]: time="2025-12-16T13:14:33.880832935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z49cb,Uid:44385c2d-42c0-4161-a198-4b71ecff77d6,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:33.890948 containerd[1987]: time="2025-12-16T13:14:33.890906799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z68m,Uid:e3117b8a-01fc-44cf-a6c8-fb02a187636b,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:36.109946 (udev-worker)[4248]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:14:36.110906 systemd-networkd[1790]: cilium_host: Link UP
Dec 16 13:14:36.111103 systemd-networkd[1790]: cilium_net: Link UP
Dec 16 13:14:36.111318 systemd-networkd[1790]: cilium_net: Gained carrier
Dec 16 13:14:36.111508 systemd-networkd[1790]: cilium_host: Gained carrier
Dec 16 13:14:36.113660 (udev-worker)[4285]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:14:36.251106 systemd-networkd[1790]: cilium_vxlan: Link UP
Dec 16 13:14:36.251117 systemd-networkd[1790]: cilium_vxlan: Gained carrier
Dec 16 13:14:36.345847 systemd-networkd[1790]: cilium_net: Gained IPv6LL
Dec 16 13:14:36.810555 systemd-networkd[1790]: cilium_host: Gained IPv6LL
Dec 16 13:14:37.192549 kernel: NET: Registered PF_ALG protocol family
Dec 16 13:14:37.641725 systemd-networkd[1790]: cilium_vxlan: Gained IPv6LL
Dec 16 13:14:38.050882 (udev-worker)[4296]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:14:38.071207 systemd-networkd[1790]: lxc_health: Link UP
Dec 16 13:14:38.085031 systemd-networkd[1790]: lxc_health: Gained carrier
Dec 16 13:14:38.548039 kernel: eth0: renamed from tmpcddd9
Dec 16 13:14:38.554115 systemd-networkd[1790]: lxc7d2bf6e0e230: Link UP
Dec 16 13:14:38.559822 systemd-networkd[1790]: lxc460cf372e041: Link UP
Dec 16 13:14:38.563155 systemd-networkd[1790]: lxc7d2bf6e0e230: Gained carrier
Dec 16 13:14:38.566719 kernel: eth0: renamed from tmp78511
Dec 16 13:14:38.571164 systemd-networkd[1790]: lxc460cf372e041: Gained carrier
Dec 16 13:14:39.250599 kubelet[3337]: I1216 13:14:39.250488    3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nsddt" podStartSLOduration=11.733012771 podStartE2EDuration="21.249308183s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="2025-12-16 13:14:19.371913093 +0000 UTC m=+6.577686611" lastFinishedPulling="2025-12-16 13:14:28.888208488 +0000 UTC m=+16.093982023" observedRunningTime="2025-12-16 13:14:34.143991053 +0000 UTC m=+21.349764618" watchObservedRunningTime="2025-12-16 13:14:39.249308183 +0000 UTC m=+26.455081723"
Dec 16 13:14:39.883670 systemd-networkd[1790]: lxc7d2bf6e0e230: Gained IPv6LL
Dec 16 13:14:40.073754 systemd-networkd[1790]: lxc_health: Gained IPv6LL
Dec 16 13:14:40.266152 systemd-networkd[1790]: lxc460cf372e041: Gained IPv6LL
Dec 16 13:14:42.380950 ntpd[2241]: Listen normally on 6 cilium_host 192.168.0.217:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 6 cilium_host 192.168.0.217:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 7 cilium_net [fe80::3c5a:fbff:fe47:53b1%4]:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 8 cilium_host [fe80::c8c0:47ff:fe50:806f%5]:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 9 cilium_vxlan [fe80::1cc8:afff:fea8:8180%6]:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 10 lxc_health [fe80::44c6:32ff:fef8:3597%8]:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 11 lxc460cf372e041 [fe80::8482:eeff:fefc:11d7%10]:123
Dec 16 13:14:42.381700 ntpd[2241]: 16 Dec 13:14:42 ntpd[2241]: Listen normally on 12 lxc7d2bf6e0e230 [fe80::d011:c1ff:fec4:c032%12]:123
Dec 16 13:14:42.381025 ntpd[2241]: Listen normally on 7 cilium_net [fe80::3c5a:fbff:fe47:53b1%4]:123
Dec 16 13:14:42.381057 ntpd[2241]: Listen normally on 8 cilium_host [fe80::c8c0:47ff:fe50:806f%5]:123
Dec 16 13:14:42.381086 ntpd[2241]: Listen normally on 9 cilium_vxlan [fe80::1cc8:afff:fea8:8180%6]:123
Dec 16 13:14:42.381115 ntpd[2241]: Listen normally on 10 lxc_health [fe80::44c6:32ff:fef8:3597%8]:123
Dec 16 13:14:42.381142 ntpd[2241]: Listen normally on 11 lxc460cf372e041 [fe80::8482:eeff:fefc:11d7%10]:123
Dec 16 13:14:42.381171 ntpd[2241]: Listen normally on 12 lxc7d2bf6e0e230 [fe80::d011:c1ff:fec4:c032%12]:123
Dec 16 13:14:43.553315 containerd[1987]: time="2025-12-16T13:14:43.553269226Z" level=info msg="connecting to shim cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9" address="unix:///run/containerd/s/4c4c74858d423e8eaa9c8f3135d88f46b826f51ee3e634cc5834a0c7f0467945" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:43.560155 containerd[1987]: time="2025-12-16T13:14:43.560087281Z" level=info msg="connecting to shim 78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778" address="unix:///run/containerd/s/40c9c61a1dea96c92f26e9eb51d6be3accca8d0f037582919a1b5b7d5127e0a5" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:43.624816 systemd[1]: Started cri-containerd-cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9.scope - libcontainer container cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9.
Dec 16 13:14:43.632241 systemd[1]: Started cri-containerd-78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778.scope - libcontainer container 78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778.
Dec 16 13:14:43.748218 containerd[1987]: time="2025-12-16T13:14:43.748118971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z49cb,Uid:44385c2d-42c0-4161-a198-4b71ecff77d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778\""
Dec 16 13:14:43.753375 containerd[1987]: time="2025-12-16T13:14:43.753331007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z68m,Uid:e3117b8a-01fc-44cf-a6c8-fb02a187636b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9\""
Dec 16 13:14:43.757487 containerd[1987]: time="2025-12-16T13:14:43.757445431Z" level=info msg="CreateContainer within sandbox \"78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:14:43.759965 containerd[1987]: time="2025-12-16T13:14:43.759901551Z" level=info msg="CreateContainer within sandbox \"cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:14:43.960021 containerd[1987]: time="2025-12-16T13:14:43.959976789Z" level=info msg="Container 43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:43.960685 containerd[1987]: time="2025-12-16T13:14:43.960624228Z" level=info msg="Container 22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:43.966597 containerd[1987]: time="2025-12-16T13:14:43.966555473Z" level=info msg="CreateContainer within sandbox \"78511b3c2c39997a1c482854cb41e65b6a2a33348ed63b3da499104962fd5778\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5\""
Dec 16 13:14:43.967787 containerd[1987]: time="2025-12-16T13:14:43.967567919Z" level=info msg="StartContainer for \"43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5\""
Dec 16 13:14:43.969811 containerd[1987]: time="2025-12-16T13:14:43.969729708Z" level=info msg="connecting to shim 43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5" address="unix:///run/containerd/s/40c9c61a1dea96c92f26e9eb51d6be3accca8d0f037582919a1b5b7d5127e0a5" protocol=ttrpc version=3
Dec 16 13:14:43.973798 containerd[1987]: time="2025-12-16T13:14:43.973751775Z" level=info msg="CreateContainer within sandbox \"cddd9be628fc3f533530ad65a66da6b55d0d9017a113b6a7851db43e549355f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b\""
Dec 16 13:14:43.976534 containerd[1987]: time="2025-12-16T13:14:43.975798867Z" level=info msg="StartContainer for \"22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b\""
Dec 16 13:14:43.978850 containerd[1987]: time="2025-12-16T13:14:43.978814211Z" level=info msg="connecting to shim 22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b" address="unix:///run/containerd/s/4c4c74858d423e8eaa9c8f3135d88f46b826f51ee3e634cc5834a0c7f0467945" protocol=ttrpc version=3
Dec 16 13:14:44.002076 systemd[1]: Started cri-containerd-43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5.scope - libcontainer container 43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5.
Dec 16 13:14:44.016045 systemd[1]: Started cri-containerd-22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b.scope - libcontainer container 22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b.
Dec 16 13:14:44.107143 containerd[1987]: time="2025-12-16T13:14:44.107103615Z" level=info msg="StartContainer for \"22a0bb843add97943fc42cc14a83004f58833d95fc320bb5a13adbdd635c226b\" returns successfully"
Dec 16 13:14:44.107555 containerd[1987]: time="2025-12-16T13:14:44.107222646Z" level=info msg="StartContainer for \"43a769e1855b0c3e5fc89479a55b88307c59778bac15c49eee57dc02e68298f5\" returns successfully"
Dec 16 13:14:44.161836 kubelet[3337]: I1216 13:14:44.161388    3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z49cb" podStartSLOduration=26.161370312 podStartE2EDuration="26.161370312s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:44.160091295 +0000 UTC m=+31.365864834" watchObservedRunningTime="2025-12-16 13:14:44.161370312 +0000 UTC m=+31.367143849"
Dec 16 13:14:44.180525 kubelet[3337]: I1216 13:14:44.180460    3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9z68m" podStartSLOduration=26.180441516 podStartE2EDuration="26.180441516s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:44.179259516 +0000 UTC m=+31.385033056" watchObservedRunningTime="2025-12-16 13:14:44.180441516 +0000 UTC m=+31.386215053"
Dec 16 13:14:44.537610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649535677.mount: Deactivated successfully.
Dec 16 13:14:44.537702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943545374.mount: Deactivated successfully.
Dec 16 13:14:50.466928 systemd[1]: Started sshd@9-172.31.24.237:22-139.178.68.195:35466.service - OpenSSH per-connection server daemon (139.178.68.195:35466).
Dec 16 13:14:50.689915 sshd[4825]: Accepted publickey for core from 139.178.68.195 port 35466 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:50.692881 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:50.700170 systemd-logind[1954]: New session 10 of user core.
Dec 16 13:14:50.704764 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:14:51.577632 sshd[4829]: Connection closed by 139.178.68.195 port 35466
Dec 16 13:14:51.578265 sshd-session[4825]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:51.592204 systemd[1]: sshd@9-172.31.24.237:22-139.178.68.195:35466.service: Deactivated successfully.
Dec 16 13:14:51.595161 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:14:51.598634 systemd-logind[1954]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:14:51.601277 systemd-logind[1954]: Removed session 10.
Dec 16 13:14:56.617478 systemd[1]: Started sshd@10-172.31.24.237:22-139.178.68.195:35470.service - OpenSSH per-connection server daemon (139.178.68.195:35470).
Dec 16 13:14:56.795017 sshd[4842]: Accepted publickey for core from 139.178.68.195 port 35470 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:56.796503 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:56.801590 systemd-logind[1954]: New session 11 of user core.
Dec 16 13:14:56.811782 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:14:57.012001 sshd[4845]: Connection closed by 139.178.68.195 port 35470
Dec 16 13:14:57.012881 sshd-session[4842]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:57.017766 systemd-logind[1954]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:14:57.018713 systemd[1]: sshd@10-172.31.24.237:22-139.178.68.195:35470.service: Deactivated successfully.
Dec 16 13:14:57.021156 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:14:57.023919 systemd-logind[1954]: Removed session 11.
Dec 16 13:15:02.054557 systemd[1]: Started sshd@11-172.31.24.237:22-139.178.68.195:42840.service - OpenSSH per-connection server daemon (139.178.68.195:42840).
Dec 16 13:15:02.406949 sshd[4857]: Accepted publickey for core from 139.178.68.195 port 42840 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:02.412540 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:02.459168 systemd-logind[1954]: New session 12 of user core.
Dec 16 13:15:02.473800 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:15:03.332569 sshd[4860]: Connection closed by 139.178.68.195 port 42840
Dec 16 13:15:03.333246 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:03.384438 systemd[1]: sshd@11-172.31.24.237:22-139.178.68.195:42840.service: Deactivated successfully.
Dec 16 13:15:03.395880 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:15:03.412254 systemd-logind[1954]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:15:03.417713 systemd-logind[1954]: Removed session 12.
Dec 16 13:15:08.364259 systemd[1]: Started sshd@12-172.31.24.237:22-139.178.68.195:42846.service - OpenSSH per-connection server daemon (139.178.68.195:42846).
Dec 16 13:15:08.596336 sshd[4874]: Accepted publickey for core from 139.178.68.195 port 42846 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:08.598094 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:08.610100 systemd-logind[1954]: New session 13 of user core.
Dec 16 13:15:08.630119 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:15:08.850625 sshd[4877]: Connection closed by 139.178.68.195 port 42846
Dec 16 13:15:08.855186 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:08.893721 systemd[1]: sshd@12-172.31.24.237:22-139.178.68.195:42846.service: Deactivated successfully.
Dec 16 13:15:08.902660 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:15:08.908069 systemd-logind[1954]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:15:08.916029 systemd-logind[1954]: Removed session 13.
Dec 16 13:15:13.883636 systemd[1]: Started sshd@13-172.31.24.237:22-139.178.68.195:49072.service - OpenSSH per-connection server daemon (139.178.68.195:49072).
Dec 16 13:15:14.074883 sshd[4892]: Accepted publickey for core from 139.178.68.195 port 49072 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:14.076851 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:14.082711 systemd-logind[1954]: New session 14 of user core.
Dec 16 13:15:14.088372 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:15:14.311543 sshd[4895]: Connection closed by 139.178.68.195 port 49072
Dec 16 13:15:14.312395 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:14.316860 systemd[1]: sshd@13-172.31.24.237:22-139.178.68.195:49072.service: Deactivated successfully.
Dec 16 13:15:14.319911 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:15:14.321831 systemd-logind[1954]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:15:14.323406 systemd-logind[1954]: Removed session 14.
Dec 16 13:15:14.348764 systemd[1]: Started sshd@14-172.31.24.237:22-139.178.68.195:49084.service - OpenSSH per-connection server daemon (139.178.68.195:49084).
Dec 16 13:15:14.530959 sshd[4908]: Accepted publickey for core from 139.178.68.195 port 49084 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:14.532590 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:14.538951 systemd-logind[1954]: New session 15 of user core.
Dec 16 13:15:14.544809 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:15:14.899738 sshd[4911]: Connection closed by 139.178.68.195 port 49084
Dec 16 13:15:14.901986 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:14.910468 systemd[1]: sshd@14-172.31.24.237:22-139.178.68.195:49084.service: Deactivated successfully.
Dec 16 13:15:14.917754 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:15:14.920413 systemd-logind[1954]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:15:14.939399 systemd[1]: Started sshd@15-172.31.24.237:22-139.178.68.195:49094.service - OpenSSH per-connection server daemon (139.178.68.195:49094).
Dec 16 13:15:14.941420 systemd-logind[1954]: Removed session 15.
Dec 16 13:15:15.152958 sshd[4920]: Accepted publickey for core from 139.178.68.195 port 49094 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:15.165454 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:15.194704 systemd-logind[1954]: New session 16 of user core.
Dec 16 13:15:15.202559 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:15:15.425441 sshd[4923]: Connection closed by 139.178.68.195 port 49094
Dec 16 13:15:15.426402 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:15.436117 systemd[1]: sshd@15-172.31.24.237:22-139.178.68.195:49094.service: Deactivated successfully.
Dec 16 13:15:15.438380 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:15:15.440043 systemd-logind[1954]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:15:15.441807 systemd-logind[1954]: Removed session 16.
Dec 16 13:15:20.472964 systemd[1]: Started sshd@16-172.31.24.237:22-139.178.68.195:53532.service - OpenSSH per-connection server daemon (139.178.68.195:53532).
Dec 16 13:15:20.740583 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 53532 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:20.742655 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:20.749989 systemd-logind[1954]: New session 17 of user core.
Dec 16 13:15:20.758132 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:15:20.964950 sshd[4940]: Connection closed by 139.178.68.195 port 53532
Dec 16 13:15:20.967424 sshd-session[4937]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:20.974489 systemd[1]: sshd@16-172.31.24.237:22-139.178.68.195:53532.service: Deactivated successfully.
Dec 16 13:15:20.977878 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:15:20.979589 systemd-logind[1954]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:15:20.981386 systemd-logind[1954]: Removed session 17.
Dec 16 13:15:26.003034 systemd[1]: Started sshd@17-172.31.24.237:22-139.178.68.195:53540.service - OpenSSH per-connection server daemon (139.178.68.195:53540).
Dec 16 13:15:26.191245 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 53540 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:26.193279 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:26.199622 systemd-logind[1954]: New session 18 of user core.
Dec 16 13:15:26.211762 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:15:26.419452 sshd[4955]: Connection closed by 139.178.68.195 port 53540
Dec 16 13:15:26.420035 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:26.424216 systemd[1]: sshd@17-172.31.24.237:22-139.178.68.195:53540.service: Deactivated successfully.
Dec 16 13:15:26.430995 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:15:26.433814 systemd-logind[1954]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:15:26.439330 systemd-logind[1954]: Removed session 18.
Dec 16 13:15:31.457988 systemd[1]: Started sshd@18-172.31.24.237:22-139.178.68.195:34540.service - OpenSSH per-connection server daemon (139.178.68.195:34540).
Dec 16 13:15:31.640328 sshd[4967]: Accepted publickey for core from 139.178.68.195 port 34540 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:31.641961 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:31.649066 systemd-logind[1954]: New session 19 of user core.
Dec 16 13:15:31.654814 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:15:31.849275 sshd[4971]: Connection closed by 139.178.68.195 port 34540
Dec 16 13:15:31.850646 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:31.855418 systemd[1]: sshd@18-172.31.24.237:22-139.178.68.195:34540.service: Deactivated successfully.
Dec 16 13:15:31.857919 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:15:31.859920 systemd-logind[1954]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:15:31.862211 systemd-logind[1954]: Removed session 19.
Dec 16 13:15:31.882571 systemd[1]: Started sshd@19-172.31.24.237:22-139.178.68.195:34544.service - OpenSSH per-connection server daemon (139.178.68.195:34544).
Dec 16 13:15:32.092679 sshd[4983]: Accepted publickey for core from 139.178.68.195 port 34544 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:32.098631 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:32.105210 systemd-logind[1954]: New session 20 of user core.
Dec 16 13:15:32.120886 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:15:32.920739 sshd[4986]: Connection closed by 139.178.68.195 port 34544
Dec 16 13:15:32.921594 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:32.970658 systemd[1]: sshd@19-172.31.24.237:22-139.178.68.195:34544.service: Deactivated successfully.
Dec 16 13:15:32.974858 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:15:32.976094 systemd-logind[1954]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:15:32.980280 systemd[1]: Started sshd@20-172.31.24.237:22-139.178.68.195:34556.service - OpenSSH per-connection server daemon (139.178.68.195:34556).
Dec 16 13:15:32.984005 systemd-logind[1954]: Removed session 20.
Dec 16 13:15:33.226744 sshd[4997]: Accepted publickey for core from 139.178.68.195 port 34556 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:33.240103 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:33.257496 systemd-logind[1954]: New session 21 of user core.
Dec 16 13:15:33.263262 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:15:34.407337 sshd[5000]: Connection closed by 139.178.68.195 port 34556
Dec 16 13:15:34.410092 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:34.425047 systemd[1]: sshd@20-172.31.24.237:22-139.178.68.195:34556.service: Deactivated successfully.
Dec 16 13:15:34.431030 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:15:34.434752 systemd-logind[1954]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:15:34.454247 systemd[1]: Started sshd@21-172.31.24.237:22-139.178.68.195:34570.service - OpenSSH per-connection server daemon (139.178.68.195:34570).
Dec 16 13:15:34.456273 systemd-logind[1954]: Removed session 21.
Dec 16 13:15:34.651812 sshd[5017]: Accepted publickey for core from 139.178.68.195 port 34570 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:34.653628 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:34.663404 systemd-logind[1954]: New session 22 of user core.
Dec 16 13:15:34.673939 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:15:35.160886 sshd[5020]: Connection closed by 139.178.68.195 port 34570
Dec 16 13:15:35.164120 sshd-session[5017]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:35.171775 systemd-logind[1954]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:15:35.172660 systemd[1]: sshd@21-172.31.24.237:22-139.178.68.195:34570.service: Deactivated successfully.
Dec 16 13:15:35.175616 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:15:35.177666 systemd-logind[1954]: Removed session 22.
Dec 16 13:15:35.196753 systemd[1]: Started sshd@22-172.31.24.237:22-139.178.68.195:34582.service - OpenSSH per-connection server daemon (139.178.68.195:34582).
Dec 16 13:15:35.415184 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 34582 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:35.417122 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:35.426442 systemd-logind[1954]: New session 23 of user core.
Dec 16 13:15:35.439706 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:15:35.728281 sshd[5033]: Connection closed by 139.178.68.195 port 34582
Dec 16 13:15:35.730629 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:35.750442 systemd[1]: sshd@22-172.31.24.237:22-139.178.68.195:34582.service: Deactivated successfully.
Dec 16 13:15:35.753494 systemd-logind[1954]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:15:35.756963 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:15:35.759360 systemd-logind[1954]: Removed session 23.
Dec 16 13:15:40.773586 systemd[1]: Started sshd@23-172.31.24.237:22-139.178.68.195:48742.service - OpenSSH per-connection server daemon (139.178.68.195:48742).
Dec 16 13:15:40.986350 sshd[5047]: Accepted publickey for core from 139.178.68.195 port 48742 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:40.996304 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:41.010599 systemd-logind[1954]: New session 24 of user core.
Dec 16 13:15:41.021085 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:15:41.339305 sshd[5050]: Connection closed by 139.178.68.195 port 48742
Dec 16 13:15:41.339758 sshd-session[5047]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:41.345325 systemd[1]: sshd@23-172.31.24.237:22-139.178.68.195:48742.service: Deactivated successfully.
Dec 16 13:15:41.348097 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:15:41.349664 systemd-logind[1954]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:15:41.351239 systemd-logind[1954]: Removed session 24.
Dec 16 13:15:46.373509 systemd[1]: Started sshd@24-172.31.24.237:22-139.178.68.195:48758.service - OpenSSH per-connection server daemon (139.178.68.195:48758).
Dec 16 13:15:46.563511 sshd[5063]: Accepted publickey for core from 139.178.68.195 port 48758 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:46.566366 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:46.586193 systemd-logind[1954]: New session 25 of user core.
Dec 16 13:15:46.592991 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:15:46.822479 sshd[5066]: Connection closed by 139.178.68.195 port 48758
Dec 16 13:15:46.824654 sshd-session[5063]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:46.832002 systemd[1]: sshd@24-172.31.24.237:22-139.178.68.195:48758.service: Deactivated successfully.
Dec 16 13:15:46.834930 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:15:46.836072 systemd-logind[1954]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:15:46.838108 systemd-logind[1954]: Removed session 25.
Dec 16 13:15:51.855173 systemd[1]: Started sshd@25-172.31.24.237:22-139.178.68.195:41324.service - OpenSSH per-connection server daemon (139.178.68.195:41324).
Dec 16 13:15:52.084594 sshd[5080]: Accepted publickey for core from 139.178.68.195 port 41324 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:52.088283 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:52.098094 systemd-logind[1954]: New session 26 of user core.
Dec 16 13:15:52.109824 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:15:52.320291 sshd[5083]: Connection closed by 139.178.68.195 port 41324
Dec 16 13:15:52.320964 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:52.325799 systemd[1]: sshd@25-172.31.24.237:22-139.178.68.195:41324.service: Deactivated successfully.
Dec 16 13:15:52.328792 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:15:52.329990 systemd-logind[1954]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:15:52.332072 systemd-logind[1954]: Removed session 26.
Dec 16 13:15:57.355882 systemd[1]: Started sshd@26-172.31.24.237:22-139.178.68.195:41326.service - OpenSSH per-connection server daemon (139.178.68.195:41326).
Dec 16 13:15:57.535322 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 41326 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:57.536767 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:57.544179 systemd-logind[1954]: New session 27 of user core.
Dec 16 13:15:57.556814 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:15:57.748285 sshd[5098]: Connection closed by 139.178.68.195 port 41326
Dec 16 13:15:57.749391 sshd-session[5095]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:57.754581 systemd[1]: sshd@26-172.31.24.237:22-139.178.68.195:41326.service: Deactivated successfully.
Dec 16 13:15:57.757210 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:15:57.758584 systemd-logind[1954]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:15:57.760799 systemd-logind[1954]: Removed session 27.
Dec 16 13:15:57.800617 systemd[1]: Started sshd@27-172.31.24.237:22-139.178.68.195:41342.service - OpenSSH per-connection server daemon (139.178.68.195:41342).
Dec 16 13:15:57.984409 sshd[5110]: Accepted publickey for core from 139.178.68.195 port 41342 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:15:57.986072 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:57.991593 systemd-logind[1954]: New session 28 of user core.
Dec 16 13:15:57.998832 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 13:15:59.517537 containerd[1987]: time="2025-12-16T13:15:59.517127508Z" level=info msg="StopContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" with timeout 30 (s)"
Dec 16 13:15:59.521577 containerd[1987]: time="2025-12-16T13:15:59.520332832Z" level=info msg="Stop container \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" with signal terminated"
Dec 16 13:15:59.540797 systemd[1]: cri-containerd-e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4.scope: Deactivated successfully.
Dec 16 13:15:59.543885 containerd[1987]: time="2025-12-16T13:15:59.543833674Z" level=info msg="received container exit event container_id:\"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" id:\"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" pid:4056 exited_at:{seconds:1765890959 nanos:543199321}"
Dec 16 13:15:59.564548 containerd[1987]: time="2025-12-16T13:15:59.564369376Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:15:59.568871 containerd[1987]: time="2025-12-16T13:15:59.568804618Z" level=info msg="StopContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" with timeout 2 (s)"
Dec 16 13:15:59.569421 containerd[1987]: time="2025-12-16T13:15:59.569375698Z" level=info msg="Stop container \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" with signal terminated"
Dec 16 13:15:59.588401 systemd-networkd[1790]: lxc_health: Link DOWN
Dec 16 13:15:59.588414 systemd-networkd[1790]: lxc_health: Lost carrier
Dec 16 13:15:59.595279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4-rootfs.mount: Deactivated successfully.
Dec 16 13:15:59.622007 systemd[1]: cri-containerd-52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20.scope: Deactivated successfully.
Dec 16 13:15:59.622429 systemd[1]: cri-containerd-52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20.scope: Consumed 8.367s CPU time, 210.4M memory peak, 90.6M read from disk, 13.3M written to disk.
Dec 16 13:15:59.625325 containerd[1987]: time="2025-12-16T13:15:59.625280325Z" level=info msg="received container exit event container_id:\"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" id:\"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" pid:4156 exited_at:{seconds:1765890959 nanos:624613843}"
Dec 16 13:15:59.655765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20-rootfs.mount: Deactivated successfully.
Dec 16 13:15:59.664255 containerd[1987]: time="2025-12-16T13:15:59.664099683Z" level=info msg="StopContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" returns successfully"
Dec 16 13:15:59.664806 containerd[1987]: time="2025-12-16T13:15:59.664769156Z" level=info msg="StopPodSandbox for \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\""
Dec 16 13:15:59.668542 containerd[1987]: time="2025-12-16T13:15:59.668463798Z" level=info msg="Container to stop \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.671582 containerd[1987]: time="2025-12-16T13:15:59.671534538Z" level=info msg="StopContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" returns successfully"
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.672894839Z" level=info msg="StopPodSandbox for \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\""
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.672992615Z" level=info msg="Container to stop \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.673010706Z" level=info msg="Container to stop \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.673026280Z" level=info msg="Container to stop \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.673039964Z" level=info msg="Container to stop \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.673350 containerd[1987]: time="2025-12-16T13:15:59.673056521Z" level=info msg="Container to stop \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:15:59.681758 systemd[1]: cri-containerd-a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f.scope: Deactivated successfully.
Dec 16 13:15:59.687878 systemd[1]: cri-containerd-e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032.scope: Deactivated successfully.
Dec 16 13:15:59.689045 containerd[1987]: time="2025-12-16T13:15:59.688961119Z" level=info msg="received sandbox exit event container_id:\"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" id:\"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" exit_status:137 exited_at:{seconds:1765890959 nanos:688703728}" monitor_name=podsandbox
Dec 16 13:15:59.691288 containerd[1987]: time="2025-12-16T13:15:59.691221205Z" level=info msg="received sandbox exit event container_id:\"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" id:\"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" exit_status:137 exited_at:{seconds:1765890959 nanos:690941822}" monitor_name=podsandbox
Dec 16 13:15:59.726957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f-rootfs.mount: Deactivated successfully.
Dec 16 13:15:59.736996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032-rootfs.mount: Deactivated successfully.
Dec 16 13:15:59.740493 containerd[1987]: time="2025-12-16T13:15:59.740397263Z" level=info msg="shim disconnected" id=e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032 namespace=k8s.io
Dec 16 13:15:59.740493 containerd[1987]: time="2025-12-16T13:15:59.740434498Z" level=warning msg="cleaning up after shim disconnected" id=e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032 namespace=k8s.io
Dec 16 13:15:59.762299 containerd[1987]: time="2025-12-16T13:15:59.740445559Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:15:59.762794 containerd[1987]: time="2025-12-16T13:15:59.740791757Z" level=info msg="shim disconnected" id=a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f namespace=k8s.io
Dec 16 13:15:59.762794 containerd[1987]: time="2025-12-16T13:15:59.762527448Z" level=warning msg="cleaning up after shim disconnected" id=a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f namespace=k8s.io
Dec 16 13:15:59.762794 containerd[1987]: time="2025-12-16T13:15:59.762536767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:15:59.803138 containerd[1987]: time="2025-12-16T13:15:59.802990406Z" level=info msg="received sandbox container exit event sandbox_id:\"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" exit_status:137 exited_at:{seconds:1765890959 nanos:690941822}" monitor_name=criService
Dec 16 13:15:59.804409 containerd[1987]: time="2025-12-16T13:15:59.804358609Z" level=info msg="received sandbox container exit event sandbox_id:\"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" exit_status:137 exited_at:{seconds:1765890959 nanos:688703728}" monitor_name=criService
Dec 16 13:15:59.809541 containerd[1987]: time="2025-12-16T13:15:59.805588113Z" level=info msg="TearDown network for sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" successfully"
Dec 16 13:15:59.809541 containerd[1987]: time="2025-12-16T13:15:59.805617379Z" level=info msg="StopPodSandbox for \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" returns successfully"
Dec 16 13:15:59.810434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032-shm.mount: Deactivated successfully.
Dec 16 13:15:59.811934 containerd[1987]: time="2025-12-16T13:15:59.811790460Z" level=info msg="TearDown network for sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" successfully"
Dec 16 13:15:59.811934 containerd[1987]: time="2025-12-16T13:15:59.811830051Z" level=info msg="StopPodSandbox for \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" returns successfully"
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888578 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a88922da-0504-40b6-8104-e89bd508d9f9-cilium-config-path\") pod \"a88922da-0504-40b6-8104-e89bd508d9f9\" (UID: \"a88922da-0504-40b6-8104-e89bd508d9f9\") "
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888632 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfxm5\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-kube-api-access-vfxm5\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888657 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-kernel\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888680 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-etc-cni-netd\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888703 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-cgroup\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.889395 kubelet[3337]: I1216 13:15:59.888727 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cni-path\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888748 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-run\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888772 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-net\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888793 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-bpf-maps\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888816 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hubble-tls\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888835 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-lib-modules\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890129 kubelet[3337]: I1216 13:15:59.888857 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hostproc\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890452 kubelet[3337]: I1216 13:15:59.888880 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-xtables-lock\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890452 kubelet[3337]: I1216 13:15:59.888904 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/521ff325-3dcd-4225-ac50-ac4f7f660cc3-clustermesh-secrets\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890452 kubelet[3337]: I1216 13:15:59.888928 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-config-path\") pod \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\" (UID: \"521ff325-3dcd-4225-ac50-ac4f7f660cc3\") "
Dec 16 13:15:59.890452 kubelet[3337]: I1216 13:15:59.888951 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc8lk\" (UniqueName: \"kubernetes.io/projected/a88922da-0504-40b6-8104-e89bd508d9f9-kube-api-access-tc8lk\") pod \"a88922da-0504-40b6-8104-e89bd508d9f9\" (UID: \"a88922da-0504-40b6-8104-e89bd508d9f9\") "
Dec 16 13:15:59.890452 kubelet[3337]: I1216 13:15:59.889364 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.894338 kubelet[3337]: I1216 13:15:59.894283 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88922da-0504-40b6-8104-e89bd508d9f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a88922da-0504-40b6-8104-e89bd508d9f9" (UID: "a88922da-0504-40b6-8104-e89bd508d9f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:15:59.897555 kubelet[3337]: I1216 13:15:59.897403 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88922da-0504-40b6-8104-e89bd508d9f9-kube-api-access-tc8lk" (OuterVolumeSpecName: "kube-api-access-tc8lk") pod "a88922da-0504-40b6-8104-e89bd508d9f9" (UID: "a88922da-0504-40b6-8104-e89bd508d9f9"). InnerVolumeSpecName "kube-api-access-tc8lk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:15:59.897555 kubelet[3337]: I1216 13:15:59.897488 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.898341 kubelet[3337]: I1216 13:15:59.898302 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899538 kubelet[3337]: I1216 13:15:59.898489 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899538 kubelet[3337]: I1216 13:15:59.898562 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899538 kubelet[3337]: I1216 13:15:59.898593 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cni-path" (OuterVolumeSpecName: "cni-path") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899538 kubelet[3337]: I1216 13:15:59.898617 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899538 kubelet[3337]: I1216 13:15:59.898858 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-kube-api-access-vfxm5" (OuterVolumeSpecName: "kube-api-access-vfxm5") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "kube-api-access-vfxm5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:15:59.899828 kubelet[3337]: I1216 13:15:59.898902 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899828 kubelet[3337]: I1216 13:15:59.898926 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.899828 kubelet[3337]: I1216 13:15:59.898951 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hostproc" (OuterVolumeSpecName: "hostproc") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:15:59.902051 kubelet[3337]: I1216 13:15:59.902003 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:15:59.904540 kubelet[3337]: I1216 13:15:59.904473 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521ff325-3dcd-4225-ac50-ac4f7f660cc3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:15:59.905645 kubelet[3337]: I1216 13:15:59.905603 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "521ff325-3dcd-4225-ac50-ac4f7f660cc3" (UID: "521ff325-3dcd-4225-ac50-ac4f7f660cc3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:15:59.990105 kubelet[3337]: I1216 13:15:59.989946 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-kernel\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990105 kubelet[3337]: I1216 13:15:59.990103 3337 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-etc-cni-netd\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990119 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-cgroup\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990134 3337 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cni-path\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990145 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-run\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990155 3337 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-bpf-maps\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990165 3337 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hubble-tls\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990175 3337 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-lib-modules\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990186 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-host-proc-sys-net\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990337 kubelet[3337]: I1216 13:15:59.990197 3337 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-hostproc\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990208 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/521ff325-3dcd-4225-ac50-ac4f7f660cc3-cilium-config-path\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990221 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tc8lk\" (UniqueName: \"kubernetes.io/projected/a88922da-0504-40b6-8104-e89bd508d9f9-kube-api-access-tc8lk\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990234 3337 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/521ff325-3dcd-4225-ac50-ac4f7f660cc3-xtables-lock\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990248 3337 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/521ff325-3dcd-4225-ac50-ac4f7f660cc3-clustermesh-secrets\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990262 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a88922da-0504-40b6-8104-e89bd508d9f9-cilium-config-path\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:15:59.990775 kubelet[3337]: I1216 13:15:59.990274 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfxm5\" (UniqueName: \"kubernetes.io/projected/521ff325-3dcd-4225-ac50-ac4f7f660cc3-kube-api-access-vfxm5\") on node \"ip-172-31-24-237\" DevicePath \"\""
Dec 16 13:16:00.493967 kubelet[3337]: I1216 13:16:00.493815 3337 scope.go:117] "RemoveContainer" containerID="e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4"
Dec 16 13:16:00.499841 containerd[1987]: time="2025-12-16T13:16:00.499544214Z" level=info msg="RemoveContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\""
Dec 16 13:16:00.506380 systemd[1]: Removed slice kubepods-besteffort-poda88922da_0504_40b6_8104_e89bd508d9f9.slice - libcontainer container kubepods-besteffort-poda88922da_0504_40b6_8104_e89bd508d9f9.slice.
Dec 16 13:16:00.545862 containerd[1987]: time="2025-12-16T13:16:00.545391283Z" level=info msg="RemoveContainer for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" returns successfully" Dec 16 13:16:00.549101 kubelet[3337]: I1216 13:16:00.548933 3337 scope.go:117] "RemoveContainer" containerID="e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4" Dec 16 13:16:00.552473 containerd[1987]: time="2025-12-16T13:16:00.552369203Z" level=error msg="ContainerStatus for \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\": not found" Dec 16 13:16:00.553740 kubelet[3337]: E1216 13:16:00.553081 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\": not found" containerID="e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4" Dec 16 13:16:00.553740 kubelet[3337]: I1216 13:16:00.553130 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4"} err="failed to get container status \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0960373d4657ee5d4ff248e7f5d5d7fad00b5d20237fd2764313d41472579d4\": not found" Dec 16 13:16:00.553740 kubelet[3337]: I1216 13:16:00.553366 3337 scope.go:117] "RemoveContainer" containerID="52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20" Dec 16 13:16:00.558762 containerd[1987]: time="2025-12-16T13:16:00.558431233Z" level=info msg="RemoveContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\"" Dec 16 13:16:00.563657 
systemd[1]: Removed slice kubepods-burstable-pod521ff325_3dcd_4225_ac50_ac4f7f660cc3.slice - libcontainer container kubepods-burstable-pod521ff325_3dcd_4225_ac50_ac4f7f660cc3.slice. Dec 16 13:16:00.563820 systemd[1]: kubepods-burstable-pod521ff325_3dcd_4225_ac50_ac4f7f660cc3.slice: Consumed 8.505s CPU time, 210.7M memory peak, 91.7M read from disk, 13.3M written to disk. Dec 16 13:16:00.568066 containerd[1987]: time="2025-12-16T13:16:00.568024964Z" level=info msg="RemoveContainer for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" returns successfully" Dec 16 13:16:00.568652 kubelet[3337]: I1216 13:16:00.568557 3337 scope.go:117] "RemoveContainer" containerID="7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2" Dec 16 13:16:00.571952 containerd[1987]: time="2025-12-16T13:16:00.571897593Z" level=info msg="RemoveContainer for \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\"" Dec 16 13:16:00.583944 containerd[1987]: time="2025-12-16T13:16:00.583879521Z" level=info msg="RemoveContainer for \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" returns successfully" Dec 16 13:16:00.584833 kubelet[3337]: I1216 13:16:00.584447 3337 scope.go:117] "RemoveContainer" containerID="ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda" Dec 16 13:16:00.624839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f-shm.mount: Deactivated successfully. Dec 16 13:16:00.625016 systemd[1]: var-lib-kubelet-pods-a88922da\x2d0504\x2d40b6\x2d8104\x2de89bd508d9f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtc8lk.mount: Deactivated successfully. Dec 16 13:16:00.625104 systemd[1]: var-lib-kubelet-pods-521ff325\x2d3dcd\x2d4225\x2dac50\x2dac4f7f660cc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfxm5.mount: Deactivated successfully. 
Dec 16 13:16:00.625620 systemd[1]: var-lib-kubelet-pods-521ff325\x2d3dcd\x2d4225\x2dac50\x2dac4f7f660cc3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 13:16:00.625719 systemd[1]: var-lib-kubelet-pods-521ff325\x2d3dcd\x2d4225\x2dac50\x2dac4f7f660cc3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 13:16:00.633862 containerd[1987]: time="2025-12-16T13:16:00.633819888Z" level=info msg="RemoveContainer for \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\"" Dec 16 13:16:00.644907 containerd[1987]: time="2025-12-16T13:16:00.644625018Z" level=info msg="RemoveContainer for \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" returns successfully" Dec 16 13:16:00.645442 kubelet[3337]: I1216 13:16:00.645412 3337 scope.go:117] "RemoveContainer" containerID="c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0" Dec 16 13:16:00.648512 containerd[1987]: time="2025-12-16T13:16:00.648461028Z" level=info msg="RemoveContainer for \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\"" Dec 16 13:16:00.664729 containerd[1987]: time="2025-12-16T13:16:00.664240431Z" level=info msg="RemoveContainer for \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" returns successfully" Dec 16 13:16:00.665091 kubelet[3337]: I1216 13:16:00.665052 3337 scope.go:117] "RemoveContainer" containerID="255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b" Dec 16 13:16:00.673350 containerd[1987]: time="2025-12-16T13:16:00.673307980Z" level=info msg="RemoveContainer for \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\"" Dec 16 13:16:00.694248 containerd[1987]: time="2025-12-16T13:16:00.694168398Z" level=info msg="RemoveContainer for \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" returns successfully" Dec 16 13:16:00.695084 kubelet[3337]: I1216 13:16:00.694918 3337 scope.go:117] 
"RemoveContainer" containerID="52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20" Dec 16 13:16:00.695709 containerd[1987]: time="2025-12-16T13:16:00.695625298Z" level=error msg="ContainerStatus for \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\": not found" Dec 16 13:16:00.696128 kubelet[3337]: E1216 13:16:00.696089 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\": not found" containerID="52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20" Dec 16 13:16:00.696206 kubelet[3337]: I1216 13:16:00.696125 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20"} err="failed to get container status \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\": rpc error: code = NotFound desc = an error occurred when try to find container \"52dfd8ffeccf2a6fd4a5c79a5b14289c40de371212307e949bff53ab5959fc20\": not found" Dec 16 13:16:00.696206 kubelet[3337]: I1216 13:16:00.696154 3337 scope.go:117] "RemoveContainer" containerID="7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2" Dec 16 13:16:00.697868 containerd[1987]: time="2025-12-16T13:16:00.696499412Z" level=error msg="ContainerStatus for \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\": not found" Dec 16 13:16:00.698040 kubelet[3337]: E1216 13:16:00.697965 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\": not found" containerID="7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2" Dec 16 13:16:00.698040 kubelet[3337]: I1216 13:16:00.698006 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2"} err="failed to get container status \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f59e51eff25fb0bb26d839bb90cab49a5349225886f9e9b4c65e364695aa1f2\": not found" Dec 16 13:16:00.698040 kubelet[3337]: I1216 13:16:00.698032 3337 scope.go:117] "RemoveContainer" containerID="ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda" Dec 16 13:16:00.698326 containerd[1987]: time="2025-12-16T13:16:00.698291328Z" level=error msg="ContainerStatus for \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\": not found" Dec 16 13:16:00.698761 kubelet[3337]: E1216 13:16:00.698709 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\": not found" containerID="ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda" Dec 16 13:16:00.698761 kubelet[3337]: I1216 13:16:00.698743 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda"} err="failed to get container status \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"ac9c3bc86a77dbcfde12747cae2ff1a884c58441dfae798bdca31da039e4ffda\": not found" Dec 16 13:16:00.698943 kubelet[3337]: I1216 13:16:00.698771 3337 scope.go:117] "RemoveContainer" containerID="c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0" Dec 16 13:16:00.702279 containerd[1987]: time="2025-12-16T13:16:00.699161370Z" level=error msg="ContainerStatus for \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\": not found" Dec 16 13:16:00.706494 kubelet[3337]: E1216 13:16:00.702485 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\": not found" containerID="c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0" Dec 16 13:16:00.706692 kubelet[3337]: I1216 13:16:00.706541 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0"} err="failed to get container status \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c87cf9d086f8fbf2b3ab5b20ba44f619ff46a164b64b1ca881603927758b11d0\": not found" Dec 16 13:16:00.706692 kubelet[3337]: I1216 13:16:00.706591 3337 scope.go:117] "RemoveContainer" containerID="255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b" Dec 16 13:16:00.707280 containerd[1987]: time="2025-12-16T13:16:00.707050374Z" level=error msg="ContainerStatus for \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\": not found" Dec 16 13:16:00.707457 kubelet[3337]: E1216 13:16:00.707414 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\": not found" containerID="255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b" Dec 16 13:16:00.708073 kubelet[3337]: I1216 13:16:00.707452 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b"} err="failed to get container status \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\": rpc error: code = NotFound desc = an error occurred when try to find container \"255d79021486614cf8cc6a5c423c69500fd34ea3c503d8a6df03cdd5253a245b\": not found" Dec 16 13:16:00.965091 kubelet[3337]: I1216 13:16:00.965049 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521ff325-3dcd-4225-ac50-ac4f7f660cc3" path="/var/lib/kubelet/pods/521ff325-3dcd-4225-ac50-ac4f7f660cc3/volumes" Dec 16 13:16:00.968165 kubelet[3337]: I1216 13:16:00.968130 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88922da-0504-40b6-8104-e89bd508d9f9" path="/var/lib/kubelet/pods/a88922da-0504-40b6-8104-e89bd508d9f9/volumes" Dec 16 13:16:01.440621 sshd[5113]: Connection closed by 139.178.68.195 port 41342 Dec 16 13:16:01.441353 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:01.517603 systemd[1]: sshd@27-172.31.24.237:22-139.178.68.195:41342.service: Deactivated successfully. Dec 16 13:16:01.542083 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 13:16:01.551635 systemd-logind[1954]: Session 28 logged out. Waiting for processes to exit. 
Dec 16 13:16:01.573784 systemd[1]: Started sshd@28-172.31.24.237:22-139.178.68.195:48220.service - OpenSSH per-connection server daemon (139.178.68.195:48220). Dec 16 13:16:01.586836 systemd-logind[1954]: Removed session 28. Dec 16 13:16:02.041263 sshd[5258]: Accepted publickey for core from 139.178.68.195 port 48220 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:02.053201 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:02.094017 systemd-logind[1954]: New session 29 of user core. Dec 16 13:16:02.128710 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 16 13:16:02.381759 ntpd[2241]: Deleting 10 lxc_health, [fe80::44c6:32ff:fef8:3597%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs Dec 16 13:16:02.382442 ntpd[2241]: 16 Dec 13:16:02 ntpd[2241]: Deleting 10 lxc_health, [fe80::44c6:32ff:fef8:3597%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs Dec 16 13:16:03.085369 kubelet[3337]: E1216 13:16:03.085328 3337 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:16:03.961596 kubelet[3337]: E1216 13:16:03.961509 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9z68m" podUID="e3117b8a-01fc-44cf-a6c8-fb02a187636b" Dec 16 13:16:04.910878 sshd[5261]: Connection closed by 139.178.68.195 port 48220 Dec 16 13:16:04.915470 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:04.934792 systemd-logind[1954]: Session 29 logged out. Waiting for processes to exit. 
Dec 16 13:16:04.940117 systemd[1]: sshd@28-172.31.24.237:22-139.178.68.195:48220.service: Deactivated successfully. Dec 16 13:16:04.945812 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 13:16:04.984103 systemd[1]: Started sshd@29-172.31.24.237:22-139.178.68.195:48232.service - OpenSSH per-connection server daemon (139.178.68.195:48232). Dec 16 13:16:04.988188 systemd-logind[1954]: Removed session 29. Dec 16 13:16:04.996236 kubelet[3337]: I1216 13:16:04.996194 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="a88922da-0504-40b6-8104-e89bd508d9f9" containerName="cilium-operator" Dec 16 13:16:04.996236 kubelet[3337]: I1216 13:16:04.996231 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="521ff325-3dcd-4225-ac50-ac4f7f660cc3" containerName="cilium-agent" Dec 16 13:16:05.046294 systemd[1]: Created slice kubepods-burstable-pod0d180dc0_eb39_4071_b315_69811f6f95ff.slice - libcontainer container kubepods-burstable-pod0d180dc0_eb39_4071_b315_69811f6f95ff.slice. 
Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.076845 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d180dc0-eb39-4071-b315-69811f6f95ff-hubble-tls\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.076902 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-host-proc-sys-net\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.076930 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d180dc0-eb39-4071-b315-69811f6f95ff-cilium-ipsec-secrets\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.076957 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-lib-modules\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.076982 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-etc-cni-netd\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077418 kubelet[3337]: I1216 13:16:05.077005 3337 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-hostproc\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077029 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-cni-path\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077055 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-xtables-lock\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077084 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbrdm\" (UniqueName: \"kubernetes.io/projected/0d180dc0-eb39-4071-b315-69811f6f95ff-kube-api-access-cbrdm\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077110 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-cilium-run\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077133 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-bpf-maps\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077824 kubelet[3337]: I1216 13:16:05.077155 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d180dc0-eb39-4071-b315-69811f6f95ff-clustermesh-secrets\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077977 kubelet[3337]: I1216 13:16:05.077185 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-cilium-cgroup\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077977 kubelet[3337]: I1216 13:16:05.077209 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d180dc0-eb39-4071-b315-69811f6f95ff-cilium-config-path\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.077977 kubelet[3337]: I1216 13:16:05.077234 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d180dc0-eb39-4071-b315-69811f6f95ff-host-proc-sys-kernel\") pod \"cilium-dbsdf\" (UID: \"0d180dc0-eb39-4071-b315-69811f6f95ff\") " pod="kube-system/cilium-dbsdf" Dec 16 13:16:05.184884 kubelet[3337]: I1216 13:16:05.184289 3337 setters.go:602] "Node became not ready" node="ip-172-31-24-237" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:16:05Z","lastTransitionTime":"2025-12-16T13:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 13:16:05.284961 sshd[5272]: Accepted publickey for core from 139.178.68.195 port 48232 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:05.297056 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:05.328602 systemd-logind[1954]: New session 30 of user core. Dec 16 13:16:05.335754 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 16 13:16:05.364952 containerd[1987]: time="2025-12-16T13:16:05.364889948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbsdf,Uid:0d180dc0-eb39-4071-b315-69811f6f95ff,Namespace:kube-system,Attempt:0,}" Dec 16 13:16:05.477027 containerd[1987]: time="2025-12-16T13:16:05.476838711Z" level=info msg="connecting to shim 126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:16:05.493873 sshd[5281]: Connection closed by 139.178.68.195 port 48232 Dec 16 13:16:05.495959 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:05.512166 systemd[1]: sshd@29-172.31.24.237:22-139.178.68.195:48232.service: Deactivated successfully. Dec 16 13:16:05.516495 systemd[1]: session-30.scope: Deactivated successfully. Dec 16 13:16:05.543878 systemd-logind[1954]: Session 30 logged out. Waiting for processes to exit. Dec 16 13:16:05.553169 systemd[1]: Started cri-containerd-126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860.scope - libcontainer container 126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860. 
Dec 16 13:16:05.559842 systemd[1]: Started sshd@30-172.31.24.237:22-139.178.68.195:48246.service - OpenSSH per-connection server daemon (139.178.68.195:48246). Dec 16 13:16:05.591881 systemd-logind[1954]: Removed session 30. Dec 16 13:16:05.683870 containerd[1987]: time="2025-12-16T13:16:05.683805393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbsdf,Uid:0d180dc0-eb39-4071-b315-69811f6f95ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\"" Dec 16 13:16:05.710067 containerd[1987]: time="2025-12-16T13:16:05.709886372Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:16:05.726862 containerd[1987]: time="2025-12-16T13:16:05.726419893Z" level=info msg="Container 2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:16:05.747548 containerd[1987]: time="2025-12-16T13:16:05.747143376Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf\"" Dec 16 13:16:05.748866 containerd[1987]: time="2025-12-16T13:16:05.748824261Z" level=info msg="StartContainer for \"2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf\"" Dec 16 13:16:05.750336 containerd[1987]: time="2025-12-16T13:16:05.750257023Z" level=info msg="connecting to shim 2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" protocol=ttrpc version=3 Dec 16 13:16:05.840388 systemd[1]: Started cri-containerd-2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf.scope - libcontainer container 
2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf. Dec 16 13:16:05.841727 sshd[5320]: Accepted publickey for core from 139.178.68.195 port 48246 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:05.848965 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:05.934157 systemd-logind[1954]: New session 31 of user core. Dec 16 13:16:05.948388 systemd[1]: Started session-31.scope - Session 31 of User core. Dec 16 13:16:05.963007 kubelet[3337]: E1216 13:16:05.962960 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9z68m" podUID="e3117b8a-01fc-44cf-a6c8-fb02a187636b" Dec 16 13:16:06.056029 containerd[1987]: time="2025-12-16T13:16:06.055816229Z" level=info msg="StartContainer for \"2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf\" returns successfully" Dec 16 13:16:06.099600 systemd[1]: cri-containerd-2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf.scope: Deactivated successfully. Dec 16 13:16:06.101055 systemd[1]: cri-containerd-2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf.scope: Consumed 22ms CPU time, 9.5M memory peak, 3.3M read from disk. 
Dec 16 13:16:06.102399 containerd[1987]: time="2025-12-16T13:16:06.102239360Z" level=info msg="received container exit event container_id:\"2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf\" id:\"2a8467b9aa2796a9e472f613e42c9e7c1ed749921fdc283b2c8b849d01e0aadf\" pid:5350 exited_at:{seconds:1765890966 nanos:101049733}" Dec 16 13:16:06.649329 containerd[1987]: time="2025-12-16T13:16:06.649282630Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:16:06.674492 containerd[1987]: time="2025-12-16T13:16:06.673837853Z" level=info msg="Container 15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:16:06.675836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550072221.mount: Deactivated successfully. Dec 16 13:16:06.693648 containerd[1987]: time="2025-12-16T13:16:06.693600225Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6\"" Dec 16 13:16:06.696654 containerd[1987]: time="2025-12-16T13:16:06.696607453Z" level=info msg="StartContainer for \"15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6\"" Dec 16 13:16:06.698437 containerd[1987]: time="2025-12-16T13:16:06.698396504Z" level=info msg="connecting to shim 15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" protocol=ttrpc version=3 Dec 16 13:16:06.754675 systemd[1]: Started cri-containerd-15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6.scope - libcontainer container 
15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6.
Dec 16 13:16:06.829400 containerd[1987]: time="2025-12-16T13:16:06.829348461Z" level=info msg="StartContainer for \"15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6\" returns successfully"
Dec 16 13:16:06.840022 systemd[1]: cri-containerd-15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6.scope: Deactivated successfully.
Dec 16 13:16:06.840484 systemd[1]: cri-containerd-15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6.scope: Consumed 25ms CPU time, 7.4M memory peak, 2.2M read from disk.
Dec 16 13:16:06.843371 containerd[1987]: time="2025-12-16T13:16:06.843218642Z" level=info msg="received container exit event container_id:\"15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6\" id:\"15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6\" pid:5401 exited_at:{seconds:1765890966 nanos:842881908}"
Dec 16 13:16:07.261072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15b8082c052ccfa76795d124bee192a79d8c53af3b55bda7038049d6c168d3a6-rootfs.mount: Deactivated successfully.
Dec 16 13:16:07.672364 containerd[1987]: time="2025-12-16T13:16:07.671617767Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:16:07.706542 containerd[1987]: time="2025-12-16T13:16:07.703460889Z" level=info msg="Container cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:16:07.736983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494608777.mount: Deactivated successfully.
Dec 16 13:16:07.761652 containerd[1987]: time="2025-12-16T13:16:07.761607291Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045\""
Dec 16 13:16:07.762417 containerd[1987]: time="2025-12-16T13:16:07.762349098Z" level=info msg="StartContainer for \"cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045\""
Dec 16 13:16:07.764226 containerd[1987]: time="2025-12-16T13:16:07.764187628Z" level=info msg="connecting to shim cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" protocol=ttrpc version=3
Dec 16 13:16:07.799020 systemd[1]: Started cri-containerd-cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045.scope - libcontainer container cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045.
Dec 16 13:16:07.946552 containerd[1987]: time="2025-12-16T13:16:07.946413012Z" level=info msg="StartContainer for \"cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045\" returns successfully"
Dec 16 13:16:07.960993 kubelet[3337]: E1216 13:16:07.960884 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9z68m" podUID="e3117b8a-01fc-44cf-a6c8-fb02a187636b"
Dec 16 13:16:07.960993 kubelet[3337]: E1216 13:16:07.960923 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-z49cb" podUID="44385c2d-42c0-4161-a198-4b71ecff77d6"
Dec 16 13:16:08.006553 systemd[1]: cri-containerd-cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045.scope: Deactivated successfully.
Dec 16 13:16:08.012889 containerd[1987]: time="2025-12-16T13:16:08.012441410Z" level=info msg="received container exit event container_id:\"cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045\" id:\"cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045\" pid:5443 exited_at:{seconds:1765890968 nanos:12165558}"
Dec 16 13:16:08.087641 kubelet[3337]: E1216 13:16:08.087598 3337 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:16:08.096385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb506dda5350cdcb70374ba76ababe6b9bdaf6c3b5511722a72d9e2ce361b045-rootfs.mount: Deactivated successfully.
Dec 16 13:16:08.672431 containerd[1987]: time="2025-12-16T13:16:08.671763243Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:16:08.694610 containerd[1987]: time="2025-12-16T13:16:08.692165843Z" level=info msg="Container 239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:16:08.695430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704780513.mount: Deactivated successfully.
Dec 16 13:16:08.706452 containerd[1987]: time="2025-12-16T13:16:08.706345936Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff\""
Dec 16 13:16:08.707337 containerd[1987]: time="2025-12-16T13:16:08.707304841Z" level=info msg="StartContainer for \"239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff\""
Dec 16 13:16:08.708551 containerd[1987]: time="2025-12-16T13:16:08.708459966Z" level=info msg="connecting to shim 239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" protocol=ttrpc version=3
Dec 16 13:16:08.733808 systemd[1]: Started cri-containerd-239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff.scope - libcontainer container 239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff.
Dec 16 13:16:08.768847 systemd[1]: cri-containerd-239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff.scope: Deactivated successfully.
Dec 16 13:16:08.774178 containerd[1987]: time="2025-12-16T13:16:08.773962800Z" level=info msg="received container exit event container_id:\"239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff\" id:\"239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff\" pid:5483 exited_at:{seconds:1765890968 nanos:773699353}"
Dec 16 13:16:08.784656 containerd[1987]: time="2025-12-16T13:16:08.784600237Z" level=info msg="StartContainer for \"239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff\" returns successfully"
Dec 16 13:16:08.804092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-239ca287bab7365ab5c8d3b3b82ea2ef32a9f4712cf73596e19c757758ba63ff-rootfs.mount: Deactivated successfully.
Dec 16 13:16:09.673040 containerd[1987]: time="2025-12-16T13:16:09.672973465Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:16:09.698736 containerd[1987]: time="2025-12-16T13:16:09.698663454Z" level=info msg="Container 6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:16:09.716552 containerd[1987]: time="2025-12-16T13:16:09.715915053Z" level=info msg="CreateContainer within sandbox \"126ce8b0d8b9628e1caefbdede58647793d1d5fe3599a072d142310d9fb83860\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed\""
Dec 16 13:16:09.719075 containerd[1987]: time="2025-12-16T13:16:09.719045233Z" level=info msg="StartContainer for \"6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed\""
Dec 16 13:16:09.720723 containerd[1987]: time="2025-12-16T13:16:09.720650112Z" level=info msg="connecting to shim 6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed" address="unix:///run/containerd/s/e2c85dadbf5243eb026592bf76810629b792caadca351096be1b9a59e38bb0d7" protocol=ttrpc version=3
Dec 16 13:16:09.749771 systemd[1]: Started cri-containerd-6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed.scope - libcontainer container 6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed.
Dec 16 13:16:09.808210 containerd[1987]: time="2025-12-16T13:16:09.808156654Z" level=info msg="StartContainer for \"6d27bb3d58400e6fd3599de4fd74dbb58084552037a8c520346968307f0f04ed\" returns successfully"
Dec 16 13:16:09.961082 kubelet[3337]: E1216 13:16:09.960890 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-z49cb" podUID="44385c2d-42c0-4161-a198-4b71ecff77d6"
Dec 16 13:16:09.961982 kubelet[3337]: E1216 13:16:09.961419 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9z68m" podUID="e3117b8a-01fc-44cf-a6c8-fb02a187636b"
Dec 16 13:16:11.962024 kubelet[3337]: E1216 13:16:11.961967 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9z68m" podUID="e3117b8a-01fc-44cf-a6c8-fb02a187636b"
Dec 16 13:16:11.965755 kubelet[3337]: E1216 13:16:11.962034 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-z49cb" podUID="44385c2d-42c0-4161-a198-4b71ecff77d6"
Dec 16 13:16:12.950883 containerd[1987]: time="2025-12-16T13:16:12.950838977Z" level=info msg="StopPodSandbox for \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\""
Dec 16 13:16:12.952020 containerd[1987]: time="2025-12-16T13:16:12.951016966Z" level=info msg="TearDown network for sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" successfully"
Dec 16 13:16:12.952020 containerd[1987]: time="2025-12-16T13:16:12.951037138Z" level=info msg="StopPodSandbox for \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" returns successfully"
Dec 16 13:16:12.952020 containerd[1987]: time="2025-12-16T13:16:12.951583406Z" level=info msg="RemovePodSandbox for \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\""
Dec 16 13:16:12.952020 containerd[1987]: time="2025-12-16T13:16:12.951615918Z" level=info msg="Forcibly stopping sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\""
Dec 16 13:16:12.952020 containerd[1987]: time="2025-12-16T13:16:12.951758728Z" level=info msg="TearDown network for sandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" successfully"
Dec 16 13:16:12.953601 containerd[1987]: time="2025-12-16T13:16:12.953566322Z" level=info msg="Ensure that sandbox a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f in task-service has been cleanup successfully"
Dec 16 13:16:12.961795 containerd[1987]: time="2025-12-16T13:16:12.961613650Z" level=info msg="RemovePodSandbox \"a6bdd8130db64e63a360d9fd1d94721e73c3c5d4c43d6028c0bf80855060ad1f\" returns successfully"
Dec 16 13:16:12.963491 containerd[1987]: time="2025-12-16T13:16:12.963002470Z" level=info msg="StopPodSandbox for \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\""
Dec 16 13:16:12.963491 containerd[1987]: time="2025-12-16T13:16:12.963160982Z" level=info msg="TearDown network for sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" successfully"
Dec 16 13:16:12.963491 containerd[1987]: time="2025-12-16T13:16:12.963176092Z" level=info msg="StopPodSandbox for \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" returns successfully"
Dec 16 13:16:12.964261 containerd[1987]: time="2025-12-16T13:16:12.964152623Z" level=info msg="RemovePodSandbox for \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\""
Dec 16 13:16:12.964498 containerd[1987]: time="2025-12-16T13:16:12.964478362Z" level=info msg="Forcibly stopping sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\""
Dec 16 13:16:12.966045 containerd[1987]: time="2025-12-16T13:16:12.964702918Z" level=info msg="TearDown network for sandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" successfully"
Dec 16 13:16:12.968573 containerd[1987]: time="2025-12-16T13:16:12.968533796Z" level=info msg="Ensure that sandbox e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032 in task-service has been cleanup successfully"
Dec 16 13:16:12.977199 containerd[1987]: time="2025-12-16T13:16:12.977128518Z" level=info msg="RemovePodSandbox \"e62a7d02030563b50abe6c57f6822d8f16712829eab07251a32a2fcccd74a032\" returns successfully"
Dec 16 13:16:13.230651 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:16:17.269249 (udev-worker)[6104]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:16:17.270683 systemd-networkd[1790]: lxc_health: Link UP
Dec 16 13:16:17.281507 (udev-worker)[6105]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:16:17.282163 systemd-networkd[1790]: lxc_health: Gained carrier
Dec 16 13:16:17.396721 kubelet[3337]: I1216 13:16:17.396652 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbsdf" podStartSLOduration=13.396631067 podStartE2EDuration="13.396631067s" podCreationTimestamp="2025-12-16 13:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:16:10.702115634 +0000 UTC m=+117.907889174" watchObservedRunningTime="2025-12-16 13:16:17.396631067 +0000 UTC m=+124.602404606"
Dec 16 13:16:18.444747 systemd-networkd[1790]: lxc_health: Gained IPv6LL
Dec 16 13:16:21.380916 ntpd[2241]: Listen normally on 13 lxc_health [fe80::415:50ff:fe78:1dc8%14]:123
Dec 16 13:16:21.381966 ntpd[2241]: 16 Dec 13:16:21 ntpd[2241]: Listen normally on 13 lxc_health [fe80::415:50ff:fe78:1dc8%14]:123
Dec 16 13:16:22.340350 sshd[5356]: Connection closed by 139.178.68.195 port 48246
Dec 16 13:16:22.341769 sshd-session[5320]: pam_unix(sshd:session): session closed for user core
Dec 16 13:16:22.386285 systemd[1]: sshd@30-172.31.24.237:22-139.178.68.195:48246.service: Deactivated successfully.
Dec 16 13:16:22.391689 systemd[1]: session-31.scope: Deactivated successfully.
Dec 16 13:16:22.393734 systemd-logind[1954]: Session 31 logged out. Waiting for processes to exit.
Dec 16 13:16:22.398056 systemd-logind[1954]: Removed session 31.