Dec 16 13:10:49.878910 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:10:49.878935 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:49.878947 kernel: BIOS-provided physical RAM map:
Dec 16 13:10:49.878954 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:10:49.878960 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 13:10:49.878967 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:10:49.878975 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:10:49.878982 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:10:49.878989 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:10:49.878995 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:10:49.879002 kernel: NX (Execute Disable) protection: active
Dec 16 13:10:49.879012 kernel: APIC: Static calls initialized
Dec 16 13:10:49.879019 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 13:10:49.879026 kernel: extended physical RAM map:
Dec 16 13:10:49.879034 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:10:49.879042 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 13:10:49.879052 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 13:10:49.879060 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 13:10:49.879067 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:10:49.879075 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:10:49.879083 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:10:49.879091 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:10:49.879098 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:10:49.879106 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:10:49.879114 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 13:10:49.879121 kernel: secureboot: Secure boot disabled
Dec 16 13:10:49.879129 kernel: SMBIOS 2.7 present.
Dec 16 13:10:49.879138 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 13:10:49.879146 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:10:49.879153 kernel: Hypervisor detected: KVM
Dec 16 13:10:49.879161 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:10:49.879168 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:10:49.879176 kernel: kvm-clock: using sched offset of 4934543163 cycles
Dec 16 13:10:49.879184 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:10:49.879192 kernel: tsc: Detected 2499.996 MHz processor
Dec 16 13:10:49.879200 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:10:49.879208 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:10:49.879216 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:10:49.879226 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:10:49.879234 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:10:49.879246 kernel: Using GB pages for direct mapping
Dec 16 13:10:49.879254 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:10:49.879262 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 13:10:49.879270 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 13:10:49.879281 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 13:10:49.879289 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 13:10:49.879297 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 13:10:49.879316 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 13:10:49.879325 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 13:10:49.879333 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 13:10:49.879341 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 13:10:49.879349 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 13:10:49.879360 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:10:49.879368 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:10:49.879376 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 13:10:49.879384 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 13:10:49.879393 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 13:10:49.879401 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 13:10:49.879409 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 13:10:49.879417 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 13:10:49.879425 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 13:10:49.879436 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 13:10:49.879444 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 13:10:49.879452 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 13:10:49.879461 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 13:10:49.879469 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 13:10:49.879477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 13:10:49.879485 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 13:10:49.879493 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 13:10:49.879501 kernel: Zone ranges:
Dec 16 13:10:49.879512 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:10:49.879520 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 13:10:49.879528 kernel: Normal empty
Dec 16 13:10:49.879537 kernel: Device empty
Dec 16 13:10:49.879545 kernel: Movable zone start for each node
Dec 16 13:10:49.879553 kernel: Early memory node ranges
Dec 16 13:10:49.879561 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:10:49.879569 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 13:10:49.879578 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 13:10:49.879588 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 13:10:49.879596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:10:49.879604 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:10:49.879613 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 13:10:49.879621 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 13:10:49.879876 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 13:10:49.879889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:10:49.879898 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 13:10:49.879906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:10:49.879933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:10:49.879946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:10:49.879959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:10:49.879971 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:10:49.879983 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:10:49.879996 kernel: TSC deadline timer available
Dec 16 13:10:49.880005 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:10:49.880013 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:10:49.880021 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:10:49.880029 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:10:49.880040 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:10:49.880048 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:10:49.880057 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:10:49.880065 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:10:49.880073 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 13:10:49.880081 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:10:49.880090 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:10:49.880098 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:10:49.880107 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:10:49.880117 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:10:49.880126 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:10:49.880134 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:10:49.880143 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:10:49.880153 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:49.880169 kernel: random: crng init done
Dec 16 13:10:49.880177 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:10:49.880186 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:10:49.880196 kernel: Fallback order for Node 0: 0
Dec 16 13:10:49.880205 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 16 13:10:49.880213 kernel: Policy zone: DMA32
Dec 16 13:10:49.880230 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:10:49.880241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:10:49.880250 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:10:49.880259 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:10:49.880268 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:10:49.880276 kernel: Dynamic Preempt: voluntary
Dec 16 13:10:49.880285 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:10:49.880295 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:10:49.880304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:10:49.880316 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:10:49.880324 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:10:49.880333 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:10:49.880342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:10:49.880351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:10:49.880362 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:49.880370 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:49.880379 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:49.880388 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:10:49.880397 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:10:49.880406 kernel: Console: colour dummy device 80x25
Dec 16 13:10:49.880414 kernel: printk: legacy console [tty0] enabled
Dec 16 13:10:49.880423 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:10:49.880432 kernel: ACPI: Core revision 20240827
Dec 16 13:10:49.880443 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 13:10:49.880452 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:10:49.880460 kernel: x2apic enabled
Dec 16 13:10:49.880469 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:10:49.880478 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 13:10:49.880487 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 16 13:10:49.880495 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 13:10:49.880504 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 13:10:49.880513 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:10:49.880524 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:10:49.880532 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:10:49.880541 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:10:49.880550 kernel: RETBleed: Vulnerable
Dec 16 13:10:49.880558 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:10:49.880567 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:10:49.880576 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:10:49.880584 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 13:10:49.880593 kernel: active return thunk: its_return_thunk
Dec 16 13:10:49.880601 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:10:49.880609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:10:49.880620 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:10:49.880644 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:10:49.880653 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 13:10:49.880662 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 13:10:49.880670 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:10:49.880679 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:10:49.880688 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:10:49.880696 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:10:49.880705 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:10:49.880713 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 13:10:49.880722 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 13:10:49.880733 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 13:10:49.880742 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 13:10:49.880750 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 13:10:49.880759 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 13:10:49.880768 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 13:10:49.880776 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:10:49.880785 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:10:49.880794 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:10:49.880802 kernel: landlock: Up and running.
Dec 16 13:10:49.880811 kernel: SELinux: Initializing.
Dec 16 13:10:49.880820 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:10:49.880829 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:10:49.880840 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 13:10:49.880848 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 13:10:49.880857 kernel: signal: max sigframe size: 3632
Dec 16 13:10:49.880866 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:10:49.880875 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:10:49.880884 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:10:49.880893 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:10:49.880902 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:10:49.880910 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:10:49.880922 kernel: .... node #0, CPUs: #1
Dec 16 13:10:49.880931 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 13:10:49.880940 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 13:10:49.880949 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:10:49.880958 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 16 13:10:49.880967 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved)
Dec 16 13:10:49.880975 kernel: devtmpfs: initialized
Dec 16 13:10:49.880984 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:10:49.880995 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 13:10:49.881004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:10:49.881013 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:10:49.881022 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:10:49.881031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:10:49.881040 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:10:49.881049 kernel: audit: type=2000 audit(1765890648.093:1): state=initialized audit_enabled=0 res=1
Dec 16 13:10:49.881057 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:10:49.881066 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:10:49.881077 kernel: cpuidle: using governor menu
Dec 16 13:10:49.881086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:10:49.881095 kernel: dca service started, version 1.12.1
Dec 16 13:10:49.881103 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:10:49.881113 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:10:49.881122 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:10:49.881130 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:10:49.881139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:10:49.881148 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:10:49.881159 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:10:49.881168 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:10:49.881177 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:10:49.881185 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 13:10:49.881194 kernel: ACPI: Interpreter enabled
Dec 16 13:10:49.881203 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:10:49.881212 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:10:49.881220 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:10:49.881229 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:10:49.881240 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 13:10:49.881249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:10:49.881418 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:10:49.881515 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:10:49.881606 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:10:49.881618 kernel: acpiphp: Slot [3] registered
Dec 16 13:10:49.881627 kernel: acpiphp: Slot [4] registered
Dec 16 13:10:49.881666 kernel: acpiphp: Slot [5] registered
Dec 16 13:10:49.882004 kernel: acpiphp: Slot [6] registered
Dec 16 13:10:49.882017 kernel: acpiphp: Slot [7] registered
Dec 16 13:10:49.882026 kernel: acpiphp: Slot [8] registered
Dec 16 13:10:49.882035 kernel: acpiphp: Slot [9] registered
Dec 16 13:10:49.882044 kernel: acpiphp: Slot [10] registered
Dec 16 13:10:49.882053 kernel: acpiphp: Slot [11] registered
Dec 16 13:10:49.882062 kernel: acpiphp: Slot [12] registered
Dec 16 13:10:49.882071 kernel: acpiphp: Slot [13] registered
Dec 16 13:10:49.882080 kernel: acpiphp: Slot [14] registered
Dec 16 13:10:49.882093 kernel: acpiphp: Slot [15] registered
Dec 16 13:10:49.882102 kernel: acpiphp: Slot [16] registered
Dec 16 13:10:49.882111 kernel: acpiphp: Slot [17] registered
Dec 16 13:10:49.882119 kernel: acpiphp: Slot [18] registered
Dec 16 13:10:49.882128 kernel: acpiphp: Slot [19] registered
Dec 16 13:10:49.882136 kernel: acpiphp: Slot [20] registered
Dec 16 13:10:49.882145 kernel: acpiphp: Slot [21] registered
Dec 16 13:10:49.882154 kernel: acpiphp: Slot [22] registered
Dec 16 13:10:49.882163 kernel: acpiphp: Slot [23] registered
Dec 16 13:10:49.882174 kernel: acpiphp: Slot [24] registered
Dec 16 13:10:49.882183 kernel: acpiphp: Slot [25] registered
Dec 16 13:10:49.882192 kernel: acpiphp: Slot [26] registered
Dec 16 13:10:49.882201 kernel: acpiphp: Slot [27] registered
Dec 16 13:10:49.882209 kernel: acpiphp: Slot [28] registered
Dec 16 13:10:49.882218 kernel: acpiphp: Slot [29] registered
Dec 16 13:10:49.882227 kernel: acpiphp: Slot [30] registered
Dec 16 13:10:49.882235 kernel: acpiphp: Slot [31] registered
Dec 16 13:10:49.882244 kernel: PCI host bridge to bus 0000:00
Dec 16 13:10:49.882368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:10:49.882463 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:10:49.882546 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:10:49.882627 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 13:10:49.882723 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:10:49.882803 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:10:49.882919 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:10:49.883025 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:10:49.883123 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 13:10:49.883214 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 13:10:49.883305 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 13:10:49.883395 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 13:10:49.883487 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 13:10:49.883580 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 13:10:49.883757 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 13:10:49.883852 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 13:10:49.885695 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:10:49.885836 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 13:10:49.885949 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:10:49.886039 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:10:49.886144 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:10:49.886234 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 13:10:49.886328 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:10:49.886416 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 13:10:49.886428 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:10:49.886437 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:10:49.886447 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:10:49.886459 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:10:49.886468 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:10:49.886477 kernel: iommu: Default domain type: Translated
Dec 16 13:10:49.886486 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:10:49.886495 kernel: efivars: Registered efivars operations
Dec 16 13:10:49.886504 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:10:49.886512 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:10:49.886521 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 13:10:49.886530 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 13:10:49.886541 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 13:10:49.886627 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 13:10:49.886751 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 13:10:49.886839 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:10:49.886851 kernel: vgaarb: loaded
Dec 16 13:10:49.886860 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 13:10:49.886869 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 13:10:49.886878 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:10:49.886887 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:10:49.886900 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:10:49.886909 kernel: pnp: PnP ACPI init
Dec 16 13:10:49.886918 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:10:49.886927 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:10:49.886937 kernel: NET: Registered PF_INET protocol family
Dec 16 13:10:49.886945 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:10:49.886955 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:10:49.886964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:10:49.886975 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:10:49.886984 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:10:49.886993 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:10:49.887002 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:10:49.887011 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:10:49.887020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:10:49.887029 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:10:49.887116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:10:49.887197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:10:49.887280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:10:49.887361 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 13:10:49.887442 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:10:49.887535 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:10:49.887547 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:10:49.887557 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:10:49.887566 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 13:10:49.887575 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:10:49.887586 kernel: Initialise system trusted keyrings
Dec 16 13:10:49.887596 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 13:10:49.887604 kernel: Key type asymmetric registered
Dec 16 13:10:49.887613 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:10:49.887622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:10:49.887702 kernel: io scheduler mq-deadline registered
Dec 16 13:10:49.887711 kernel: io scheduler kyber registered
Dec 16 13:10:49.887720 kernel: io scheduler bfq registered
Dec 16 13:10:49.887729 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:10:49.887741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:10:49.887751 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:10:49.887760 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:10:49.887769 kernel: i8042: Warning: Keylock active
Dec 16 13:10:49.887778 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:10:49.887786 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:10:49.887888 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 13:10:49.888006 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 13:10:49.888099 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:10:49 UTC (1765890649)
Dec 16 13:10:49.888184 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 13:10:49.888213 kernel: intel_pstate: CPU model not supported
Dec 16 13:10:49.888225 kernel: efifb: probing for efifb
Dec 16 13:10:49.888235 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 13:10:49.888245 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 13:10:49.888254 kernel: efifb: scrolling: redraw
Dec 16 13:10:49.888264 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:10:49.888274 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 13:10:49.888286 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:10:49.888295 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:10:49.888305 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:10:49.888315 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:10:49.888324 kernel: Segment Routing with IPv6
Dec 16 13:10:49.888333 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:10:49.888343 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:10:49.888352 kernel: Key type dns_resolver registered
Dec 16 13:10:49.888361 kernel: IPI shorthand broadcast: enabled
Dec 16 13:10:49.888374 kernel: sched_clock: Marking stable (2568003599, 144015872)->(2781060375, -69040904)
Dec 16 13:10:49.888384 kernel: registered taskstats version 1
Dec 16 13:10:49.888393 kernel: Loading compiled-in X.509 certificates
Dec 16 13:10:49.888403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:10:49.888412 kernel: Demotion targets for Node 0: null
Dec 16 13:10:49.888421 kernel: Key type .fscrypt registered
Dec 16 13:10:49.888430 kernel: Key type fscrypt-provisioning registered
Dec 16 13:10:49.888439 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:10:49.888449 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:10:49.888460 kernel: ima: No architecture policies found
Dec 16 13:10:49.888470 kernel: clk: Disabling unused clocks
Dec 16 13:10:49.888479 kernel: Warning: unable to open an initial console.
Dec 16 13:10:49.888489 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:10:49.888498 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:10:49.888510 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:10:49.888522 kernel: Run /init as init process
Dec 16 13:10:49.888531 kernel: with arguments:
Dec 16 13:10:49.888541 kernel: /init
Dec 16 13:10:49.888550 kernel: with environment:
Dec 16 13:10:49.888559 kernel: HOME=/
Dec 16 13:10:49.888569 kernel: TERM=linux
Dec 16 13:10:49.888582 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:10:49.888596 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:10:49.888609 systemd[1]: Detected virtualization amazon.
Dec 16 13:10:49.888618 systemd[1]: Detected architecture x86-64.
Dec 16 13:10:49.888627 systemd[1]: Running in initrd.
Dec 16 13:10:49.888651 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:10:49.888662 systemd[1]: Hostname set to .
Dec 16 13:10:49.888671 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:10:49.888681 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:10:49.888691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:10:49.888704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:10:49.888716 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:10:49.888725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:10:49.888735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:10:49.888746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:10:49.888756 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:10:49.888769 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:10:49.888779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:10:49.888789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:10:49.888799 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:10:49.888809 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:10:49.888819 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:10:49.888828 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:10:49.888838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:10:49.888848 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:10:49.888861 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:10:49.888871 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:10:49.888881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:10:49.888891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:10:49.888900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:10:49.888910 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:10:49.888920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:10:49.888931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:10:49.888943 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:10:49.888953 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:10:49.888963 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:10:49.888973 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:10:49.888983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:10:49.888993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:49.889003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:10:49.889017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:10:49.889053 systemd-journald[188]: Collecting audit messages is disabled.
Dec 16 13:10:49.889080 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:10:49.889091 systemd-journald[188]: Journal started
Dec 16 13:10:49.889113 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2265945a989e932b5daff305586644) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:10:49.894667 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:10:49.896770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:10:49.897896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:10:49.902759 systemd-modules-load[189]: Inserted module 'overlay'
Dec 16 13:10:49.918187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:49.921730 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:10:49.922881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:10:49.931696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:10:49.933276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:10:49.935986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:10:49.944654 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:10:49.949762 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 16 13:10:49.951322 kernel: Bridge firewalling registered
Dec 16 13:10:49.952057 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:10:49.956715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:10:49.958462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:10:49.969685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:10:49.973795 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:10:49.978077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:10:49.989816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:10:50.006472 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:50.012840 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:10:50.056169 systemd-resolved[230]: Positive Trust Anchors:
Dec 16 13:10:50.057218 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:10:50.057285 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:10:50.066140 systemd-resolved[230]: Defaulting to hostname 'linux'.
Dec 16 13:10:50.068623 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:10:50.069406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:10:50.110668 kernel: SCSI subsystem initialized
Dec 16 13:10:50.121664 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:10:50.132661 kernel: iscsi: registered transport (tcp)
Dec 16 13:10:50.153774 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:10:50.153853 kernel: QLogic iSCSI HBA Driver
Dec 16 13:10:50.173604 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:10:50.190827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:10:50.194029 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:10:50.241896 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:10:50.244188 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:10:50.298681 kernel: raid6: avx512x4 gen() 17418 MB/s
Dec 16 13:10:50.316662 kernel: raid6: avx512x2 gen() 17254 MB/s
Dec 16 13:10:50.334662 kernel: raid6: avx512x1 gen() 17345 MB/s
Dec 16 13:10:50.352658 kernel: raid6: avx2x4 gen() 17512 MB/s
Dec 16 13:10:50.370672 kernel: raid6: avx2x2 gen() 17380 MB/s
Dec 16 13:10:50.389027 kernel: raid6: avx2x1 gen() 13502 MB/s
Dec 16 13:10:50.389099 kernel: raid6: using algorithm avx2x4 gen() 17512 MB/s
Dec 16 13:10:50.407887 kernel: raid6: .... xor() 6942 MB/s, rmw enabled
Dec 16 13:10:50.408045 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 13:10:50.429673 kernel: xor: automatically using best checksumming function avx
Dec 16 13:10:50.599669 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:10:50.606458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:10:50.608886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:10:50.635617 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Dec 16 13:10:50.642251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:10:50.645300 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:10:50.666004 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Dec 16 13:10:50.694122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:10:50.696382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:10:50.758662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:10:50.763806 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:10:50.862797 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 16 13:10:50.863070 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 16 13:10:50.872680 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:10:50.887697 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:10:50.892657 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 13:10:50.892924 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 16 13:10:50.895693 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 13:10:50.903151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:10:50.904014 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:07:ec:a1:dc:0d
Dec 16 13:10:50.904463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:50.905286 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:50.911731 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 13:10:50.911427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:50.913073 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:10:50.924821 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:10:50.924901 kernel: GPT:9289727 != 33554431
Dec 16 13:10:50.924915 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:10:50.926337 kernel: GPT:9289727 != 33554431
Dec 16 13:10:50.926233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:10:50.930574 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:10:50.930599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:10:50.926447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:50.931338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:50.933840 (udev-worker)[485]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:10:50.941674 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:10:50.980670 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:10:50.983684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:51.052061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 16 13:10:51.094422 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 16 13:10:51.096408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 16 13:10:51.115544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 16 13:10:51.126179 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:10:51.145514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 13:10:51.146255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:10:51.147495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:10:51.148802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:10:51.150492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:10:51.152775 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:10:51.173518 disk-uuid[673]: Primary Header is updated.
Dec 16 13:10:51.173518 disk-uuid[673]: Secondary Entries is updated.
Dec 16 13:10:51.173518 disk-uuid[673]: Secondary Header is updated.
Dec 16 13:10:51.178545 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:10:51.183658 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:10:52.195364 disk-uuid[678]: The operation has completed successfully.
Dec 16 13:10:52.196804 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:10:52.336241 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:10:52.336373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:10:52.374576 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:10:52.389555 sh[941]: Success
Dec 16 13:10:52.426961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:10:52.427042 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:10:52.429791 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:10:52.440662 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 16 13:10:52.547430 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:10:52.550184 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:10:52.562804 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:10:52.585659 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (964)
Dec 16 13:10:52.588676 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:10:52.588753 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:52.617585 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:10:52.617668 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:10:52.617682 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:10:52.631731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:10:52.632845 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:10:52.633390 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:10:52.634154 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:10:52.636057 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:10:52.682716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (998)
Dec 16 13:10:52.686852 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:52.686917 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:52.704977 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:10:52.705058 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:10:52.714696 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:52.716237 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:10:52.719799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:10:52.755347 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:10:52.758027 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:10:52.804982 systemd-networkd[1133]: lo: Link UP
Dec 16 13:10:52.804996 systemd-networkd[1133]: lo: Gained carrier
Dec 16 13:10:52.806816 systemd-networkd[1133]: Enumeration completed
Dec 16 13:10:52.807247 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:10:52.807253 systemd-networkd[1133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:10:52.809743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:10:52.811657 systemd[1]: Reached target network.target - Network.
Dec 16 13:10:52.812591 systemd-networkd[1133]: eth0: Link UP
Dec 16 13:10:52.812597 systemd-networkd[1133]: eth0: Gained carrier
Dec 16 13:10:52.812616 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:10:52.824745 systemd-networkd[1133]: eth0: DHCPv4 address 172.31.28.132/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 13:10:53.000970 ignition[1088]: Ignition 2.22.0
Dec 16 13:10:53.000986 ignition[1088]: Stage: fetch-offline
Dec 16 13:10:53.001216 ignition[1088]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.001228 ignition[1088]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.001944 ignition[1088]: Ignition finished successfully
Dec 16 13:10:53.004565 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:10:53.006153 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:10:53.039032 ignition[1142]: Ignition 2.22.0
Dec 16 13:10:53.039703 ignition[1142]: Stage: fetch
Dec 16 13:10:53.040197 ignition[1142]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.040209 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.040315 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.057656 ignition[1142]: PUT result: OK
Dec 16 13:10:53.060694 ignition[1142]: parsed url from cmdline: ""
Dec 16 13:10:53.060794 ignition[1142]: no config URL provided
Dec 16 13:10:53.060804 ignition[1142]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:10:53.060818 ignition[1142]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:10:53.060844 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.061576 ignition[1142]: PUT result: OK
Dec 16 13:10:53.061671 ignition[1142]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 13:10:53.062701 ignition[1142]: GET result: OK
Dec 16 13:10:53.062800 ignition[1142]: parsing config with SHA512: 958dabec2e6012e55146e2b72d3c8346243c74e92de14edd7192d20a711a3189146ce74701f39dd6e481ac62f9622fc770563e8b326836f019592819bd205cf1
Dec 16 13:10:53.071836 unknown[1142]: fetched base config from "system"
Dec 16 13:10:53.071852 unknown[1142]: fetched base config from "system"
Dec 16 13:10:53.072541 ignition[1142]: fetch: fetch complete
Dec 16 13:10:53.071860 unknown[1142]: fetched user config from "aws"
Dec 16 13:10:53.072548 ignition[1142]: fetch: fetch passed
Dec 16 13:10:53.072611 ignition[1142]: Ignition finished successfully
Dec 16 13:10:53.075849 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:10:53.077361 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:10:53.107563 ignition[1148]: Ignition 2.22.0
Dec 16 13:10:53.107581 ignition[1148]: Stage: kargs
Dec 16 13:10:53.108062 ignition[1148]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.108075 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.108671 ignition[1148]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.109532 ignition[1148]: PUT result: OK
Dec 16 13:10:53.112052 ignition[1148]: kargs: kargs passed
Dec 16 13:10:53.112129 ignition[1148]: Ignition finished successfully
Dec 16 13:10:53.113840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:10:53.115716 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:10:53.150656 ignition[1154]: Ignition 2.22.0
Dec 16 13:10:53.150671 ignition[1154]: Stage: disks
Dec 16 13:10:53.151089 ignition[1154]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.151102 ignition[1154]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.151213 ignition[1154]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.152202 ignition[1154]: PUT result: OK
Dec 16 13:10:53.155616 ignition[1154]: disks: disks passed
Dec 16 13:10:53.155696 ignition[1154]: Ignition finished successfully
Dec 16 13:10:53.158317 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:10:53.158969 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:10:53.159356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:10:53.160124 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:10:53.160697 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:10:53.161262 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:10:53.162941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:10:53.214475 systemd-fsck[1163]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:10:53.217788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:10:53.220189 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:10:53.395655 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:10:53.396760 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:10:53.397902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:10:53.400262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:10:53.402762 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:10:53.406262 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:10:53.406792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:10:53.406834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:10:53.415045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:10:53.417270 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:10:53.433675 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1182)
Dec 16 13:10:53.438425 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:53.438524 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:53.447677 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:10:53.447745 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:10:53.450434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:10:53.551434 initrd-setup-root[1208]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:10:53.559463 initrd-setup-root[1215]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:10:53.564178 initrd-setup-root[1222]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:10:53.568266 initrd-setup-root[1229]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:10:53.683156 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:10:53.685526 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:10:53.688788 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:10:53.703371 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:10:53.705838 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:53.743251 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:10:53.747346 ignition[1296]: INFO : Ignition 2.22.0
Dec 16 13:10:53.747346 ignition[1296]: INFO : Stage: mount
Dec 16 13:10:53.748953 ignition[1296]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.748953 ignition[1296]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.748953 ignition[1296]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.750518 ignition[1296]: INFO : PUT result: OK
Dec 16 13:10:53.752217 ignition[1296]: INFO : mount: mount passed
Dec 16 13:10:53.752705 ignition[1296]: INFO : Ignition finished successfully
Dec 16 13:10:53.754477 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:10:53.756097 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:10:53.775040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:10:53.809706 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1308)
Dec 16 13:10:53.814480 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:53.814548 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:53.822740 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:10:53.822809 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:10:53.825701 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:10:53.863249 ignition[1324]: INFO : Ignition 2.22.0
Dec 16 13:10:53.863249 ignition[1324]: INFO : Stage: files
Dec 16 13:10:53.864747 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:53.864747 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:53.864747 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:53.865978 ignition[1324]: INFO : PUT result: OK
Dec 16 13:10:53.875152 ignition[1324]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:10:53.876788 ignition[1324]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:10:53.876788 ignition[1324]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:10:53.884209 ignition[1324]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:10:53.885273 ignition[1324]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:10:53.886698 unknown[1324]: wrote ssh authorized keys file for user: core
Dec 16 13:10:53.887257 ignition[1324]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:10:53.889608 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:10:53.890309 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:10:53.991716 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:10:54.133272 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:10:54.133272 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:10:54.135128 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:10:54.360154 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:10:54.475541 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:10:54.475541 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:10:54.477295 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:10:54.483096 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:10:54.484186 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:10:54.484186 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:10:54.486276 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:10:54.486276 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:10:54.486276 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:10:54.562833 systemd-networkd[1133]: eth0: Gained IPv6LL
Dec 16 13:10:54.735974 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:10:55.009894 ignition[1324]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:10:55.009894 ignition[1324]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:10:55.012597 ignition[1324]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:10:55.020263 ignition[1324]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:10:55.020263 ignition[1324]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:10:55.020263 ignition[1324]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:10:55.025508 ignition[1324]: INFO : files: op(e): [finished] setting preset to
enabled for "prepare-helm.service" Dec 16 13:10:55.025508 ignition[1324]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:10:55.025508 ignition[1324]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:10:55.025508 ignition[1324]: INFO : files: files passed Dec 16 13:10:55.025508 ignition[1324]: INFO : Ignition finished successfully Dec 16 13:10:55.024847 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:10:55.028854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:10:55.034801 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:10:55.043750 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:10:55.044005 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:10:55.052180 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.052180 initrd-setup-root-after-ignition[1355]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.055586 initrd-setup-root-after-ignition[1359]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.055829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:10:55.057594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:10:55.059355 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:10:55.105179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:10:55.105296 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 16 13:10:55.106722 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:10:55.107389 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:10:55.108247 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:10:55.109058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:10:55.137583 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:10:55.140077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:10:55.180052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:10:55.181210 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:10:55.182123 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:10:55.182984 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:10:55.183118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:10:55.184480 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:10:55.185353 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:10:55.185817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:10:55.186512 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:10:55.187325 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:10:55.188299 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:10:55.189040 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:10:55.189644 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:10:55.190420 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:10:55.191434 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:10:55.192353 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:10:55.195604 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:10:55.195770 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:10:55.196901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:10:55.197685 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:10:55.198215 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:10:55.198328 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:10:55.198965 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:10:55.199087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:10:55.200141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:10:55.200423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:10:55.201155 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:10:55.201347 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:10:55.204766 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:10:55.206713 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:10:55.208723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:10:55.208981 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:10:55.210836 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:10:55.211004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:10:55.218622 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:10:55.220743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:10:55.245828 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:10:55.251518 ignition[1379]: INFO : Ignition 2.22.0
Dec 16 13:10:55.253377 ignition[1379]: INFO : Stage: umount
Dec 16 13:10:55.253377 ignition[1379]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:55.253377 ignition[1379]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:10:55.253377 ignition[1379]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:10:55.252917 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:10:55.257059 ignition[1379]: INFO : PUT result: OK
Dec 16 13:10:55.253056 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:10:55.259215 ignition[1379]: INFO : umount: umount passed
Dec 16 13:10:55.259215 ignition[1379]: INFO : Ignition finished successfully
Dec 16 13:10:55.260049 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:10:55.260242 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:10:55.261131 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:10:55.261196 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:10:55.261716 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:10:55.261777 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:10:55.262352 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:10:55.262407 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:10:55.263009 systemd[1]: Stopped target network.target - Network.
Dec 16 13:10:55.263580 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:10:55.263662 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:10:55.264375 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:10:55.265020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:10:55.268735 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:10:55.269231 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:10:55.270241 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:10:55.270926 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:10:55.270988 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:10:55.271579 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:10:55.271649 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:10:55.272335 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:10:55.272410 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:10:55.273050 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:10:55.273108 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:10:55.273708 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:10:55.273774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:10:55.274519 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:10:55.275170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:10:55.280788 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:10:55.280929 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:10:55.284944 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:10:55.285353 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:10:55.285504 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:10:55.288087 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:10:55.289303 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:10:55.289857 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:10:55.289920 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:10:55.291845 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:10:55.293057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:10:55.293138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:10:55.295272 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:10:55.295343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:10:55.297243 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:10:55.297313 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:10:55.298593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:10:55.298699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:10:55.299482 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:10:55.303141 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:10:55.303248 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:10:55.320578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:10:55.320801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:10:55.322303 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:10:55.322402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:10:55.323835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:10:55.323893 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:10:55.324742 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:10:55.324818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:10:55.327817 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:10:55.327981 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:10:55.329217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:10:55.329301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:10:55.331479 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:10:55.334704 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:10:55.334791 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:10:55.335812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:10:55.335970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:10:55.338481 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:10:55.338534 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:10:55.339410 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:10:55.339453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:10:55.340087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:10:55.340146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:55.345576 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:10:55.347754 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:10:55.347829 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:10:55.347893 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:10:55.348428 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:10:55.348551 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:10:55.354211 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:10:55.354351 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:10:55.355670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:10:55.358855 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:10:55.380922 systemd[1]: Switching root.
Dec 16 13:10:55.408581 systemd-journald[188]: Journal stopped
Dec 16 13:10:56.863691 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:10:56.863774 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:10:56.863798 kernel: SELinux: policy capability open_perms=1
Dec 16 13:10:56.863820 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:10:56.863838 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:10:56.863878 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:10:56.863897 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:10:56.863916 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:10:56.863942 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:10:56.863961 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:10:56.863981 kernel: audit: type=1403 audit(1765890655.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:10:56.864009 systemd[1]: Successfully loaded SELinux policy in 67.240ms.
Dec 16 13:10:56.864036 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.999ms.
Dec 16 13:10:56.864057 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:10:56.864082 systemd[1]: Detected virtualization amazon.
Dec 16 13:10:56.864102 systemd[1]: Detected architecture x86-64.
Dec 16 13:10:56.864122 systemd[1]: Detected first boot.
Dec 16 13:10:56.864142 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:10:56.864167 zram_generator::config[1424]: No configuration found.
Dec 16 13:10:56.864195 kernel: Guest personality initialized and is inactive
Dec 16 13:10:56.864215 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:10:56.864238 kernel: Initialized host personality
Dec 16 13:10:56.864260 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:10:56.864279 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:10:56.864304 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:10:56.864324 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:10:56.864344 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:10:56.864361 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:10:56.864379 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:10:56.864397 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:10:56.864416 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:10:56.864437 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:10:56.864455 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:10:56.864474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:10:56.864495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:10:56.864518 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:10:56.864538 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:10:56.864559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:10:56.864578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:10:56.864597 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:10:56.864622 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:10:56.866687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:10:56.866716 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:10:56.866736 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:10:56.866754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:10:56.866773 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:10:56.866791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:10:56.866815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:10:56.866833 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:10:56.866853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:10:56.866874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:10:56.866893 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:10:56.866912 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:10:56.866930 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:10:56.866949 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:10:56.866968 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:10:56.866990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:10:56.867008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:10:56.867027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:10:56.867045 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:10:56.867063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:10:56.867081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:10:56.867100 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:10:56.867118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:10:56.867137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:10:56.867157 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:10:56.867176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:10:56.867195 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:10:56.867214 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:10:56.867233 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:10:56.867252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:10:56.867273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:10:56.867292 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:10:56.867310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:10:56.867331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:10:56.867349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:10:56.867368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:10:56.867386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:10:56.867405 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:10:56.867424 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:10:56.867444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:10:56.867463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:10:56.867484 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:10:56.868702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:10:56.868738 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:10:56.868758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:10:56.868781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:10:56.868801 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:10:56.868824 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:10:56.868844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:10:56.868864 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:10:56.868884 systemd[1]: Stopped verity-setup.service.
Dec 16 13:10:56.868906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:10:56.868927 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:10:56.868946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:10:56.868967 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:10:56.868986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:10:56.869005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:10:56.869024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:10:56.869044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:10:56.869064 kernel: loop: module loaded
Dec 16 13:10:56.869087 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:10:56.869108 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:10:56.869128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:10:56.869148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:10:56.869168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:10:56.869188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:10:56.869208 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:10:56.869228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:10:56.869248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:10:56.869272 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:10:56.869293 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:10:56.869314 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:10:56.869334 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:10:56.869354 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:10:56.869375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:10:56.869396 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:10:56.869417 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:10:56.869481 systemd-journald[1514]: Collecting audit messages is disabled.
Dec 16 13:10:56.869525 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:10:56.869546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:10:56.869567 systemd-journald[1514]: Journal started
Dec 16 13:10:56.869611 systemd-journald[1514]: Runtime Journal (/run/log/journal/ec2265945a989e932b5daff305586644) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:10:56.463145 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:10:56.482939 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:10:56.483442 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:10:56.885662 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:10:56.885759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:10:56.897655 kernel: ACPI: bus type drm_connector registered
Dec 16 13:10:56.902801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:10:56.902869 kernel: fuse: init (API version 7.41)
Dec 16 13:10:56.907655 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:10:56.919457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:10:56.926817 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:10:56.930658 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:10:56.938712 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:10:56.946994 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:10:56.948171 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:10:56.948416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:10:56.950303 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:10:56.951017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:10:56.955226 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:10:56.956912 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:10:56.964881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:10:57.001200 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:10:57.005484 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:10:57.015662 kernel: loop0: detected capacity change from 0 to 128560
Dec 16 13:10:57.010535 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:10:57.040027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:10:57.060776 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:10:57.065029 systemd-tmpfiles[1539]: ACLs are not supported, ignoring.
Dec 16 13:10:57.065058 systemd-tmpfiles[1539]: ACLs are not supported, ignoring.
Dec 16 13:10:57.075765 systemd-journald[1514]: Time spent on flushing to /var/log/journal/ec2265945a989e932b5daff305586644 is 66.185ms for 1032 entries. Dec 16 13:10:57.075765 systemd-journald[1514]: System Journal (/var/log/journal/ec2265945a989e932b5daff305586644) is 8M, max 195.6M, 187.6M free. Dec 16 13:10:57.156149 systemd-journald[1514]: Received client request to flush runtime journal. Dec 16 13:10:57.156221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:10:57.156247 kernel: loop1: detected capacity change from 0 to 110984 Dec 16 13:10:57.088271 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:10:57.093929 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:10:57.161802 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:10:57.180669 kernel: loop2: detected capacity change from 0 to 219144 Dec 16 13:10:57.197270 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:10:57.200550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:10:57.246827 systemd-tmpfiles[1578]: ACLs are not supported, ignoring. Dec 16 13:10:57.247204 systemd-tmpfiles[1578]: ACLs are not supported, ignoring. Dec 16 13:10:57.289792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:10:57.485512 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:10:57.489619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:10:57.514972 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 16 13:10:57.589686 kernel: loop3: detected capacity change from 0 to 72368 Dec 16 13:10:57.723733 kernel: loop4: detected capacity change from 0 to 128560 Dec 16 13:10:57.760701 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 13:10:57.797264 kernel: loop6: detected capacity change from 0 to 219144 Dec 16 13:10:57.865865 kernel: loop7: detected capacity change from 0 to 72368 Dec 16 13:10:57.879028 ldconfig[1534]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:10:57.879704 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:10:57.881950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:10:57.884714 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:10:57.886341 (sd-merge)[1585]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 16 13:10:57.887190 (sd-merge)[1585]: Merged extensions into '/usr'. Dec 16 13:10:57.896983 systemd[1]: Reload requested from client PID 1538 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:10:57.897016 systemd[1]: Reloading... Dec 16 13:10:57.932518 systemd-udevd[1587]: Using default interface naming scheme 'v255'. Dec 16 13:10:57.987350 zram_generator::config[1609]: No configuration found. Dec 16 13:10:58.188804 (udev-worker)[1615]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 13:10:58.310789 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:10:58.327697 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:10:58.357246 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:10:58.357321 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 16 13:10:58.385675 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 13:10:58.392668 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 13:10:58.550692 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:10:58.551402 systemd[1]: Reloading finished in 653 ms. Dec 16 13:10:58.568648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:10:58.574409 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:10:58.596890 systemd[1]: Starting ensure-sysext.service... Dec 16 13:10:58.603879 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:10:58.606788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:10:58.633885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:10:58.646802 systemd[1]: Reload requested from client PID 1729 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:10:58.646826 systemd[1]: Reloading... Dec 16 13:10:58.712684 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:10:58.712736 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:10:58.713161 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 16 13:10:58.713566 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:10:58.716988 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:10:58.717493 systemd-tmpfiles[1731]: ACLs are not supported, ignoring. Dec 16 13:10:58.717568 systemd-tmpfiles[1731]: ACLs are not supported, ignoring. Dec 16 13:10:58.726447 systemd-tmpfiles[1731]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:10:58.727674 systemd-tmpfiles[1731]: Skipping /boot Dec 16 13:10:58.744115 systemd-tmpfiles[1731]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:10:58.745694 systemd-tmpfiles[1731]: Skipping /boot Dec 16 13:10:58.845660 zram_generator::config[1810]: No configuration found. Dec 16 13:10:59.017772 systemd-networkd[1730]: lo: Link UP Dec 16 13:10:59.017786 systemd-networkd[1730]: lo: Gained carrier Dec 16 13:10:59.021822 systemd-networkd[1730]: Enumeration completed Dec 16 13:10:59.022316 systemd-networkd[1730]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:10:59.022322 systemd-networkd[1730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:10:59.024981 systemd-networkd[1730]: eth0: Link UP Dec 16 13:10:59.025170 systemd-networkd[1730]: eth0: Gained carrier Dec 16 13:10:59.025202 systemd-networkd[1730]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:10:59.035815 systemd-networkd[1730]: eth0: DHCPv4 address 172.31.28.132/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:10:59.207290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:10:59.208409 systemd[1]: Reloading finished in 561 ms. 
Dec 16 13:10:59.219818 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:10:59.220601 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:10:59.238293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:10:59.279569 systemd[1]: Finished ensure-sysext.service. Dec 16 13:10:59.293091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:59.294470 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:10:59.298817 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:10:59.299757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:10:59.301960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:10:59.305436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:10:59.312421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:10:59.317963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:10:59.318864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:10:59.320503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:10:59.321158 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:10:59.323930 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Dec 16 13:10:59.327910 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:10:59.331551 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:10:59.340911 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:10:59.354532 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:10:59.365887 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:10:59.373864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:10:59.375727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:59.376870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:10:59.377687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:10:59.379195 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:10:59.381340 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:10:59.404136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:10:59.405056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:10:59.408341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:10:59.418565 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:10:59.418841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:10:59.420764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 16 13:10:59.427056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:10:59.430313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:10:59.446513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:10:59.449673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:10:59.474258 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:10:59.481998 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:10:59.494383 augenrules[1929]: No rules Dec 16 13:10:59.495474 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:10:59.496071 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:10:59.504903 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:10:59.506395 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:10:59.531283 systemd-resolved[1894]: Positive Trust Anchors: Dec 16 13:10:59.531304 systemd-resolved[1894]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:10:59.531356 systemd-resolved[1894]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:10:59.536856 systemd-resolved[1894]: Defaulting to hostname 'linux'. Dec 16 13:10:59.538650 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:10:59.539303 systemd[1]: Reached target network.target - Network. Dec 16 13:10:59.539746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:10:59.569524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:10:59.570229 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:10:59.570778 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:10:59.571186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:10:59.571539 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:10:59.572273 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:10:59.572719 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:10:59.573038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Dec 16 13:10:59.573357 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:10:59.573393 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:10:59.573708 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:10:59.576202 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:10:59.578090 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:10:59.580862 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:10:59.581724 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:10:59.582151 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:10:59.584618 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:10:59.586436 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:10:59.587536 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:10:59.588980 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:10:59.589379 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:10:59.589853 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:10:59.589889 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:10:59.590989 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:10:59.595818 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:10:59.598807 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:10:59.602705 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 16 13:10:59.606740 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:10:59.612620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:10:59.613722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:10:59.619013 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:10:59.636974 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:10:59.642378 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:10:59.657762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:10:59.661765 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 13:10:59.674520 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:10:59.680845 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:10:59.696078 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:10:59.698969 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:10:59.700797 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:10:59.702658 extend-filesystems[1946]: Found /dev/nvme0n1p6 Dec 16 13:10:59.702913 systemd[1]: Starting update-engine.service - Update Engine... 
Dec 16 13:10:59.710135 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:10:59.728186 jq[1945]: false Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: ---------------------------------------------------- Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: corporation. Support and training for ntp-4 are Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: available at https://www.nwtime.org/support Dec 16 13:10:59.728295 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: ---------------------------------------------------- Dec 16 13:10:59.713942 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:10:59.710201 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:10:59.743498 extend-filesystems[1946]: Found /dev/nvme0n1p9 Dec 16 13:10:59.720943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 16 13:10:59.710214 ntpd[1951]: ---------------------------------------------------- Dec 16 13:10:59.751003 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: proto: precision = 0.097 usec (-23) Dec 16 13:10:59.751051 extend-filesystems[1946]: Checking size of /dev/nvme0n1p9 Dec 16 13:10:59.767791 kernel: ntpd[1951]: segfault at 24 ip 000055a25d625aeb sp 00007ffcd62545c0 error 4 in ntpd[68aeb,55a25d5c3000+80000] likely on CPU 0 (core 0, socket 0) Dec 16 13:10:59.767916 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 16 13:10:59.767970 update_engine[1961]: I20251216 13:10:59.748555 1961 main.cc:92] Flatcar Update Engine starting Dec 16 13:10:59.721880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:10:59.710224 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: basedate set to 2025-11-30 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: gps base set to 2025-11-30 (week 2395) Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Listen normally on 3 eth0 172.31.28.132:123 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: Listen normally on 4 lo [::1]:123 Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: bind(21) AF_INET6 [fe80::407:ecff:fea1:dc0d%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:10:59.768518 ntpd[1951]: 16 Dec 13:10:59 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::407:ecff:fea1:dc0d%2]:123 Dec 16 
13:10:59.768869 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing passwd entry cache Dec 16 13:10:59.782595 jq[1963]: true Dec 16 13:10:59.722151 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:10:59.710234 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:10:59.783063 extend-filesystems[1946]: Resized partition /dev/nvme0n1p9 Dec 16 13:10:59.776596 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:10:59.710244 ntpd[1951]: corporation. Support and training for ntp-4 are Dec 16 13:10:59.777997 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:10:59.710254 ntpd[1951]: available at https://www.nwtime.org/support Dec 16 13:10:59.710263 ntpd[1951]: ---------------------------------------------------- Dec 16 13:10:59.746852 ntpd[1951]: proto: precision = 0.097 usec (-23) Dec 16 13:10:59.751336 oslogin_cache_refresh[1947]: Refreshing passwd entry cache Dec 16 13:10:59.754147 ntpd[1951]: basedate set to 2025-11-30 Dec 16 13:10:59.754167 ntpd[1951]: gps base set to 2025-11-30 (week 2395) Dec 16 13:10:59.754316 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:10:59.754348 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:10:59.793165 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting users, quitting Dec 16 13:10:59.793165 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 13:10:59.793165 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Refreshing group entry cache Dec 16 13:10:59.793277 extend-filesystems[1980]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:10:59.800128 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 16 13:10:59.754596 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:10:59.800281 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Failure getting groups, quitting Dec 16 13:10:59.800281 google_oslogin_nss_cache[1947]: oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:10:59.754623 ntpd[1951]: Listen normally on 3 eth0 172.31.28.132:123 Dec 16 13:10:59.754686 ntpd[1951]: Listen normally on 4 lo [::1]:123 Dec 16 13:10:59.808223 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:10:59.818095 coreos-metadata[1942]: Dec 16 13:10:59.814 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:10:59.754716 ntpd[1951]: bind(21) AF_INET6 [fe80::407:ecff:fea1:dc0d%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:10:59.811721 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:10:59.754737 ntpd[1951]: unable to create socket on eth0 (5) for [fe80::407:ecff:fea1:dc0d%2]:123 Dec 16 13:10:59.790791 oslogin_cache_refresh[1947]: Failure getting users, quitting Dec 16 13:10:59.790816 oslogin_cache_refresh[1947]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:10:59.823798 coreos-metadata[1942]: Dec 16 13:10:59.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 13:10:59.790876 oslogin_cache_refresh[1947]: Refreshing group entry cache Dec 16 13:10:59.799169 oslogin_cache_refresh[1947]: Failure getting groups, quitting Dec 16 13:10:59.799186 oslogin_cache_refresh[1947]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 16 13:10:59.825384 systemd-coredump[1988]: Process 1951 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 13:10:59.830843 coreos-metadata[1942]: Dec 16 13:10:59.827 INFO Fetch successful Dec 16 13:10:59.830843 coreos-metadata[1942]: Dec 16 13:10:59.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 13:10:59.830843 coreos-metadata[1942]: Dec 16 13:10:59.829 INFO Fetch successful Dec 16 13:10:59.830843 coreos-metadata[1942]: Dec 16 13:10:59.829 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.832 INFO Fetch successful Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.835 INFO Fetch successful Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.837 INFO Fetch failed with 404: resource not found Dec 16 13:10:59.839518 coreos-metadata[1942]: Dec 16 13:10:59.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 13:10:59.833424 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 16 13:10:59.843408 systemd[1]: Started systemd-coredump@0-1988-0.service - Process Core Dump (PID 1988/UID 0). Dec 16 13:10:59.846204 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:10:59.846475 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 16 13:10:59.851789 coreos-metadata[1942]: Dec 16 13:10:59.851 INFO Fetch successful Dec 16 13:10:59.851789 coreos-metadata[1942]: Dec 16 13:10:59.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.853 INFO Fetch successful Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.856 INFO Fetch successful Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.863 INFO Fetch successful Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.863 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 13:10:59.925379 coreos-metadata[1942]: Dec 16 13:10:59.865 INFO Fetch successful Dec 16 13:10:59.925768 update_engine[1961]: I20251216 13:10:59.909885 1961 update_check_scheduler.cc:74] Next update check in 10m23s Dec 16 13:10:59.925856 tar[1966]: linux-amd64/LICENSE Dec 16 13:10:59.925856 tar[1966]: linux-amd64/helm Dec 16 13:10:59.880965 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 13:10:59.880354 dbus-daemon[1943]: [system] SELinux support is enabled Dec 16 13:10:59.882093 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:10:59.915455 dbus-daemon[1943]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1730 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:10:59.908001 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:10:59.908041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:10:59.909095 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:10:59.909123 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:10:59.922875 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:10:59.939026 jq[1978]: true Dec 16 13:10:59.940161 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:10:59.946831 systemd-logind[1960]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:10:59.947269 systemd-logind[1960]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 13:10:59.947300 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:10:59.948003 systemd-logind[1960]: New seat seat0. Dec 16 13:10:59.949054 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 13:10:59.968218 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:10:59.985237 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:11:00.003041 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 16 13:11:00.039729 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:11:00.039729 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 13:11:00.039729 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 16 13:11:00.038304 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:11:00.065929 extend-filesystems[1946]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:11:00.045908 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:11:00.067765 systemd-networkd[1730]: eth0: Gained IPv6LL Dec 16 13:11:00.076483 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 13:11:00.085673 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:11:00.089545 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:11:00.109420 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:11:00.118074 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 13:11:00.125244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:00.136946 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:11:00.138826 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 16 13:11:00.231108 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:11:00.238745 bash[2049]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:11:00.256232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:11:00.266456 systemd[1]: Starting sshkeys.service... Dec 16 13:11:00.376029 amazon-ssm-agent[2029]: Initializing new seelog logger Dec 16 13:11:00.376029 amazon-ssm-agent[2029]: New Seelog Logger Creation Complete Dec 16 13:11:00.376029 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:11:00.376029 amazon-ssm-agent[2029]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:11:00.376029 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 processing appconfig overrides Dec 16 13:11:00.370129 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:11:00.375127 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:11:00.387612 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:11:00.387612 amazon-ssm-agent[2029]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:11:00.387612 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 processing appconfig overrides Dec 16 13:11:00.389987 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3838 INFO Proxy environment variables: Dec 16 13:11:00.391400 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:11:00.399206 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:11:00.399206 amazon-ssm-agent[2029]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 13:11:00.399206 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 processing appconfig overrides
Dec 16 13:11:00.408171 systemd-coredump[1997]: Process 1951 (ntpd) of user 0 dumped core.
  Module libnss_usrfiles.so.2 without build-id.
  Module libgcc_s.so.1 without build-id.
  Module ld-linux-x86-64.so.2 without build-id.
  Module libc.so.6 without build-id.
  Module libcrypto.so.3 without build-id.
  Module libm.so.6 without build-id.
  Module libcap.so.2 without build-id.
  Module ntpd without build-id.
  Stack trace of thread 1951:
  #0  0x000055a25d625aeb n/a (ntpd + 0x68aeb)
  #1  0x000055a25d5cecdf n/a (ntpd + 0x11cdf)
  #2  0x000055a25d5cf575 n/a (ntpd + 0x12575)
  #3  0x000055a25d5cad8a n/a (ntpd + 0xdd8a)
  #4  0x000055a25d5cc5d3 n/a (ntpd + 0xf5d3)
  #5  0x000055a25d5d4fd1 n/a (ntpd + 0x17fd1)
  #6  0x000055a25d5c5c2d n/a (ntpd + 0x8c2d)
  #7  0x00007f9e11a4316c n/a (libc.so.6 + 0x2716c)
  #8  0x00007f9e11a43229 __libc_start_main (libc.so.6 + 0x27229)
  #9  0x000055a25d5c5c55 n/a (ntpd + 0x8c55)
  ELF object binary architecture: AMD x86-64
Dec 16 13:11:00.412182 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:11:00.412378 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:11:00.427800 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:11:00.427800 amazon-ssm-agent[2029]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:11:00.427800 amazon-ssm-agent[2029]: 2025/12/16 13:11:00 processing appconfig overrides
Dec 16 13:11:00.429347 systemd[1]: systemd-coredump@0-1988-0.service: Deactivated successfully.
Dec 16 13:11:00.495695 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3839 INFO no_proxy:
Dec 16 13:11:00.500736 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:11:00.539982 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:11:00.560838 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:11:00.569807 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:11:00.597191 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3839 INFO https_proxy:
Dec 16 13:11:00.710658 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3853 INFO http_proxy:
Dec 16 13:11:00.724888 ntpd[2163]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:11:00.724968 ntpd[2163]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: ----------------------------------------------------
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: corporation. Support and training for ntp-4 are
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: available at https://www.nwtime.org/support
Dec 16 13:11:00.725403 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: ----------------------------------------------------
Dec 16 13:11:00.724979 ntpd[2163]: ----------------------------------------------------
Dec 16 13:11:00.727327 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: proto: precision = 0.102 usec (-23)
Dec 16 13:11:00.724988 ntpd[2163]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:11:00.724997 ntpd[2163]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:11:00.727455 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: basedate set to 2025-11-30
Dec 16 13:11:00.727455 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:11:00.725006 ntpd[2163]: corporation. Support and training for ntp-4 are
Dec 16 13:11:00.727563 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:11:00.727563 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:11:00.725015 ntpd[2163]: available at https://www.nwtime.org/support
Dec 16 13:11:00.746323 coreos-metadata[2109]: Dec 16 13:11:00.744 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 16 13:11:00.746323 coreos-metadata[2109]: Dec 16 13:11:00.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen normally on 3 eth0 172.31.28.132:123
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen normally on 4 lo [::1]:123
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listen normally on 5 eth0 [fe80::407:ecff:fea1:dc0d%2]:123
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:11:00.746751 ntpd[2163]: 16 Dec 13:11:00 ntpd[2163]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:11:00.742534 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:11:00.725024 ntpd[2163]: ----------------------------------------------------
Dec 16 13:11:00.745756 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:11:00.727113 ntpd[2163]: proto: precision = 0.102 usec (-23)
Dec 16 13:11:00.727391 ntpd[2163]: basedate set to 2025-11-30
Dec 16 13:11:00.727405 ntpd[2163]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:11:00.727501 ntpd[2163]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:11:00.727529 ntpd[2163]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:11:00.727733 ntpd[2163]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:11:00.727760 ntpd[2163]: Listen normally on 3 eth0 172.31.28.132:123
Dec 16 13:11:00.727789 ntpd[2163]: Listen normally on 4 lo [::1]:123
Dec 16 13:11:00.727840 ntpd[2163]: Listen normally on 5 eth0 [fe80::407:ecff:fea1:dc0d%2]:123
Dec 16 13:11:00.727869 ntpd[2163]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:11:00.735686 ntpd[2163]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:11:00.735720 ntpd[2163]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:11:00.753840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:11:00.758911 coreos-metadata[2109]: Dec 16 13:11:00.758 INFO Fetch successful
Dec 16 13:11:00.758911 coreos-metadata[2109]: Dec 16 13:11:00.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 13:11:00.763031 coreos-metadata[2109]: Dec 16 13:11:00.761 INFO Fetch successful
Dec 16 13:11:00.771070 unknown[2109]: wrote ssh authorized keys file for user: core
Dec 16 13:11:00.799245 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3855 INFO Checking if agent identity type OnPrem can be assumed
Dec 16 13:11:00.851845 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:11:00.855062 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 13:11:00.868202 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:11:00.879950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:11:00.881967 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:11:00.885139 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 13:11:00.895465 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:11:00.898146 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:11:00.909439 systemd[1]: Finished sshkeys.service.
Dec 16 13:11:00.915927 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2013 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 13:11:00.922039 amazon-ssm-agent[2029]: 2025-12-16 13:11:00.3891 INFO Checking if agent identity type EC2 can be assumed
Dec 16 13:11:00.928936 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 13:11:00.950830 locksmithd[2008]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:11:00.954502 containerd[1982]: time="2025-12-16T13:11:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:11:00.955698 containerd[1982]: time="2025-12-16T13:11:00.955649193Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.972613435Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.235µs"
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973243566Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973279191Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973472100Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973496108Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973532640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973600237Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:11:00.973710 containerd[1982]: time="2025-12-16T13:11:00.973614533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:11:00.974434 containerd[1982]: time="2025-12-16T13:11:00.974401795Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:11:00.974664 containerd[1982]: time="2025-12-16T13:11:00.974518914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:11:00.974664 containerd[1982]: time="2025-12-16T13:11:00.974554398Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:11:00.974664 containerd[1982]: time="2025-12-16T13:11:00.974570643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:11:00.974892 containerd[1982]: time="2025-12-16T13:11:00.974871203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:11:00.975218 containerd[1982]: time="2025-12-16T13:11:00.975196062Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:11:00.975325 containerd[1982]: time="2025-12-16T13:11:00.975306662Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:11:00.975393 containerd[1982]: time="2025-12-16T13:11:00.975378285Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:11:00.975507 containerd[1982]: time="2025-12-16T13:11:00.975491637Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:11:00.976058 containerd[1982]: time="2025-12-16T13:11:00.976036344Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:11:00.976211 containerd[1982]: time="2025-12-16T13:11:00.976195023Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985316699Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985398285Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985430939Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985454137Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985472555Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985487831Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985510031Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985527236Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985543322Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985559512Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985572930Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985591082Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985792414Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:11:00.987680 containerd[1982]: time="2025-12-16T13:11:00.985826762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985848676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985869721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985888716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985905605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985922295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985937653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985954782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985971007Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.985989983Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.986052036Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.986070630Z" level=info msg="Start snapshots syncer"
Dec 16 13:11:00.988256 containerd[1982]: time="2025-12-16T13:11:00.986123018Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:11:00.990662 containerd[1982]: time="2025-12-16T13:11:00.986596810Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 13:11:00.990662 containerd[1982]: time="2025-12-16T13:11:00.988781550Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.990980657Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991181047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991215395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991232852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991249045Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991275850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991292183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991308455Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991343600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991359915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 13:11:00.991433 containerd[1982]: time="2025-12-16T13:11:00.991376336Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992736487Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992801867Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992818699Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992835789Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992848993Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992870638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992894314Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992917785Z" level=info msg="runtime interface created"
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992926620Z" level=info msg="created NRI interface"
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992940074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992960694Z" level=info msg="Connect containerd service"
Dec 16 13:11:00.993031 containerd[1982]: time="2025-12-16T13:11:00.992997963Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 13:11:00.998813 containerd[1982]: time="2025-12-16T13:11:00.998259215Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:11:01.036457 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0361 INFO Agent will take identity from EC2
Dec 16 13:11:01.138010 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0374 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 16 13:11:01.156223 polkitd[2192]: Started polkitd version 126
Dec 16 13:11:01.170614 polkitd[2192]: Loading rules from directory /etc/polkit-1/rules.d
Dec 16 13:11:01.180195 polkitd[2192]: Loading rules from directory /run/polkit-1/rules.d
Dec 16 13:11:01.180266 polkitd[2192]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:11:01.181801 polkitd[2192]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 16 13:11:01.181860 polkitd[2192]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:11:01.181917 polkitd[2192]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 16 13:11:01.182923 polkitd[2192]: Finished loading, compiling and executing 2 rules
Dec 16 13:11:01.183380 systemd[1]: Started polkit.service - Authorization Manager.
Dec 16 13:11:01.187066 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 13:11:01.188470 polkitd[2192]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 13:11:01.219072 systemd-hostnamed[2013]: Hostname set to (transient)
Dec 16 13:11:01.219516 systemd-resolved[1894]: System hostname changed to 'ip-172-31-28-132'.
Dec 16 13:11:01.240670 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0384 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 16 13:11:01.312890 containerd[1982]: time="2025-12-16T13:11:01.312845176Z" level=info msg="Start subscribing containerd event"
Dec 16 13:11:01.313164 containerd[1982]: time="2025-12-16T13:11:01.313122131Z" level=info msg="Start recovering state"
Dec 16 13:11:01.313602 containerd[1982]: time="2025-12-16T13:11:01.313579154Z" level=info msg="Start event monitor"
Dec 16 13:11:01.313809 containerd[1982]: time="2025-12-16T13:11:01.313791574Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:11:01.313908 containerd[1982]: time="2025-12-16T13:11:01.313895242Z" level=info msg="Start streaming server"
Dec 16 13:11:01.313979 containerd[1982]: time="2025-12-16T13:11:01.313966409Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:11:01.314192 containerd[1982]: time="2025-12-16T13:11:01.314175697Z" level=info msg="runtime interface starting up..."
Dec 16 13:11:01.314267 containerd[1982]: time="2025-12-16T13:11:01.314255428Z" level=info msg="starting plugins..."
Dec 16 13:11:01.314342 containerd[1982]: time="2025-12-16T13:11:01.314329511Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:11:01.314518 containerd[1982]: time="2025-12-16T13:11:01.313765588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:11:01.314694 containerd[1982]: time="2025-12-16T13:11:01.314679539Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:11:01.314933 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:11:01.316894 containerd[1982]: time="2025-12-16T13:11:01.316716019Z" level=info msg="containerd successfully booted in 0.363345s"
Dec 16 13:11:01.339413 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0384 INFO [amazon-ssm-agent] Starting Core Agent
Dec 16 13:11:01.438705 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0384 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 16 13:11:01.458929 tar[1966]: linux-amd64/README.md
Dec 16 13:11:01.482204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:11:01.540249 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0384 INFO [Registrar] Starting registrar module
Dec 16 13:11:01.640455 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0405 INFO [EC2Identity] Checking disk for registration info
Dec 16 13:11:01.740775 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0406 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 16 13:11:01.841100 amazon-ssm-agent[2029]: 2025-12-16 13:11:01.0406 INFO [EC2Identity] Generating registration keypair
Dec 16 13:11:02.236837 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2366 INFO [EC2Identity] Checking write access before registering
Dec 16 13:11:02.292656 amazon-ssm-agent[2029]: 2025/12/16 13:11:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:11:02.292656 amazon-ssm-agent[2029]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:11:02.292656 amazon-ssm-agent[2029]: 2025/12/16 13:11:02 processing appconfig overrides
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2370 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2921 INFO [EC2Identity] EC2 registration was successful.
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2922 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2922 INFO [CredentialRefresher] credentialRefresher has started
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.2923 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.3307 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 16 13:11:02.331005 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.3309 INFO [CredentialRefresher] Credentials ready
Dec 16 13:11:02.338146 amazon-ssm-agent[2029]: 2025-12-16 13:11:02.3310 INFO [CredentialRefresher] Next credential rotation will be in 29.999995238883333 minutes
Dec 16 13:11:03.111186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:11:03.113550 systemd[1]: Started sshd@0-172.31.28.132:22-139.178.68.195:58834.service - OpenSSH per-connection server daemon (139.178.68.195:58834).
Dec 16 13:11:03.306256 sshd[2220]: Accepted publickey for core from 139.178.68.195 port 58834 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:11:03.309121 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:11:03.317366 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:11:03.320185 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:11:03.332085 systemd-logind[1960]: New session 1 of user core.
Dec 16 13:11:03.351696 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:11:03.354100 amazon-ssm-agent[2029]: 2025-12-16 13:11:03.3520 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 16 13:11:03.357134 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:11:03.381039 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:11:03.390098 systemd-logind[1960]: New session c1 of user core.
Dec 16 13:11:03.456053 amazon-ssm-agent[2029]: 2025-12-16 13:11:03.3763 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2226) started
Dec 16 13:11:03.555416 amazon-ssm-agent[2029]: 2025-12-16 13:11:03.3764 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 16 13:11:03.630932 systemd[2228]: Queued start job for default target default.target.
Dec 16 13:11:03.649806 systemd[2228]: Created slice app.slice - User Application Slice.
Dec 16 13:11:03.650033 systemd[2228]: Reached target paths.target - Paths.
Dec 16 13:11:03.650184 systemd[2228]: Reached target timers.target - Timers.
Dec 16 13:11:03.651487 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:11:03.663538 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:11:03.663669 systemd[2228]: Reached target sockets.target - Sockets.
Dec 16 13:11:03.663715 systemd[2228]: Reached target basic.target - Basic System.
Dec 16 13:11:03.663752 systemd[2228]: Reached target default.target - Main User Target.
Dec 16 13:11:03.664030 systemd[2228]: Startup finished in 258ms.
Dec 16 13:11:03.664038 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:11:03.672901 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:11:03.820755 systemd[1]: Started sshd@1-172.31.28.132:22-139.178.68.195:58848.service - OpenSSH per-connection server daemon (139.178.68.195:58848).
Dec 16 13:11:04.002886 sshd[2251]: Accepted publickey for core from 139.178.68.195 port 58848 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:04.005620 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:04.012193 systemd-logind[1960]: New session 2 of user core. Dec 16 13:11:04.016852 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:11:04.141237 sshd[2254]: Connection closed by 139.178.68.195 port 58848 Dec 16 13:11:04.142986 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:04.148775 systemd[1]: sshd@1-172.31.28.132:22-139.178.68.195:58848.service: Deactivated successfully. Dec 16 13:11:04.150537 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:11:04.151755 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:11:04.153596 systemd-logind[1960]: Removed session 2. Dec 16 13:11:04.176071 systemd[1]: Started sshd@2-172.31.28.132:22-139.178.68.195:58858.service - OpenSSH per-connection server daemon (139.178.68.195:58858). Dec 16 13:11:04.348423 sshd[2260]: Accepted publickey for core from 139.178.68.195 port 58858 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:04.349528 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:04.356441 systemd-logind[1960]: New session 3 of user core. Dec 16 13:11:04.365888 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:11:04.484138 sshd[2263]: Connection closed by 139.178.68.195 port 58858 Dec 16 13:11:04.484711 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:04.488465 systemd[1]: sshd@2-172.31.28.132:22-139.178.68.195:58858.service: Deactivated successfully. Dec 16 13:11:04.490263 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:11:04.491049 systemd-logind[1960]: Session 3 logged out. 
Waiting for processes to exit. Dec 16 13:11:04.492312 systemd-logind[1960]: Removed session 3. Dec 16 13:11:05.318951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:05.320164 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:11:05.321682 systemd[1]: Startup finished in 2.628s (kernel) + 6.069s (initrd) + 9.675s (userspace) = 18.373s. Dec 16 13:11:05.329622 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:07.065031 kubelet[2273]: E1216 13:11:07.064973 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:07.067698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:07.067972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:07.068612 systemd[1]: kubelet.service: Consumed 1.011s CPU time, 257.7M memory peak. Dec 16 13:11:09.030897 systemd-resolved[1894]: Clock change detected. Flushing caches. Dec 16 13:11:15.826639 systemd[1]: Started sshd@3-172.31.28.132:22-139.178.68.195:57744.service - OpenSSH per-connection server daemon (139.178.68.195:57744). Dec 16 13:11:15.995324 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 57744 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:15.996724 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:16.003205 systemd-logind[1960]: New session 4 of user core. Dec 16 13:11:16.010740 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 16 13:11:16.129950 sshd[2288]: Connection closed by 139.178.68.195 port 57744 Dec 16 13:11:16.130514 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:16.134602 systemd[1]: sshd@3-172.31.28.132:22-139.178.68.195:57744.service: Deactivated successfully. Dec 16 13:11:16.136779 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:11:16.137706 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:11:16.139347 systemd-logind[1960]: Removed session 4. Dec 16 13:11:16.163184 systemd[1]: Started sshd@4-172.31.28.132:22-139.178.68.195:57756.service - OpenSSH per-connection server daemon (139.178.68.195:57756). Dec 16 13:11:16.333524 sshd[2294]: Accepted publickey for core from 139.178.68.195 port 57756 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:16.334953 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:16.341149 systemd-logind[1960]: New session 5 of user core. Dec 16 13:11:16.346706 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:11:16.465771 sshd[2297]: Connection closed by 139.178.68.195 port 57756 Dec 16 13:11:16.466257 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:16.472555 systemd[1]: sshd@4-172.31.28.132:22-139.178.68.195:57756.service: Deactivated successfully. Dec 16 13:11:16.477468 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:11:16.478554 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:11:16.480237 systemd-logind[1960]: Removed session 5. Dec 16 13:11:16.501626 systemd[1]: Started sshd@5-172.31.28.132:22-139.178.68.195:57764.service - OpenSSH per-connection server daemon (139.178.68.195:57764). 
Dec 16 13:11:16.675800 sshd[2303]: Accepted publickey for core from 139.178.68.195 port 57764 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:16.677292 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:16.683227 systemd-logind[1960]: New session 6 of user core. Dec 16 13:11:16.689715 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:11:16.812665 sshd[2306]: Connection closed by 139.178.68.195 port 57764 Dec 16 13:11:16.813682 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:16.818083 systemd[1]: sshd@5-172.31.28.132:22-139.178.68.195:57764.service: Deactivated successfully. Dec 16 13:11:16.820215 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:11:16.821285 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:11:16.823005 systemd-logind[1960]: Removed session 6. Dec 16 13:11:16.843878 systemd[1]: Started sshd@6-172.31.28.132:22-139.178.68.195:57770.service - OpenSSH per-connection server daemon (139.178.68.195:57770). Dec 16 13:11:17.018615 sshd[2312]: Accepted publickey for core from 139.178.68.195 port 57770 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:17.019923 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:17.024736 systemd-logind[1960]: New session 7 of user core. Dec 16 13:11:17.035846 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 13:11:17.144023 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:11:17.144410 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:17.157922 sudo[2316]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:17.180466 sshd[2315]: Connection closed by 139.178.68.195 port 57770 Dec 16 13:11:17.181287 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:17.185350 systemd[1]: sshd@6-172.31.28.132:22-139.178.68.195:57770.service: Deactivated successfully. Dec 16 13:11:17.186935 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:11:17.187739 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:11:17.189082 systemd-logind[1960]: Removed session 7. Dec 16 13:11:17.215377 systemd[1]: Started sshd@7-172.31.28.132:22-139.178.68.195:57772.service - OpenSSH per-connection server daemon (139.178.68.195:57772). Dec 16 13:11:17.388008 sshd[2322]: Accepted publickey for core from 139.178.68.195 port 57772 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:17.390169 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:17.395629 systemd-logind[1960]: New session 8 of user core. Dec 16 13:11:17.404862 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:11:17.500029 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:11:17.500299 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:17.505822 sudo[2327]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:17.511416 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:11:17.511699 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:17.522778 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:11:17.566011 augenrules[2349]: No rules Dec 16 13:11:17.567326 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:11:17.567691 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:11:17.569343 sudo[2326]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:17.592039 sshd[2325]: Connection closed by 139.178.68.195 port 57772 Dec 16 13:11:17.592692 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:17.596415 systemd[1]: sshd@7-172.31.28.132:22-139.178.68.195:57772.service: Deactivated successfully. Dec 16 13:11:17.598058 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:11:17.598960 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:11:17.600085 systemd-logind[1960]: Removed session 8. Dec 16 13:11:17.625293 systemd[1]: Started sshd@8-172.31.28.132:22-139.178.68.195:57780.service - OpenSSH per-connection server daemon (139.178.68.195:57780). 
Dec 16 13:11:17.808358 sshd[2358]: Accepted publickey for core from 139.178.68.195 port 57780 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:11:17.809761 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:17.814395 systemd-logind[1960]: New session 9 of user core. Dec 16 13:11:17.821741 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:11:17.921519 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:11:17.921783 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:18.337711 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:11:18.354981 (dockerd)[2380]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:11:18.623466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:11:18.627378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:18.671763 dockerd[2380]: time="2025-12-16T13:11:18.671704099Z" level=info msg="Starting up" Dec 16 13:11:18.679252 dockerd[2380]: time="2025-12-16T13:11:18.678790329Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:11:18.695472 dockerd[2380]: time="2025-12-16T13:11:18.695422943Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:11:18.743952 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2444726653-merged.mount: Deactivated successfully. Dec 16 13:11:18.813193 dockerd[2380]: time="2025-12-16T13:11:18.813073502Z" level=info msg="Loading containers: start." 
Dec 16 13:11:18.831515 kernel: Initializing XFRM netlink socket Dec 16 13:11:18.922621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:18.935042 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:18.982741 kubelet[2441]: E1216 13:11:18.982353 2441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:18.986912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:18.987054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:18.987376 systemd[1]: kubelet.service: Consumed 181ms CPU time, 110.2M memory peak. Dec 16 13:11:19.094666 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:11:19.140942 systemd-networkd[1730]: docker0: Link UP Dec 16 13:11:19.153188 dockerd[2380]: time="2025-12-16T13:11:19.153138746Z" level=info msg="Loading containers: done." 
Dec 16 13:11:19.200962 dockerd[2380]: time="2025-12-16T13:11:19.200301351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:11:19.200962 dockerd[2380]: time="2025-12-16T13:11:19.200392939Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:11:19.200962 dockerd[2380]: time="2025-12-16T13:11:19.200476185Z" level=info msg="Initializing buildkit" Dec 16 13:11:19.240677 dockerd[2380]: time="2025-12-16T13:11:19.240499594Z" level=info msg="Completed buildkit initialization" Dec 16 13:11:19.250582 dockerd[2380]: time="2025-12-16T13:11:19.250527154Z" level=info msg="Daemon has completed initialization" Dec 16 13:11:19.250924 dockerd[2380]: time="2025-12-16T13:11:19.250688861Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:11:19.250822 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:11:19.739669 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2874891120-merged.mount: Deactivated successfully. Dec 16 13:11:20.990807 containerd[1982]: time="2025-12-16T13:11:20.990763522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:11:21.635722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337190194.mount: Deactivated successfully. 
Dec 16 13:11:23.031694 containerd[1982]: time="2025-12-16T13:11:23.031624497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:23.033095 containerd[1982]: time="2025-12-16T13:11:23.032867827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 16 13:11:23.034434 containerd[1982]: time="2025-12-16T13:11:23.034397624Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:23.038083 containerd[1982]: time="2025-12-16T13:11:23.037921261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:23.038499 containerd[1982]: time="2025-12-16T13:11:23.038434553Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.047631804s" Dec 16 13:11:23.038499 containerd[1982]: time="2025-12-16T13:11:23.038467763Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:11:23.039100 containerd[1982]: time="2025-12-16T13:11:23.038971736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:11:24.585218 containerd[1982]: time="2025-12-16T13:11:24.585168648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:24.590294 containerd[1982]: time="2025-12-16T13:11:24.590170918Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 16 13:11:24.595947 containerd[1982]: time="2025-12-16T13:11:24.595881563Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:24.603048 containerd[1982]: time="2025-12-16T13:11:24.602991890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:24.604709 containerd[1982]: time="2025-12-16T13:11:24.604245985Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.565227897s" Dec 16 13:11:24.604709 containerd[1982]: time="2025-12-16T13:11:24.604280119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:11:24.604978 containerd[1982]: time="2025-12-16T13:11:24.604949502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:11:25.753788 containerd[1982]: time="2025-12-16T13:11:25.753715335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:25.755378 containerd[1982]: time="2025-12-16T13:11:25.755336004Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 16 13:11:25.757737 containerd[1982]: time="2025-12-16T13:11:25.757675668Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:25.761844 containerd[1982]: time="2025-12-16T13:11:25.760960779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:25.761844 containerd[1982]: time="2025-12-16T13:11:25.761721123Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.15667263s" Dec 16 13:11:25.761844 containerd[1982]: time="2025-12-16T13:11:25.761749850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:11:25.762650 containerd[1982]: time="2025-12-16T13:11:25.762619225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:11:26.843530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203405776.mount: Deactivated successfully. 
Dec 16 13:11:27.248815 containerd[1982]: time="2025-12-16T13:11:27.248688014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:27.250674 containerd[1982]: time="2025-12-16T13:11:27.250611233Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 16 13:11:27.253207 containerd[1982]: time="2025-12-16T13:11:27.253133746Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:27.257044 containerd[1982]: time="2025-12-16T13:11:27.256349189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:27.257044 containerd[1982]: time="2025-12-16T13:11:27.256926535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.494173847s" Dec 16 13:11:27.257044 containerd[1982]: time="2025-12-16T13:11:27.256953859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:11:27.257314 containerd[1982]: time="2025-12-16T13:11:27.257297347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:11:27.835039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207046067.mount: Deactivated successfully. 
Dec 16 13:11:29.038934 containerd[1982]: time="2025-12-16T13:11:29.038874834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.040229 containerd[1982]: time="2025-12-16T13:11:29.039982221Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 16 13:11:29.042180 containerd[1982]: time="2025-12-16T13:11:29.042140270Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.050149 containerd[1982]: time="2025-12-16T13:11:29.050085501Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.792712005s" Dec 16 13:11:29.050149 containerd[1982]: time="2025-12-16T13:11:29.050149465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:11:29.050722 containerd[1982]: time="2025-12-16T13:11:29.050691089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.050987 containerd[1982]: time="2025-12-16T13:11:29.050751451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:11:29.237673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:11:29.239881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:11:29.560815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:29.569024 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:29.615528 kubelet[2738]: E1216 13:11:29.615468 2738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:29.619014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:29.619220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:29.620235 systemd[1]: kubelet.service: Consumed 175ms CPU time, 108.2M memory peak. Dec 16 13:11:29.639461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329146861.mount: Deactivated successfully. 
Dec 16 13:11:29.649469 containerd[1982]: time="2025-12-16T13:11:29.649402914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.650651 containerd[1982]: time="2025-12-16T13:11:29.650456690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 16 13:11:29.652092 containerd[1982]: time="2025-12-16T13:11:29.652049243Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.655171 containerd[1982]: time="2025-12-16T13:11:29.654718784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:29.655781 containerd[1982]: time="2025-12-16T13:11:29.655747576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 604.745123ms" Dec 16 13:11:29.655906 containerd[1982]: time="2025-12-16T13:11:29.655873721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:11:29.656350 containerd[1982]: time="2025-12-16T13:11:29.656318840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:11:30.265180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9500075.mount: Deactivated successfully. Dec 16 13:11:32.541189 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 16 13:11:33.844846 containerd[1982]: time="2025-12-16T13:11:33.844710002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:33.846326 containerd[1982]: time="2025-12-16T13:11:33.846114878Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 16 13:11:33.848013 containerd[1982]: time="2025-12-16T13:11:33.847967360Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:33.851509 containerd[1982]: time="2025-12-16T13:11:33.851448022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:33.855358 containerd[1982]: time="2025-12-16T13:11:33.854791818Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.198436155s" Dec 16 13:11:33.855358 containerd[1982]: time="2025-12-16T13:11:33.854841987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:11:37.176032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:37.176862 systemd[1]: kubelet.service: Consumed 175ms CPU time, 108.2M memory peak. Dec 16 13:11:37.179524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:37.219565 systemd[1]: Reload requested from client PID 2834 ('systemctl') (unit session-9.scope)... 
Dec 16 13:11:37.219586 systemd[1]: Reloading... Dec 16 13:11:37.362513 zram_generator::config[2881]: No configuration found. Dec 16 13:11:37.601270 systemd[1]: Reloading finished in 381 ms. Dec 16 13:11:37.661308 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:11:37.661423 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:11:37.661791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:37.661853 systemd[1]: kubelet.service: Consumed 121ms CPU time, 98M memory peak. Dec 16 13:11:37.663769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:37.883900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:37.895980 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:11:37.950908 kubelet[2941]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:11:37.950908 kubelet[2941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:11:37.955518 kubelet[2941]: I1216 13:11:37.955129 2941 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:11:38.819365 kubelet[2941]: I1216 13:11:38.819317 2941 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:11:38.819365 kubelet[2941]: I1216 13:11:38.819349 2941 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:11:38.822506 kubelet[2941]: I1216 13:11:38.820901 2941 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:11:38.822506 kubelet[2941]: I1216 13:11:38.820937 2941 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:11:38.822506 kubelet[2941]: I1216 13:11:38.821220 2941 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:11:38.841296 kubelet[2941]: E1216 13:11:38.841236 2941 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:11:38.842274 kubelet[2941]: I1216 13:11:38.842238 2941 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:11:38.859523 kubelet[2941]: I1216 13:11:38.859016 2941 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:11:38.868658 kubelet[2941]: I1216 13:11:38.868608 2941 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:11:38.873362 kubelet[2941]: I1216 13:11:38.873285 2941 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:11:38.876770 kubelet[2941]: I1216 13:11:38.873359 2941 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:11:38.877531 kubelet[2941]: I1216 13:11:38.877033 2941 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:11:38.877531 kubelet[2941]: I1216 13:11:38.877061 2941 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:11:38.877531 kubelet[2941]: I1216 13:11:38.877203 2941 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:11:38.883003 kubelet[2941]: I1216 13:11:38.882973 2941 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:38.884227 kubelet[2941]: I1216 13:11:38.884200 2941 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:11:38.884554 kubelet[2941]: I1216 13:11:38.884537 2941 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:11:38.884652 kubelet[2941]: I1216 13:11:38.884645 2941 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:11:38.887241 kubelet[2941]: I1216 13:11:38.887217 2941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:11:38.894159 kubelet[2941]: E1216 13:11:38.894116 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-132&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:11:38.894452 kubelet[2941]: E1216 13:11:38.894432 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:11:38.895263 kubelet[2941]: I1216 13:11:38.895132 2941 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:11:38.899655 kubelet[2941]: I1216 13:11:38.899545 2941 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:11:38.900111 kubelet[2941]: I1216 13:11:38.899832 2941 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:11:38.900111 kubelet[2941]: W1216 13:11:38.899900 2941 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:11:38.904556 kubelet[2941]: I1216 13:11:38.904532 2941 server.go:1262] "Started kubelet" Dec 16 13:11:38.905040 kubelet[2941]: I1216 13:11:38.905002 2941 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:11:38.915119 kubelet[2941]: I1216 13:11:38.914747 2941 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:11:38.917262 kubelet[2941]: I1216 13:11:38.917205 2941 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:11:38.917470 kubelet[2941]: I1216 13:11:38.917452 2941 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:11:38.917994 kubelet[2941]: I1216 13:11:38.917971 2941 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:11:38.921685 kubelet[2941]: I1216 13:11:38.920958 2941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:11:38.923533 kubelet[2941]: E1216 13:11:38.921627 2941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.132:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-132.1881b43c0c5e0cc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-132,UID:ip-172-31-28-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-132,},FirstTimestamp:2025-12-16 13:11:38.904472774 +0000 UTC m=+1.003637113,LastTimestamp:2025-12-16 13:11:38.904472774 +0000 UTC m=+1.003637113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-132,}" Dec 16 13:11:38.924956 kubelet[2941]: I1216 13:11:38.924930 2941 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:11:38.928694 kubelet[2941]: E1216 13:11:38.928671 2941 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-28-132\" not found" Dec 16 13:11:38.929456 kubelet[2941]: I1216 13:11:38.928802 2941 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:11:38.929456 kubelet[2941]: I1216 13:11:38.928965 2941 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:11:38.929456 kubelet[2941]: I1216 13:11:38.929005 2941 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:11:38.929456 kubelet[2941]: E1216 13:11:38.929333 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:11:38.932017 kubelet[2941]: E1216 13:11:38.931414 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 
172.31.28.132:6443: connect: connection refused" interval="200ms" Dec 16 13:11:38.939624 kubelet[2941]: E1216 13:11:38.939601 2941 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:11:38.940022 kubelet[2941]: I1216 13:11:38.940005 2941 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:11:38.940127 kubelet[2941]: I1216 13:11:38.940121 2941 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:11:38.940243 kubelet[2941]: I1216 13:11:38.940231 2941 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:11:38.949753 kubelet[2941]: I1216 13:11:38.948876 2941 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:11:38.952932 kubelet[2941]: I1216 13:11:38.952889 2941 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:11:38.952932 kubelet[2941]: I1216 13:11:38.952918 2941 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:11:38.953567 kubelet[2941]: I1216 13:11:38.952948 2941 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:11:38.953567 kubelet[2941]: E1216 13:11:38.952997 2941 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:11:38.963203 kubelet[2941]: E1216 13:11:38.963084 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:11:38.981683 kubelet[2941]: I1216 13:11:38.981475 2941 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:11:38.981683 kubelet[2941]: I1216 13:11:38.981514 2941 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:11:38.981683 kubelet[2941]: I1216 13:11:38.981534 2941 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:38.993225 kubelet[2941]: I1216 13:11:38.993194 2941 policy_none.go:49] "None policy: Start" Dec 16 13:11:38.993829 kubelet[2941]: I1216 13:11:38.993391 2941 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:11:38.993829 kubelet[2941]: I1216 13:11:38.993405 2941 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:11:39.002523 kubelet[2941]: I1216 13:11:39.002444 2941 policy_none.go:47] "Start" Dec 16 13:11:39.007857 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 13:11:39.038602 kubelet[2941]: E1216 13:11:39.038559 2941 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-28-132\" not found" Dec 16 13:11:39.041977 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:11:39.060589 kubelet[2941]: E1216 13:11:39.060540 2941 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:11:39.065326 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:11:39.074548 kubelet[2941]: E1216 13:11:39.073542 2941 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:11:39.075133 kubelet[2941]: I1216 13:11:39.075068 2941 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:11:39.075133 kubelet[2941]: I1216 13:11:39.075084 2941 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:11:39.076088 kubelet[2941]: I1216 13:11:39.075421 2941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:11:39.078769 kubelet[2941]: E1216 13:11:39.078702 2941 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:11:39.078769 kubelet[2941]: E1216 13:11:39.078748 2941 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-132\" not found" Dec 16 13:11:39.133069 kubelet[2941]: E1216 13:11:39.133005 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 172.31.28.132:6443: connect: connection refused" interval="400ms" Dec 16 13:11:39.177126 kubelet[2941]: I1216 13:11:39.177089 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:39.177437 kubelet[2941]: E1216 13:11:39.177403 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.132:6443/api/v1/nodes\": dial tcp 172.31.28.132:6443: connect: connection refused" node="ip-172-31-28-132" Dec 16 13:11:39.278464 systemd[1]: Created slice kubepods-burstable-pod1a6a4dcaf1386ab0c0e4ea98a7357f38.slice - libcontainer container kubepods-burstable-pod1a6a4dcaf1386ab0c0e4ea98a7357f38.slice. Dec 16 13:11:39.287564 kubelet[2941]: E1216 13:11:39.287516 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:39.292970 systemd[1]: Created slice kubepods-burstable-podf3f436d2a8208f4000a929b15fa8ac22.slice - libcontainer container kubepods-burstable-podf3f436d2a8208f4000a929b15fa8ac22.slice. 
Dec 16 13:11:39.295829 kubelet[2941]: E1216 13:11:39.295609 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:39.298083 systemd[1]: Created slice kubepods-burstable-podf7e47bf4011096ee0f66233de8438ed0.slice - libcontainer container kubepods-burstable-podf7e47bf4011096ee0f66233de8438ed0.slice. Dec 16 13:11:39.300046 kubelet[2941]: E1216 13:11:39.300015 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:39.340620 kubelet[2941]: I1216 13:11:39.340576 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:39.340620 kubelet[2941]: I1216 13:11:39.340615 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7e47bf4011096ee0f66233de8438ed0-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-132\" (UID: \"f7e47bf4011096ee0f66233de8438ed0\") " pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:39.340620 kubelet[2941]: I1216 13:11:39.340639 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-ca-certs\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:39.340942 kubelet[2941]: I1216 13:11:39.340654 2941 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:39.340942 kubelet[2941]: I1216 13:11:39.340674 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:39.340942 kubelet[2941]: I1216 13:11:39.340690 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:39.340942 kubelet[2941]: I1216 13:11:39.340710 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:39.340942 kubelet[2941]: I1216 13:11:39.340724 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " 
pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:39.341066 kubelet[2941]: I1216 13:11:39.340781 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:39.379006 kubelet[2941]: I1216 13:11:39.378954 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:39.379298 kubelet[2941]: E1216 13:11:39.379273 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.132:6443/api/v1/nodes\": dial tcp 172.31.28.132:6443: connect: connection refused" node="ip-172-31-28-132" Dec 16 13:11:39.534417 kubelet[2941]: E1216 13:11:39.534363 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 172.31.28.132:6443: connect: connection refused" interval="800ms" Dec 16 13:11:39.592752 containerd[1982]: time="2025-12-16T13:11:39.592651189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-132,Uid:1a6a4dcaf1386ab0c0e4ea98a7357f38,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.608336 containerd[1982]: time="2025-12-16T13:11:39.608280811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-132,Uid:f3f436d2a8208f4000a929b15fa8ac22,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.609410 containerd[1982]: time="2025-12-16T13:11:39.608284389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-132,Uid:f7e47bf4011096ee0f66233de8438ed0,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.781299 kubelet[2941]: 
I1216 13:11:39.781270 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:39.781640 kubelet[2941]: E1216 13:11:39.781613 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.132:6443/api/v1/nodes\": dial tcp 172.31.28.132:6443: connect: connection refused" node="ip-172-31-28-132" Dec 16 13:11:39.795345 kubelet[2941]: E1216 13:11:39.795290 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:11:40.056699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6259028.mount: Deactivated successfully. Dec 16 13:11:40.070102 containerd[1982]: time="2025-12-16T13:11:40.070045148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:40.074669 containerd[1982]: time="2025-12-16T13:11:40.074629430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:11:40.077568 containerd[1982]: time="2025-12-16T13:11:40.077524075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:40.079127 containerd[1982]: time="2025-12-16T13:11:40.079076461Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:40.080133 containerd[1982]: 
time="2025-12-16T13:11:40.080101025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:40.082359 containerd[1982]: time="2025-12-16T13:11:40.082319304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:11:40.083642 containerd[1982]: time="2025-12-16T13:11:40.083596659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:11:40.085651 containerd[1982]: time="2025-12-16T13:11:40.085614219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:40.087507 containerd[1982]: time="2025-12-16T13:11:40.086602580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.986538ms" Dec 16 13:11:40.092097 containerd[1982]: time="2025-12-16T13:11:40.092042249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.487834ms" Dec 16 13:11:40.100077 containerd[1982]: time="2025-12-16T13:11:40.100024575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.428755ms" Dec 16 13:11:40.203118 containerd[1982]: time="2025-12-16T13:11:40.202957741Z" level=info msg="connecting to shim 958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316" address="unix:///run/containerd/s/1aa77e9243252eb1dfef59425fe35b3719b36ee143f20e3f9f0aef5bd2d6f955" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:40.213644 containerd[1982]: time="2025-12-16T13:11:40.213538415Z" level=info msg="connecting to shim 960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2" address="unix:///run/containerd/s/adbb9347ae17322af28d3669011f371b734378149269f57b94054db146e7dd37" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:40.218513 containerd[1982]: time="2025-12-16T13:11:40.218245326Z" level=info msg="connecting to shim 772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6" address="unix:///run/containerd/s/11a774dda9493e7a7741c2b0b2ed4459eca692c02daeea2259f53b8a1f2a00e0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:40.223701 kubelet[2941]: E1216 13:11:40.223670 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-132&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:11:40.316766 systemd[1]: Started cri-containerd-772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6.scope - libcontainer container 772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6. Dec 16 13:11:40.325130 systemd[1]: Started cri-containerd-958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316.scope - libcontainer container 958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316. 
Dec 16 13:11:40.328267 systemd[1]: Started cri-containerd-960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2.scope - libcontainer container 960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2. Dec 16 13:11:40.336195 kubelet[2941]: E1216 13:11:40.336149 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 172.31.28.132:6443: connect: connection refused" interval="1.6s" Dec 16 13:11:40.440291 kubelet[2941]: E1216 13:11:40.439380 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:11:40.445887 containerd[1982]: time="2025-12-16T13:11:40.445760186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-132,Uid:f7e47bf4011096ee0f66233de8438ed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316\"" Dec 16 13:11:40.448669 containerd[1982]: time="2025-12-16T13:11:40.448539593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-132,Uid:f3f436d2a8208f4000a929b15fa8ac22,Namespace:kube-system,Attempt:0,} returns sandbox id \"772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6\"" Dec 16 13:11:40.456864 containerd[1982]: time="2025-12-16T13:11:40.456802030Z" level=info msg="CreateContainer within sandbox \"772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:11:40.463814 kubelet[2941]: E1216 13:11:40.463772 2941 reflector.go:205] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://172.31.28.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:11:40.464077 containerd[1982]: time="2025-12-16T13:11:40.464051549Z" level=info msg="CreateContainer within sandbox \"958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:11:40.473125 containerd[1982]: time="2025-12-16T13:11:40.472425225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-132,Uid:1a6a4dcaf1386ab0c0e4ea98a7357f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2\"" Dec 16 13:11:40.479429 containerd[1982]: time="2025-12-16T13:11:40.479395217Z" level=info msg="CreateContainer within sandbox \"960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:11:40.506913 containerd[1982]: time="2025-12-16T13:11:40.506868197Z" level=info msg="Container 7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:40.507571 containerd[1982]: time="2025-12-16T13:11:40.507481414Z" level=info msg="Container 2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:40.508623 containerd[1982]: time="2025-12-16T13:11:40.508592697Z" level=info msg="Container 04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:40.525935 containerd[1982]: time="2025-12-16T13:11:40.525889124Z" level=info msg="CreateContainer within sandbox \"772e63831d36e0978a05be85ff56c019ec85bacee521c51d6eadeebd0e9ef6e6\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c\"" Dec 16 13:11:40.527517 containerd[1982]: time="2025-12-16T13:11:40.526629186Z" level=info msg="StartContainer for \"7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c\"" Dec 16 13:11:40.528314 containerd[1982]: time="2025-12-16T13:11:40.528278102Z" level=info msg="connecting to shim 7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c" address="unix:///run/containerd/s/11a774dda9493e7a7741c2b0b2ed4459eca692c02daeea2259f53b8a1f2a00e0" protocol=ttrpc version=3 Dec 16 13:11:40.528557 containerd[1982]: time="2025-12-16T13:11:40.528532345Z" level=info msg="CreateContainer within sandbox \"958d489aa657ef74df25970314d8038bbed6f05d059c0e02182557e7c01c6316\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c\"" Dec 16 13:11:40.529249 containerd[1982]: time="2025-12-16T13:11:40.529224593Z" level=info msg="StartContainer for \"04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c\"" Dec 16 13:11:40.531265 containerd[1982]: time="2025-12-16T13:11:40.531221339Z" level=info msg="CreateContainer within sandbox \"960676773fb34d3630c0bec40ee1d4ab8cf74df3e7b61a63d20300596caedad2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8\"" Dec 16 13:11:40.531803 containerd[1982]: time="2025-12-16T13:11:40.531775201Z" level=info msg="StartContainer for \"2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8\"" Dec 16 13:11:40.532189 containerd[1982]: time="2025-12-16T13:11:40.532162341Z" level=info msg="connecting to shim 04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c" address="unix:///run/containerd/s/1aa77e9243252eb1dfef59425fe35b3719b36ee143f20e3f9f0aef5bd2d6f955" protocol=ttrpc version=3 Dec 
16 13:11:40.534513 containerd[1982]: time="2025-12-16T13:11:40.534447161Z" level=info msg="connecting to shim 2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8" address="unix:///run/containerd/s/adbb9347ae17322af28d3669011f371b734378149269f57b94054db146e7dd37" protocol=ttrpc version=3 Dec 16 13:11:40.567709 systemd[1]: Started cri-containerd-04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c.scope - libcontainer container 04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c. Dec 16 13:11:40.568996 systemd[1]: Started cri-containerd-7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c.scope - libcontainer container 7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c. Dec 16 13:11:40.582725 systemd[1]: Started cri-containerd-2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8.scope - libcontainer container 2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8. Dec 16 13:11:40.588506 kubelet[2941]: I1216 13:11:40.588461 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:40.589323 kubelet[2941]: E1216 13:11:40.589277 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.132:6443/api/v1/nodes\": dial tcp 172.31.28.132:6443: connect: connection refused" node="ip-172-31-28-132" Dec 16 13:11:40.694166 containerd[1982]: time="2025-12-16T13:11:40.694107667Z" level=info msg="StartContainer for \"04b168fc0e96ddeb0815ec11b093face1b1df2a01f37547e6a8247740c2eaa7c\" returns successfully" Dec 16 13:11:40.704147 containerd[1982]: time="2025-12-16T13:11:40.704085152Z" level=info msg="StartContainer for \"7d49ee16a4b34cfb8b9ec9927b9bdf765e03d25b0ad409b8f7d3c96ba920494c\" returns successfully" Dec 16 13:11:40.722659 containerd[1982]: time="2025-12-16T13:11:40.722527692Z" level=info msg="StartContainer for \"2ba56fad455492a964231a56c9d2a4d42d54920952c6b30a2752ea0fa6abb1b8\" returns successfully" Dec 
16 13:11:40.992050 kubelet[2941]: E1216 13:11:40.991983 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:41.007810 kubelet[2941]: E1216 13:11:41.007101 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:41.014658 kubelet[2941]: E1216 13:11:41.014627 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:41.040810 kubelet[2941]: E1216 13:11:41.040772 2941 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:11:41.468006 kubelet[2941]: E1216 13:11:41.467954 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:11:41.937567 kubelet[2941]: E1216 13:11:41.937515 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 172.31.28.132:6443: connect: connection refused" interval="3.2s" Dec 16 13:11:42.016044 kubelet[2941]: E1216 13:11:42.016007 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:42.016984 kubelet[2941]: E1216 13:11:42.016476 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:42.192312 kubelet[2941]: E1216 13:11:42.192203 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:11:42.193323 kubelet[2941]: I1216 13:11:42.193295 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:42.193715 kubelet[2941]: E1216 13:11:42.193684 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.132:6443/api/v1/nodes\": dial tcp 172.31.28.132:6443: connect: connection refused" node="ip-172-31-28-132" Dec 16 13:11:42.586605 kubelet[2941]: E1216 13:11:42.586559 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-132&limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:11:42.832777 kubelet[2941]: E1216 13:11:42.832728 2941 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:11:44.145331 kubelet[2941]: E1216 13:11:44.145299 2941 
kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:44.862445 kubelet[2941]: E1216 13:11:44.862414 2941 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-132\" not found" node="ip-172-31-28-132" Dec 16 13:11:45.138626 kubelet[2941]: E1216 13:11:45.138509 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-132?timeout=10s\": dial tcp 172.31.28.132:6443: connect: connection refused" interval="6.4s" Dec 16 13:11:45.396664 kubelet[2941]: I1216 13:11:45.396478 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:46.259513 update_engine[1961]: I20251216 13:11:46.258535 1961 update_attempter.cc:509] Updating boot flags... Dec 16 13:11:47.603380 kubelet[2941]: I1216 13:11:47.603331 2941 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-132" Dec 16 13:11:47.629837 kubelet[2941]: I1216 13:11:47.629808 2941 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:47.639034 kubelet[2941]: E1216 13:11:47.638931 2941 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-132.1881b43c0c5e0cc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-132,UID:ip-172-31-28-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-132,},FirstTimestamp:2025-12-16 13:11:38.904472774 +0000 UTC m=+1.003637113,LastTimestamp:2025-12-16 13:11:38.904472774 +0000 UTC 
m=+1.003637113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-132,}" Dec 16 13:11:47.646026 kubelet[2941]: E1216 13:11:47.645992 2941 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-132\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:47.646209 kubelet[2941]: I1216 13:11:47.646195 2941 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:47.651221 kubelet[2941]: E1216 13:11:47.651187 2941 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-132\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:47.651545 kubelet[2941]: I1216 13:11:47.651404 2941 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:47.654972 kubelet[2941]: E1216 13:11:47.654929 2941 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-132\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:47.901252 kubelet[2941]: I1216 13:11:47.900892 2941 apiserver.go:52] "Watching apiserver" Dec 16 13:11:47.929936 kubelet[2941]: I1216 13:11:47.929874 2941 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:11:49.869520 systemd[1]: Reload requested from client PID 3508 ('systemctl') (unit session-9.scope)... Dec 16 13:11:49.869540 systemd[1]: Reloading... Dec 16 13:11:49.989526 zram_generator::config[3552]: No configuration found. Dec 16 13:11:50.281243 systemd[1]: Reloading finished in 411 ms. 
Dec 16 13:11:50.309212 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:50.332607 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:11:50.332860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:50.332928 systemd[1]: kubelet.service: Consumed 1.541s CPU time, 120.8M memory peak. Dec 16 13:11:50.335192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:50.676689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:50.687003 (kubelet)[3612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:11:50.765426 kubelet[3612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:11:50.765426 kubelet[3612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:11:50.767721 kubelet[3612]: I1216 13:11:50.767662 3612 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:11:50.778094 kubelet[3612]: I1216 13:11:50.778054 3612 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:11:50.778094 kubelet[3612]: I1216 13:11:50.778084 3612 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:11:50.778288 kubelet[3612]: I1216 13:11:50.778115 3612 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:11:50.778288 kubelet[3612]: I1216 13:11:50.778123 3612 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:11:50.778454 kubelet[3612]: I1216 13:11:50.778420 3612 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:11:50.788053 kubelet[3612]: I1216 13:11:50.788012 3612 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:11:50.795594 kubelet[3612]: I1216 13:11:50.794813 3612 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:11:50.857514 kubelet[3612]: I1216 13:11:50.856798 3612 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:11:50.868465 kubelet[3612]: I1216 13:11:50.868400 3612 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 13:11:50.877004 kubelet[3612]: I1216 13:11:50.876936 3612 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:11:50.877397 kubelet[3612]: I1216 13:11:50.877197 3612 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-28-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:11:50.877397 kubelet[3612]: I1216 13:11:50.877382 3612 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:11:50.877397 kubelet[3612]: I1216 13:11:50.877393 3612 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:11:50.877667 kubelet[3612]: I1216 13:11:50.877424 3612 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:11:50.880971 kubelet[3612]: I1216 13:11:50.880918 3612 state_mem.go:36] 
"Initialized new in-memory state store" Dec 16 13:11:50.882249 kubelet[3612]: I1216 13:11:50.882107 3612 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:11:50.882249 kubelet[3612]: I1216 13:11:50.882248 3612 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:11:50.887220 kubelet[3612]: I1216 13:11:50.885784 3612 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:11:50.887220 kubelet[3612]: I1216 13:11:50.885817 3612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:11:50.890596 kubelet[3612]: I1216 13:11:50.888554 3612 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:11:50.890596 kubelet[3612]: I1216 13:11:50.889630 3612 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:11:50.890596 kubelet[3612]: I1216 13:11:50.889752 3612 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:11:50.896699 kubelet[3612]: I1216 13:11:50.896671 3612 server.go:1262] "Started kubelet" Dec 16 13:11:50.899754 sudo[3627]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:11:50.901470 sudo[3627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:11:50.910803 kubelet[3612]: I1216 13:11:50.910775 3612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:11:50.921261 kubelet[3612]: I1216 13:11:50.921222 3612 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:11:50.924407 kubelet[3612]: I1216 13:11:50.924129 3612 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:11:50.933330 kubelet[3612]: I1216 13:11:50.933230 3612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:11:50.935608 kubelet[3612]: I1216 13:11:50.927141 3612 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:11:50.935608 kubelet[3612]: I1216 13:11:50.927776 3612 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:11:50.935736 kubelet[3612]: I1216 13:11:50.935653 3612 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:11:50.935895 kubelet[3612]: I1216 13:11:50.935875 3612 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:11:50.935934 kubelet[3612]: E1216 13:11:50.930310 3612 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:11:50.942734 kubelet[3612]: I1216 13:11:50.942529 3612 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:11:50.943232 kubelet[3612]: I1216 13:11:50.943205 3612 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:11:50.944367 kubelet[3612]: I1216 13:11:50.927117 3612 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:11:50.944631 kubelet[3612]: I1216 13:11:50.944615 3612 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:11:50.944810 kubelet[3612]: I1216 13:11:50.944638 3612 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:11:50.955073 kubelet[3612]: I1216 13:11:50.955040 3612 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:11:50.971219 kubelet[3612]: I1216 13:11:50.971163 3612 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:11:50.971219 kubelet[3612]: I1216 13:11:50.971203 3612 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:11:50.971219 kubelet[3612]: I1216 13:11:50.971230 3612 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:11:50.971426 kubelet[3612]: E1216 13:11:50.971276 3612 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:11:51.036668 kubelet[3612]: I1216 13:11:51.036630 3612 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:11:51.036668 kubelet[3612]: I1216 13:11:51.036648 3612 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:11:51.036668 kubelet[3612]: I1216 13:11:51.036672 3612 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:51.036904 kubelet[3612]: I1216 13:11:51.036831 3612 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:11:51.036904 kubelet[3612]: I1216 13:11:51.036845 3612 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:11:51.036904 kubelet[3612]: I1216 13:11:51.036865 3612 policy_none.go:49] "None policy: Start" Dec 16 13:11:51.036904 kubelet[3612]: I1216 13:11:51.036879 3612 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:11:51.036904 kubelet[3612]: I1216 13:11:51.036891 3612 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:11:51.037086 kubelet[3612]: I1216 13:11:51.037013 3612 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:11:51.037086 kubelet[3612]: I1216 13:11:51.037023 3612 policy_none.go:47] "Start" Dec 16 13:11:51.043410 kubelet[3612]: E1216 13:11:51.043375 3612 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:11:51.043866 kubelet[3612]: I1216 13:11:51.043818 
3612 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:11:51.044045 kubelet[3612]: I1216 13:11:51.043835 3612 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:11:51.046540 kubelet[3612]: I1216 13:11:51.045900 3612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:11:51.048906 kubelet[3612]: E1216 13:11:51.048876 3612 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:11:51.071980 kubelet[3612]: I1216 13:11:51.071917 3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:51.071980 kubelet[3612]: I1216 13:11:51.071943 3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:51.073685 kubelet[3612]: I1216 13:11:51.073661 3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:51.146519 kubelet[3612]: I1216 13:11:51.146445 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7e47bf4011096ee0f66233de8438ed0-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-132\" (UID: \"f7e47bf4011096ee0f66233de8438ed0\") " pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:51.146654 kubelet[3612]: I1216 13:11:51.146542 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-ca-certs\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:51.146654 kubelet[3612]: I1216 13:11:51.146563 3612 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:51.146654 kubelet[3612]: I1216 13:11:51.146590 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:51.146654 kubelet[3612]: I1216 13:11:51.146606 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:51.146654 kubelet[3612]: I1216 13:11:51.146623 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:51.146782 kubelet[3612]: I1216 13:11:51.146638 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" 
Dec 16 13:11:51.146782 kubelet[3612]: I1216 13:11:51.146657 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f436d2a8208f4000a929b15fa8ac22-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-132\" (UID: \"f3f436d2a8208f4000a929b15fa8ac22\") " pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:51.146782 kubelet[3612]: I1216 13:11:51.146673 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a6a4dcaf1386ab0c0e4ea98a7357f38-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-132\" (UID: \"1a6a4dcaf1386ab0c0e4ea98a7357f38\") " pod="kube-system/kube-controller-manager-ip-172-31-28-132" Dec 16 13:11:51.157137 kubelet[3612]: I1216 13:11:51.157061 3612 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-132" Dec 16 13:11:51.170856 kubelet[3612]: I1216 13:11:51.170829 3612 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-132" Dec 16 13:11:51.172134 kubelet[3612]: I1216 13:11:51.172085 3612 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-132" Dec 16 13:11:51.888221 kubelet[3612]: I1216 13:11:51.888181 3612 apiserver.go:52] "Watching apiserver" Dec 16 13:11:51.936058 kubelet[3612]: I1216 13:11:51.935984 3612 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:11:51.962696 kubelet[3612]: I1216 13:11:51.962607 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-132" podStartSLOduration=0.962592329 podStartE2EDuration="962.592329ms" podCreationTimestamp="2025-12-16 13:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 
13:11:51.95898189 +0000 UTC m=+1.263505257" watchObservedRunningTime="2025-12-16 13:11:51.962592329 +0000 UTC m=+1.267115700" Dec 16 13:11:51.989881 kubelet[3612]: I1216 13:11:51.989795 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-132" podStartSLOduration=0.989766477 podStartE2EDuration="989.766477ms" podCreationTimestamp="2025-12-16 13:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:51.979991808 +0000 UTC m=+1.284515176" watchObservedRunningTime="2025-12-16 13:11:51.989766477 +0000 UTC m=+1.294289842" Dec 16 13:11:52.010558 kubelet[3612]: I1216 13:11:52.010482 3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:52.011687 kubelet[3612]: I1216 13:11:52.011472 3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:52.021812 kubelet[3612]: E1216 13:11:52.021737 3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-132\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-132" Dec 16 13:11:52.023278 kubelet[3612]: E1216 13:11:52.023245 3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-132\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-132" Dec 16 13:11:52.027180 kubelet[3612]: I1216 13:11:52.027058 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-132" podStartSLOduration=1.027038914 podStartE2EDuration="1.027038914s" podCreationTimestamp="2025-12-16 13:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:51.990165372 +0000 UTC m=+1.294688743" 
watchObservedRunningTime="2025-12-16 13:11:52.027038914 +0000 UTC m=+1.331562284" Dec 16 13:11:52.608015 sudo[3627]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:55.407984 sudo[2362]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:55.431258 sshd[2361]: Connection closed by 139.178.68.195 port 57780 Dec 16 13:11:55.432744 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:55.442120 systemd[1]: sshd@8-172.31.28.132:22-139.178.68.195:57780.service: Deactivated successfully. Dec 16 13:11:55.452059 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:11:55.452617 systemd[1]: session-9.scope: Consumed 5.393s CPU time, 212.5M memory peak. Dec 16 13:11:55.458232 systemd-logind[1960]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:11:55.461627 systemd-logind[1960]: Removed session 9. Dec 16 13:11:55.559914 kubelet[3612]: I1216 13:11:55.559886 3612 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:11:55.560886 containerd[1982]: time="2025-12-16T13:11:55.560612350Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:11:55.561951 kubelet[3612]: I1216 13:11:55.561760 3612 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:11:56.263752 systemd[1]: Created slice kubepods-besteffort-pode6521d89_a745_46fb_b169_031514ed0c6c.slice - libcontainer container kubepods-besteffort-pode6521d89_a745_46fb_b169_031514ed0c6c.slice. 
Dec 16 13:11:56.288598 kubelet[3612]: I1216 13:11:56.288564 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-lib-modules\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288722 kubelet[3612]: I1216 13:11:56.288605 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-config-path\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288722 kubelet[3612]: I1216 13:11:56.288628 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-net\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288722 kubelet[3612]: I1216 13:11:56.288654 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6521d89-a745-46fb-b169-031514ed0c6c-kube-proxy\") pod \"kube-proxy-sbw9d\" (UID: \"e6521d89-a745-46fb-b169-031514ed0c6c\") " pod="kube-system/kube-proxy-sbw9d" Dec 16 13:11:56.288722 kubelet[3612]: I1216 13:11:56.288675 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6521d89-a745-46fb-b169-031514ed0c6c-xtables-lock\") pod \"kube-proxy-sbw9d\" (UID: \"e6521d89-a745-46fb-b169-031514ed0c6c\") " pod="kube-system/kube-proxy-sbw9d" Dec 16 13:11:56.288722 kubelet[3612]: I1216 13:11:56.288698 3612 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6521d89-a745-46fb-b169-031514ed0c6c-lib-modules\") pod \"kube-proxy-sbw9d\" (UID: \"e6521d89-a745-46fb-b169-031514ed0c6c\") " pod="kube-system/kube-proxy-sbw9d" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288718 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfph\" (UniqueName: \"kubernetes.io/projected/e6521d89-a745-46fb-b169-031514ed0c6c-kube-api-access-7zfph\") pod \"kube-proxy-sbw9d\" (UID: \"e6521d89-a745-46fb-b169-031514ed0c6c\") " pod="kube-system/kube-proxy-sbw9d" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288743 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-bpf-maps\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288772 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hostproc\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288793 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-cgroup\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288816 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cni-path\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.288965 kubelet[3612]: I1216 13:11:56.288836 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-etc-cni-netd\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288861 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-run\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288884 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-xtables-lock\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288906 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-clustermesh-secrets\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288929 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-kernel\") pod \"cilium-vh98b\" (UID: 
\"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288951 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hubble-tls\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.289201 kubelet[3612]: I1216 13:11:56.288976 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bnc\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-kube-api-access-l9bnc\") pod \"cilium-vh98b\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") " pod="kube-system/cilium-vh98b" Dec 16 13:11:56.292750 systemd[1]: Created slice kubepods-burstable-podc642fb9d_0374_4a8c_ad84_e3fad82ae9a4.slice - libcontainer container kubepods-burstable-podc642fb9d_0374_4a8c_ad84_e3fad82ae9a4.slice. Dec 16 13:11:56.475722 systemd[1]: Created slice kubepods-besteffort-podc3d8660b_c31d_4ae9_84fc_ed85b1aa10f6.slice - libcontainer container kubepods-besteffort-podc3d8660b_c31d_4ae9_84fc_ed85b1aa10f6.slice. 
Dec 16 13:11:56.491349 kubelet[3612]: I1216 13:11:56.490472 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-8wfg7\" (UID: \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\") " pod="kube-system/cilium-operator-6f9c7c5859-8wfg7" Dec 16 13:11:56.491656 kubelet[3612]: I1216 13:11:56.491634 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9qf\" (UniqueName: \"kubernetes.io/projected/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-kube-api-access-2l9qf\") pod \"cilium-operator-6f9c7c5859-8wfg7\" (UID: \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\") " pod="kube-system/cilium-operator-6f9c7c5859-8wfg7" Dec 16 13:11:56.588054 containerd[1982]: time="2025-12-16T13:11:56.588002752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbw9d,Uid:e6521d89-a745-46fb-b169-031514ed0c6c,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:56.607463 containerd[1982]: time="2025-12-16T13:11:56.607409286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vh98b,Uid:c642fb9d-0374-4a8c-ad84-e3fad82ae9a4,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:56.635682 containerd[1982]: time="2025-12-16T13:11:56.635630311Z" level=info msg="connecting to shim 32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2" address="unix:///run/containerd/s/bce79e8bb81567f34f4d9ddbb6f0d06953eb16bfae34f273816abecd6de2f3b6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:56.661836 containerd[1982]: time="2025-12-16T13:11:56.661791158Z" level=info msg="connecting to shim 502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:56.674321 systemd[1]: Started 
cri-containerd-32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2.scope - libcontainer container 32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2. Dec 16 13:11:56.699656 systemd[1]: Started cri-containerd-502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641.scope - libcontainer container 502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641. Dec 16 13:11:56.749886 containerd[1982]: time="2025-12-16T13:11:56.749815137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbw9d,Uid:e6521d89-a745-46fb-b169-031514ed0c6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2\"" Dec 16 13:11:56.759183 containerd[1982]: time="2025-12-16T13:11:56.759103095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vh98b,Uid:c642fb9d-0374-4a8c-ad84-e3fad82ae9a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\"" Dec 16 13:11:56.762840 containerd[1982]: time="2025-12-16T13:11:56.762749477Z" level=info msg="CreateContainer within sandbox \"32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:11:56.765523 containerd[1982]: time="2025-12-16T13:11:56.764455697Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:11:56.783301 containerd[1982]: time="2025-12-16T13:11:56.783263438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8wfg7,Uid:c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:56.806448 containerd[1982]: time="2025-12-16T13:11:56.806406933Z" level=info msg="Container 6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:56.809903 
containerd[1982]: time="2025-12-16T13:11:56.809859342Z" level=info msg="connecting to shim fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8" address="unix:///run/containerd/s/c0a9c8ed8d1cec99515e88254aa523c1eba64d1fd1b8af0a060bcda9ca7cd5e1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:56.823047 containerd[1982]: time="2025-12-16T13:11:56.822942792Z" level=info msg="CreateContainer within sandbox \"32a984786b187d031f8a76431440727004a0f3dc7f66313cfc44fbfb32dff0e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1\"" Dec 16 13:11:56.824390 containerd[1982]: time="2025-12-16T13:11:56.823708197Z" level=info msg="StartContainer for \"6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1\"" Dec 16 13:11:56.826145 containerd[1982]: time="2025-12-16T13:11:56.826110175Z" level=info msg="connecting to shim 6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1" address="unix:///run/containerd/s/bce79e8bb81567f34f4d9ddbb6f0d06953eb16bfae34f273816abecd6de2f3b6" protocol=ttrpc version=3 Dec 16 13:11:56.844713 systemd[1]: Started cri-containerd-fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8.scope - libcontainer container fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8. Dec 16 13:11:56.862684 systemd[1]: Started cri-containerd-6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1.scope - libcontainer container 6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1. 
Dec 16 13:11:56.942526 containerd[1982]: time="2025-12-16T13:11:56.942294214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8wfg7,Uid:c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\"" Dec 16 13:11:56.946984 containerd[1982]: time="2025-12-16T13:11:56.946936227Z" level=info msg="StartContainer for \"6de8290ed46adba2656145d954fedbbea30cb58a8cf18489b8f9c0e56aab77a1\" returns successfully" Dec 16 13:11:57.069195 kubelet[3612]: I1216 13:11:57.069132 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sbw9d" podStartSLOduration=1.069114253 podStartE2EDuration="1.069114253s" podCreationTimestamp="2025-12-16 13:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:57.041602166 +0000 UTC m=+6.346125537" watchObservedRunningTime="2025-12-16 13:11:57.069114253 +0000 UTC m=+6.373637625" Dec 16 13:12:05.326314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628884493.mount: Deactivated successfully. 
Dec 16 13:12:08.146358 containerd[1982]: time="2025-12-16T13:12:08.146293250Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:12:08.158659 containerd[1982]: time="2025-12-16T13:12:08.158602331Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:12:08.205168 containerd[1982]: time="2025-12-16T13:12:08.205088979Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:12:08.207574 containerd[1982]: time="2025-12-16T13:12:08.207520928Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.443002148s" Dec 16 13:12:08.207574 containerd[1982]: time="2025-12-16T13:12:08.207571045Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:12:08.219509 containerd[1982]: time="2025-12-16T13:12:08.218311192Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:12:08.222248 containerd[1982]: time="2025-12-16T13:12:08.222218508Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:12:08.257098 containerd[1982]: time="2025-12-16T13:12:08.256881584Z" level=info msg="Container 040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:08.259850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105913050.mount: Deactivated successfully. Dec 16 13:12:08.271297 containerd[1982]: time="2025-12-16T13:12:08.271253365Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\"" Dec 16 13:12:08.273425 containerd[1982]: time="2025-12-16T13:12:08.272304970Z" level=info msg="StartContainer for \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\"" Dec 16 13:12:08.273622 containerd[1982]: time="2025-12-16T13:12:08.273466497Z" level=info msg="connecting to shim 040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" protocol=ttrpc version=3 Dec 16 13:12:08.328676 systemd[1]: Started cri-containerd-040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e.scope - libcontainer container 040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e. Dec 16 13:12:08.378270 containerd[1982]: time="2025-12-16T13:12:08.378214341Z" level=info msg="StartContainer for \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" returns successfully" Dec 16 13:12:08.402061 systemd[1]: cri-containerd-040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e.scope: Deactivated successfully. 
Dec 16 13:12:08.430130 containerd[1982]: time="2025-12-16T13:12:08.430070813Z" level=info msg="received container exit event container_id:\"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" id:\"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" pid:4034 exited_at:{seconds:1765890728 nanos:405357096}" Dec 16 13:12:09.183889 containerd[1982]: time="2025-12-16T13:12:09.183779205Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:12:09.195768 containerd[1982]: time="2025-12-16T13:12:09.195716266Z" level=info msg="Container ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:09.207011 containerd[1982]: time="2025-12-16T13:12:09.206641345Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\"" Dec 16 13:12:09.211572 containerd[1982]: time="2025-12-16T13:12:09.211533208Z" level=info msg="StartContainer for \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\"" Dec 16 13:12:09.213783 containerd[1982]: time="2025-12-16T13:12:09.213688919Z" level=info msg="connecting to shim ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" protocol=ttrpc version=3 Dec 16 13:12:09.237728 systemd[1]: Started cri-containerd-ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a.scope - libcontainer container ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a. 
Dec 16 13:12:09.265573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e-rootfs.mount: Deactivated successfully. Dec 16 13:12:09.353506 containerd[1982]: time="2025-12-16T13:12:09.353369735Z" level=info msg="StartContainer for \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" returns successfully" Dec 16 13:12:09.369669 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:12:09.370750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:12:09.372586 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:12:09.377692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:12:09.382279 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:12:09.385574 systemd[1]: cri-containerd-ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a.scope: Deactivated successfully. Dec 16 13:12:09.393389 containerd[1982]: time="2025-12-16T13:12:09.393346587Z" level=info msg="received container exit event container_id:\"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" id:\"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" pid:4078 exited_at:{seconds:1765890729 nanos:393048424}" Dec 16 13:12:09.437135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:12:09.455641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a-rootfs.mount: Deactivated successfully. 
Dec 16 13:12:10.151835 containerd[1982]: time="2025-12-16T13:12:10.151778301Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:12:10.152959 containerd[1982]: time="2025-12-16T13:12:10.152807066Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:12:10.154282 containerd[1982]: time="2025-12-16T13:12:10.154243869Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:12:10.159508 containerd[1982]: time="2025-12-16T13:12:10.158655570Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.939326486s" Dec 16 13:12:10.159508 containerd[1982]: time="2025-12-16T13:12:10.158714220Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:12:10.168704 containerd[1982]: time="2025-12-16T13:12:10.168645885Z" level=info msg="CreateContainer within sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:12:10.178320 containerd[1982]: time="2025-12-16T13:12:10.178272781Z" level=info msg="Container 
60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:10.194649 containerd[1982]: time="2025-12-16T13:12:10.194603253Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:12:10.211531 containerd[1982]: time="2025-12-16T13:12:10.211263772Z" level=info msg="CreateContainer within sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\"" Dec 16 13:12:10.212740 containerd[1982]: time="2025-12-16T13:12:10.212713509Z" level=info msg="StartContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\"" Dec 16 13:12:10.213689 containerd[1982]: time="2025-12-16T13:12:10.213595450Z" level=info msg="Container a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:10.215855 containerd[1982]: time="2025-12-16T13:12:10.215601091Z" level=info msg="connecting to shim 60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29" address="unix:///run/containerd/s/c0a9c8ed8d1cec99515e88254aa523c1eba64d1fd1b8af0a060bcda9ca7cd5e1" protocol=ttrpc version=3 Dec 16 13:12:10.235675 containerd[1982]: time="2025-12-16T13:12:10.235179272Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\"" Dec 16 13:12:10.237197 containerd[1982]: time="2025-12-16T13:12:10.237164500Z" level=info msg="StartContainer for \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\"" Dec 16 13:12:10.244297 containerd[1982]: 
time="2025-12-16T13:12:10.244249412Z" level=info msg="connecting to shim a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" protocol=ttrpc version=3 Dec 16 13:12:10.253727 systemd[1]: Started cri-containerd-60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29.scope - libcontainer container 60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29. Dec 16 13:12:10.264624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536449227.mount: Deactivated successfully. Dec 16 13:12:10.290741 systemd[1]: Started cri-containerd-a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465.scope - libcontainer container a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465. Dec 16 13:12:10.349226 containerd[1982]: time="2025-12-16T13:12:10.348966356Z" level=info msg="StartContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" returns successfully" Dec 16 13:12:10.418627 containerd[1982]: time="2025-12-16T13:12:10.417453401Z" level=info msg="StartContainer for \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" returns successfully" Dec 16 13:12:10.505099 systemd[1]: cri-containerd-a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465.scope: Deactivated successfully. Dec 16 13:12:10.505477 systemd[1]: cri-containerd-a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465.scope: Consumed 41ms CPU time, 4.6M memory peak, 1M read from disk. 
Dec 16 13:12:10.511938 containerd[1982]: time="2025-12-16T13:12:10.511883249Z" level=info msg="received container exit event container_id:\"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" id:\"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" pid:4159 exited_at:{seconds:1765890730 nanos:511625705}" Dec 16 13:12:10.555435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465-rootfs.mount: Deactivated successfully. Dec 16 13:12:11.214041 containerd[1982]: time="2025-12-16T13:12:11.213922605Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:12:11.242456 containerd[1982]: time="2025-12-16T13:12:11.242418043Z" level=info msg="Container 74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:11.264380 containerd[1982]: time="2025-12-16T13:12:11.264336304Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\"" Dec 16 13:12:11.265777 containerd[1982]: time="2025-12-16T13:12:11.265748047Z" level=info msg="StartContainer for \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\"" Dec 16 13:12:11.269379 containerd[1982]: time="2025-12-16T13:12:11.269338695Z" level=info msg="connecting to shim 74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" protocol=ttrpc version=3 Dec 16 13:12:11.325672 systemd[1]: Started cri-containerd-74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96.scope - libcontainer container 
74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96. Dec 16 13:12:11.395590 kubelet[3612]: I1216 13:12:11.395519 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-8wfg7" podStartSLOduration=2.177310891 podStartE2EDuration="15.394146165s" podCreationTimestamp="2025-12-16 13:11:56 +0000 UTC" firstStartedPulling="2025-12-16 13:11:56.944163817 +0000 UTC m=+6.248687177" lastFinishedPulling="2025-12-16 13:12:10.160999104 +0000 UTC m=+19.465522451" observedRunningTime="2025-12-16 13:12:11.309472638 +0000 UTC m=+20.613996007" watchObservedRunningTime="2025-12-16 13:12:11.394146165 +0000 UTC m=+20.698669534" Dec 16 13:12:11.444082 containerd[1982]: time="2025-12-16T13:12:11.444025054Z" level=info msg="StartContainer for \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" returns successfully" Dec 16 13:12:11.505808 systemd[1]: cri-containerd-74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96.scope: Deactivated successfully. Dec 16 13:12:11.509153 containerd[1982]: time="2025-12-16T13:12:11.509083851Z" level=info msg="received container exit event container_id:\"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" id:\"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" pid:4215 exited_at:{seconds:1765890731 nanos:507714090}" Dec 16 13:12:11.551937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96-rootfs.mount: Deactivated successfully. Dec 16 13:12:12.219680 containerd[1982]: time="2025-12-16T13:12:12.219598129Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:12:12.268447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660610980.mount: Deactivated successfully. 
Dec 16 13:12:12.279682 containerd[1982]: time="2025-12-16T13:12:12.262844156Z" level=info msg="Container 7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:12.292248 containerd[1982]: time="2025-12-16T13:12:12.292020053Z" level=info msg="CreateContainer within sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\"" Dec 16 13:12:12.293161 containerd[1982]: time="2025-12-16T13:12:12.293124880Z" level=info msg="StartContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\"" Dec 16 13:12:12.294126 containerd[1982]: time="2025-12-16T13:12:12.294089009Z" level=info msg="connecting to shim 7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db" address="unix:///run/containerd/s/c8fb68f5e43bc22d35f3c55d336038cb4ff0aaaf7ae50e8ea53525a878640084" protocol=ttrpc version=3 Dec 16 13:12:12.320732 systemd[1]: Started cri-containerd-7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db.scope - libcontainer container 7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db. 
Dec 16 13:12:12.398508 containerd[1982]: time="2025-12-16T13:12:12.396337404Z" level=info msg="StartContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" returns successfully" Dec 16 13:12:13.212928 kubelet[3612]: I1216 13:12:13.212901 3612 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:12:13.324338 kubelet[3612]: I1216 13:12:13.323351 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vh98b" podStartSLOduration=5.86865792 podStartE2EDuration="17.323328709s" podCreationTimestamp="2025-12-16 13:11:56 +0000 UTC" firstStartedPulling="2025-12-16 13:11:56.762702096 +0000 UTC m=+6.067225457" lastFinishedPulling="2025-12-16 13:12:08.217372897 +0000 UTC m=+17.521896246" observedRunningTime="2025-12-16 13:12:13.272352397 +0000 UTC m=+22.576875765" watchObservedRunningTime="2025-12-16 13:12:13.323328709 +0000 UTC m=+22.627852075" Dec 16 13:12:13.341474 systemd[1]: Created slice kubepods-burstable-poda274236b_509b_4c6e_a755_598b6785c9ff.slice - libcontainer container kubepods-burstable-poda274236b_509b_4c6e_a755_598b6785c9ff.slice. Dec 16 13:12:13.348624 systemd[1]: Created slice kubepods-burstable-pod9e89cd21_fd3c_48e0_8b44_a3a2cb8f1dff.slice - libcontainer container kubepods-burstable-pod9e89cd21_fd3c_48e0_8b44_a3a2cb8f1dff.slice. 
Dec 16 13:12:13.493714 kubelet[3612]: I1216 13:12:13.493435 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff-config-volume\") pod \"coredns-66bc5c9577-flb7q\" (UID: \"9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff\") " pod="kube-system/coredns-66bc5c9577-flb7q" Dec 16 13:12:13.493714 kubelet[3612]: I1216 13:12:13.493534 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8rk\" (UniqueName: \"kubernetes.io/projected/a274236b-509b-4c6e-a755-598b6785c9ff-kube-api-access-fg8rk\") pod \"coredns-66bc5c9577-jcllp\" (UID: \"a274236b-509b-4c6e-a755-598b6785c9ff\") " pod="kube-system/coredns-66bc5c9577-jcllp" Dec 16 13:12:13.493714 kubelet[3612]: I1216 13:12:13.493565 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxws\" (UniqueName: \"kubernetes.io/projected/9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff-kube-api-access-nsxws\") pod \"coredns-66bc5c9577-flb7q\" (UID: \"9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff\") " pod="kube-system/coredns-66bc5c9577-flb7q" Dec 16 13:12:13.493714 kubelet[3612]: I1216 13:12:13.493588 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a274236b-509b-4c6e-a755-598b6785c9ff-config-volume\") pod \"coredns-66bc5c9577-jcllp\" (UID: \"a274236b-509b-4c6e-a755-598b6785c9ff\") " pod="kube-system/coredns-66bc5c9577-jcllp" Dec 16 13:12:13.651125 containerd[1982]: time="2025-12-16T13:12:13.651056215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jcllp,Uid:a274236b-509b-4c6e-a755-598b6785c9ff,Namespace:kube-system,Attempt:0,}" Dec 16 13:12:13.661783 containerd[1982]: time="2025-12-16T13:12:13.661741497Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-flb7q,Uid:9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff,Namespace:kube-system,Attempt:0,}" Dec 16 13:12:23.867306 systemd-networkd[1730]: cilium_host: Link UP Dec 16 13:12:23.867431 systemd-networkd[1730]: cilium_net: Link UP Dec 16 13:12:23.868678 systemd-networkd[1730]: cilium_net: Gained carrier Dec 16 13:12:23.868834 systemd-networkd[1730]: cilium_host: Gained carrier Dec 16 13:12:23.902684 systemd-networkd[1730]: cilium_host: Gained IPv6LL Dec 16 13:12:23.998803 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:12:23.998804 (udev-worker)[4373]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:12:24.320456 (udev-worker)[4384]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:12:24.331683 systemd-networkd[1730]: cilium_vxlan: Link UP Dec 16 13:12:24.331694 systemd-networkd[1730]: cilium_vxlan: Gained carrier Dec 16 13:12:24.571639 systemd-networkd[1730]: cilium_net: Gained IPv6LL Dec 16 13:12:25.787713 systemd-networkd[1730]: cilium_vxlan: Gained IPv6LL Dec 16 13:12:26.801616 kernel: NET: Registered PF_ALG protocol family Dec 16 13:12:27.550197 (udev-worker)[4386]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 13:12:27.555609 systemd-networkd[1730]: lxc_health: Link UP Dec 16 13:12:27.563524 systemd-networkd[1730]: lxc_health: Gained carrier Dec 16 13:12:27.726512 kernel: eth0: renamed from tmp8a57d Dec 16 13:12:27.728675 systemd-networkd[1730]: lxcbb275a66add0: Link UP Dec 16 13:12:27.730079 systemd-networkd[1730]: lxcbb275a66add0: Gained carrier Dec 16 13:12:27.776510 kernel: eth0: renamed from tmp3590e Dec 16 13:12:27.775434 systemd-networkd[1730]: lxc620ced8fe32a: Link UP Dec 16 13:12:27.780572 systemd-networkd[1730]: lxc620ced8fe32a: Gained carrier Dec 16 13:12:28.923856 systemd-networkd[1730]: lxc_health: Gained IPv6LL Dec 16 13:12:28.987803 systemd-networkd[1730]: lxc620ced8fe32a: Gained IPv6LL Dec 16 13:12:28.989647 systemd-networkd[1730]: lxcbb275a66add0: Gained IPv6LL Dec 16 13:12:29.213972 systemd[1]: Started sshd@9-172.31.28.132:22-139.178.68.195:45814.service - OpenSSH per-connection server daemon (139.178.68.195:45814). Dec 16 13:12:29.457873 sshd[4733]: Accepted publickey for core from 139.178.68.195 port 45814 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:29.461269 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:29.469556 systemd-logind[1960]: New session 10 of user core. Dec 16 13:12:29.478761 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:12:30.541532 sshd[4736]: Connection closed by 139.178.68.195 port 45814 Dec 16 13:12:30.541729 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:30.555823 systemd[1]: sshd@9-172.31.28.132:22-139.178.68.195:45814.service: Deactivated successfully. Dec 16 13:12:30.561953 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:12:30.564602 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:12:30.567202 systemd-logind[1960]: Removed session 10. 
Dec 16 13:12:31.030401 ntpd[2163]: Listen normally on 6 cilium_host 192.168.0.39:123 Dec 16 13:12:31.030475 ntpd[2163]: Listen normally on 7 cilium_net [fe80::2888:edff:fe83:47a5%4]:123 Dec 16 13:12:31.030534 ntpd[2163]: Listen normally on 8 cilium_host [fe80::5467:34ff:fe9d:5a48%5]:123 Dec 16 13:12:31.030562 ntpd[2163]: Listen normally on 9 cilium_vxlan [fe80::2418:1dff:fed2:1589%6]:123 Dec 16 13:12:31.030589 ntpd[2163]: Listen normally on 10 lxc_health [fe80::905c:cfff:fe1c:bb7d%8]:123 Dec 16 13:12:31.030615 ntpd[2163]: Listen normally on 11 lxcbb275a66add0 [fe80::60f5:c6ff:fe20:eac1%10]:123 Dec 16 13:12:31.030642 ntpd[2163]: Listen normally on 12 lxc620ced8fe32a [fe80::1c2b:85ff:fee6:f6e0%12]:123 Dec 16 13:12:32.867512 containerd[1982]: time="2025-12-16T13:12:32.865467331Z" level=info msg="connecting to shim 8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b" address="unix:///run/containerd/s/2d8bd286b3dd3366b7185310c33260ec5c48755580dab4056be6fa494cfa8ec8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:12:32.882343 containerd[1982]: 
time="2025-12-16T13:12:32.882186576Z" level=info msg="connecting to shim 3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c" address="unix:///run/containerd/s/cb815ee6fc38f177a7907ff4777454a4b1cd0a097164b81482fa955eb30e46ef" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:12:32.938608 systemd[1]: Started cri-containerd-8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b.scope - libcontainer container 8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b. Dec 16 13:12:32.958708 systemd[1]: Started cri-containerd-3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c.scope - libcontainer container 3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c. Dec 16 13:12:33.072027 containerd[1982]: time="2025-12-16T13:12:33.071847476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jcllp,Uid:a274236b-509b-4c6e-a755-598b6785c9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b\"" Dec 16 13:12:33.083263 containerd[1982]: time="2025-12-16T13:12:33.083219641Z" level=info msg="CreateContainer within sandbox \"8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:12:33.085516 containerd[1982]: time="2025-12-16T13:12:33.085445432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-flb7q,Uid:9e89cd21-fd3c-48e0-8b44-a3a2cb8f1dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c\"" Dec 16 13:12:33.093372 containerd[1982]: time="2025-12-16T13:12:33.093315432Z" level=info msg="CreateContainer within sandbox \"3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:12:33.221877 containerd[1982]: time="2025-12-16T13:12:33.221621790Z" level=info msg="Container 
27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:33.221877 containerd[1982]: time="2025-12-16T13:12:33.221833592Z" level=info msg="Container 5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:33.238302 containerd[1982]: time="2025-12-16T13:12:33.238234644Z" level=info msg="CreateContainer within sandbox \"8a57d298c11ad03ec00d19fedcce34005e7a65e3bdb2b1d1e980d04e5638eb7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a\"" Dec 16 13:12:33.239535 containerd[1982]: time="2025-12-16T13:12:33.238873765Z" level=info msg="StartContainer for \"5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a\"" Dec 16 13:12:33.240652 containerd[1982]: time="2025-12-16T13:12:33.240616349Z" level=info msg="connecting to shim 5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a" address="unix:///run/containerd/s/2d8bd286b3dd3366b7185310c33260ec5c48755580dab4056be6fa494cfa8ec8" protocol=ttrpc version=3 Dec 16 13:12:33.242984 containerd[1982]: time="2025-12-16T13:12:33.242803023Z" level=info msg="CreateContainer within sandbox \"3590e9859f430a209b3de13b31e0fc251f2ccff46072579c29cc08c2fb222f9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a\"" Dec 16 13:12:33.243573 containerd[1982]: time="2025-12-16T13:12:33.243553979Z" level=info msg="StartContainer for \"27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a\"" Dec 16 13:12:33.246273 containerd[1982]: time="2025-12-16T13:12:33.246045797Z" level=info msg="connecting to shim 27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a" address="unix:///run/containerd/s/cb815ee6fc38f177a7907ff4777454a4b1cd0a097164b81482fa955eb30e46ef" protocol=ttrpc version=3 Dec 16 13:12:33.274753 
systemd[1]: Started cri-containerd-5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a.scope - libcontainer container 5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a. Dec 16 13:12:33.310752 systemd[1]: Started cri-containerd-27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a.scope - libcontainer container 27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a. Dec 16 13:12:33.394342 containerd[1982]: time="2025-12-16T13:12:33.393804064Z" level=info msg="StartContainer for \"27db37c74ba138ec2f943c858e0961cc86bc7a60081af9838f24c9c75dd9472a\" returns successfully" Dec 16 13:12:33.394342 containerd[1982]: time="2025-12-16T13:12:33.394159982Z" level=info msg="StartContainer for \"5314403ae610bf7d77b29729d19b4434303bb9b89761bec37abbda408f68e20a\" returns successfully" Dec 16 13:12:33.849375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276909904.mount: Deactivated successfully. Dec 16 13:12:34.354883 kubelet[3612]: I1216 13:12:34.352733 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-flb7q" podStartSLOduration=38.352719608 podStartE2EDuration="38.352719608s" podCreationTimestamp="2025-12-16 13:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:12:34.35044429 +0000 UTC m=+43.654967660" watchObservedRunningTime="2025-12-16 13:12:34.352719608 +0000 UTC m=+43.657242975" Dec 16 13:12:34.369345 kubelet[3612]: I1216 13:12:34.369281 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jcllp" podStartSLOduration=38.36925829 podStartE2EDuration="38.36925829s" podCreationTimestamp="2025-12-16 13:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:12:34.368172723 +0000 UTC m=+43.672696092" 
watchObservedRunningTime="2025-12-16 13:12:34.36925829 +0000 UTC m=+43.673781671" Dec 16 13:12:35.574463 systemd[1]: Started sshd@10-172.31.28.132:22-139.178.68.195:40724.service - OpenSSH per-connection server daemon (139.178.68.195:40724). Dec 16 13:12:35.785796 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 40724 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:35.789394 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:35.795203 systemd-logind[1960]: New session 11 of user core. Dec 16 13:12:35.800964 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:12:36.082937 sshd[4918]: Connection closed by 139.178.68.195 port 40724 Dec 16 13:12:36.083557 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:36.090921 systemd-logind[1960]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:12:36.091983 systemd[1]: sshd@10-172.31.28.132:22-139.178.68.195:40724.service: Deactivated successfully. Dec 16 13:12:36.094880 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:12:36.097897 systemd-logind[1960]: Removed session 11. Dec 16 13:12:41.117703 systemd[1]: Started sshd@11-172.31.28.132:22-139.178.68.195:36764.service - OpenSSH per-connection server daemon (139.178.68.195:36764). Dec 16 13:12:41.317334 sshd[4938]: Accepted publickey for core from 139.178.68.195 port 36764 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:41.319982 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:41.326354 systemd-logind[1960]: New session 12 of user core. Dec 16 13:12:41.333701 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:12:41.643607 sshd[4941]: Connection closed by 139.178.68.195 port 36764 Dec 16 13:12:41.644344 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:41.648521 systemd[1]: sshd@11-172.31.28.132:22-139.178.68.195:36764.service: Deactivated successfully. Dec 16 13:12:41.650255 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:12:41.651662 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:12:41.653567 systemd-logind[1960]: Removed session 12. Dec 16 13:12:46.682757 systemd[1]: Started sshd@12-172.31.28.132:22-139.178.68.195:36774.service - OpenSSH per-connection server daemon (139.178.68.195:36774). Dec 16 13:12:46.852791 sshd[4961]: Accepted publickey for core from 139.178.68.195 port 36774 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:46.854087 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:46.859302 systemd-logind[1960]: New session 13 of user core. Dec 16 13:12:46.869739 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:12:47.085802 sshd[4964]: Connection closed by 139.178.68.195 port 36774 Dec 16 13:12:47.086445 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:47.090058 systemd[1]: sshd@12-172.31.28.132:22-139.178.68.195:36774.service: Deactivated successfully. Dec 16 13:12:47.092171 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:12:47.094573 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:12:47.095911 systemd-logind[1960]: Removed session 13. Dec 16 13:12:52.119609 systemd[1]: Started sshd@13-172.31.28.132:22-139.178.68.195:44556.service - OpenSSH per-connection server daemon (139.178.68.195:44556). 
Dec 16 13:12:52.289185 sshd[4979]: Accepted publickey for core from 139.178.68.195 port 44556 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:52.290777 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:52.297590 systemd-logind[1960]: New session 14 of user core. Dec 16 13:12:52.307933 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:12:52.504051 sshd[4982]: Connection closed by 139.178.68.195 port 44556 Dec 16 13:12:52.504880 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:52.509069 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:12:52.509423 systemd[1]: sshd@13-172.31.28.132:22-139.178.68.195:44556.service: Deactivated successfully. Dec 16 13:12:52.511450 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:12:52.513550 systemd-logind[1960]: Removed session 14. Dec 16 13:12:52.537110 systemd[1]: Started sshd@14-172.31.28.132:22-139.178.68.195:44572.service - OpenSSH per-connection server daemon (139.178.68.195:44572). Dec 16 13:12:52.710528 sshd[4995]: Accepted publickey for core from 139.178.68.195 port 44572 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:52.711856 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:52.716391 systemd-logind[1960]: New session 15 of user core. Dec 16 13:12:52.723842 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:12:53.013545 sshd[4998]: Connection closed by 139.178.68.195 port 44572 Dec 16 13:12:53.014729 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:53.021420 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:12:53.025010 systemd[1]: sshd@14-172.31.28.132:22-139.178.68.195:44572.service: Deactivated successfully. 
Dec 16 13:12:53.030864 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:12:53.033810 systemd-logind[1960]: Removed session 15. Dec 16 13:12:53.050082 systemd[1]: Started sshd@15-172.31.28.132:22-139.178.68.195:44582.service - OpenSSH per-connection server daemon (139.178.68.195:44582). Dec 16 13:12:53.230045 sshd[5008]: Accepted publickey for core from 139.178.68.195 port 44582 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:53.231547 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:53.237994 systemd-logind[1960]: New session 16 of user core. Dec 16 13:12:53.242730 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:12:53.465793 sshd[5011]: Connection closed by 139.178.68.195 port 44582 Dec 16 13:12:53.467805 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:53.471861 systemd[1]: sshd@15-172.31.28.132:22-139.178.68.195:44582.service: Deactivated successfully. Dec 16 13:12:53.474343 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:12:53.476215 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:12:53.478385 systemd-logind[1960]: Removed session 16. Dec 16 13:12:58.498633 systemd[1]: Started sshd@16-172.31.28.132:22-139.178.68.195:44596.service - OpenSSH per-connection server daemon (139.178.68.195:44596). Dec 16 13:12:58.659352 sshd[5025]: Accepted publickey for core from 139.178.68.195 port 44596 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:12:58.660921 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:58.666564 systemd-logind[1960]: New session 17 of user core. Dec 16 13:12:58.678723 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:12:58.866812 sshd[5028]: Connection closed by 139.178.68.195 port 44596 Dec 16 13:12:58.867382 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:58.871518 systemd[1]: sshd@16-172.31.28.132:22-139.178.68.195:44596.service: Deactivated successfully. Dec 16 13:12:58.873629 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:12:58.874386 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:12:58.877174 systemd-logind[1960]: Removed session 17. Dec 16 13:13:03.906180 systemd[1]: Started sshd@17-172.31.28.132:22-139.178.68.195:42930.service - OpenSSH per-connection server daemon (139.178.68.195:42930). Dec 16 13:13:04.088250 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 42930 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:04.089521 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:04.095163 systemd-logind[1960]: New session 18 of user core. Dec 16 13:13:04.101975 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:13:04.313672 sshd[5047]: Connection closed by 139.178.68.195 port 42930 Dec 16 13:13:04.314915 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:04.320533 systemd[1]: sshd@17-172.31.28.132:22-139.178.68.195:42930.service: Deactivated successfully. Dec 16 13:13:04.323419 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:13:04.324863 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:13:04.327198 systemd-logind[1960]: Removed session 18. Dec 16 13:13:04.353630 systemd[1]: Started sshd@18-172.31.28.132:22-139.178.68.195:42940.service - OpenSSH per-connection server daemon (139.178.68.195:42940). 
Dec 16 13:13:04.529781 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 42940 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:04.531248 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:04.537283 systemd-logind[1960]: New session 19 of user core. Dec 16 13:13:04.546730 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:13:07.958820 sshd[5062]: Connection closed by 139.178.68.195 port 42940 Dec 16 13:13:07.960174 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:07.964668 systemd[1]: sshd@18-172.31.28.132:22-139.178.68.195:42940.service: Deactivated successfully. Dec 16 13:13:07.966714 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:13:07.968238 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:13:07.969970 systemd-logind[1960]: Removed session 19. Dec 16 13:13:07.992505 systemd[1]: Started sshd@19-172.31.28.132:22-139.178.68.195:42944.service - OpenSSH per-connection server daemon (139.178.68.195:42944). Dec 16 13:13:08.183736 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 42944 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:08.185037 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:08.190415 systemd-logind[1960]: New session 20 of user core. Dec 16 13:13:08.195903 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:13:09.303244 sshd[5076]: Connection closed by 139.178.68.195 port 42944 Dec 16 13:13:09.304722 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:09.309449 systemd[1]: sshd@19-172.31.28.132:22-139.178.68.195:42944.service: Deactivated successfully. Dec 16 13:13:09.312895 systemd[1]: session-20.scope: Deactivated successfully. 
Dec 16 13:13:09.314072 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:13:09.316372 systemd-logind[1960]: Removed session 20. Dec 16 13:13:09.340828 systemd[1]: Started sshd@20-172.31.28.132:22-139.178.68.195:42952.service - OpenSSH per-connection server daemon (139.178.68.195:42952). Dec 16 13:13:09.526706 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 42952 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:09.528136 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:09.534055 systemd-logind[1960]: New session 21 of user core. Dec 16 13:13:09.541748 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:13:09.942821 sshd[5094]: Connection closed by 139.178.68.195 port 42952 Dec 16 13:13:09.943454 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:09.948133 systemd[1]: sshd@20-172.31.28.132:22-139.178.68.195:42952.service: Deactivated successfully. Dec 16 13:13:09.950382 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:13:09.951660 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:13:09.953337 systemd-logind[1960]: Removed session 21. Dec 16 13:13:09.981800 systemd[1]: Started sshd@21-172.31.28.132:22-139.178.68.195:42962.service - OpenSSH per-connection server daemon (139.178.68.195:42962). Dec 16 13:13:10.155543 sshd[5104]: Accepted publickey for core from 139.178.68.195 port 42962 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:10.156967 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:10.162389 systemd-logind[1960]: New session 22 of user core. Dec 16 13:13:10.166710 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:13:10.369528 sshd[5107]: Connection closed by 139.178.68.195 port 42962 Dec 16 13:13:10.370078 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:10.373869 systemd[1]: sshd@21-172.31.28.132:22-139.178.68.195:42962.service: Deactivated successfully. Dec 16 13:13:10.376140 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:13:10.377473 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:13:10.378931 systemd-logind[1960]: Removed session 22. Dec 16 13:13:15.403743 systemd[1]: Started sshd@22-172.31.28.132:22-139.178.68.195:59914.service - OpenSSH per-connection server daemon (139.178.68.195:59914). Dec 16 13:13:15.578607 sshd[5124]: Accepted publickey for core from 139.178.68.195 port 59914 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:15.580037 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:15.585820 systemd-logind[1960]: New session 23 of user core. Dec 16 13:13:15.591914 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:13:15.776118 sshd[5127]: Connection closed by 139.178.68.195 port 59914 Dec 16 13:13:15.776817 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:15.781468 systemd[1]: sshd@22-172.31.28.132:22-139.178.68.195:59914.service: Deactivated successfully. Dec 16 13:13:15.783408 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:13:15.784949 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:13:15.786330 systemd-logind[1960]: Removed session 23. Dec 16 13:13:20.813604 systemd[1]: Started sshd@23-172.31.28.132:22-139.178.68.195:41032.service - OpenSSH per-connection server daemon (139.178.68.195:41032). 
Dec 16 13:13:21.037031 sshd[5139]: Accepted publickey for core from 139.178.68.195 port 41032 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:21.038998 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:21.044273 systemd-logind[1960]: New session 24 of user core. Dec 16 13:13:21.049703 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:13:21.307761 sshd[5142]: Connection closed by 139.178.68.195 port 41032 Dec 16 13:13:21.308425 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:21.315888 systemd[1]: sshd@23-172.31.28.132:22-139.178.68.195:41032.service: Deactivated successfully. Dec 16 13:13:21.317896 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:13:21.319004 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:13:21.321011 systemd-logind[1960]: Removed session 24. Dec 16 13:13:26.337868 systemd[1]: Started sshd@24-172.31.28.132:22-139.178.68.195:41040.service - OpenSSH per-connection server daemon (139.178.68.195:41040). Dec 16 13:13:26.512346 sshd[5155]: Accepted publickey for core from 139.178.68.195 port 41040 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:26.514535 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:26.520219 systemd-logind[1960]: New session 25 of user core. Dec 16 13:13:26.529758 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 13:13:26.724940 sshd[5158]: Connection closed by 139.178.68.195 port 41040 Dec 16 13:13:26.725555 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:26.729504 systemd[1]: sshd@24-172.31.28.132:22-139.178.68.195:41040.service: Deactivated successfully. Dec 16 13:13:26.731952 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 16 13:13:26.732826 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:13:26.734514 systemd-logind[1960]: Removed session 25. Dec 16 13:13:26.766762 systemd[1]: Started sshd@25-172.31.28.132:22-139.178.68.195:41048.service - OpenSSH per-connection server daemon (139.178.68.195:41048). Dec 16 13:13:26.942931 sshd[5170]: Accepted publickey for core from 139.178.68.195 port 41048 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:26.944422 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:26.949257 systemd-logind[1960]: New session 26 of user core. Dec 16 13:13:26.954674 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 13:13:29.424516 containerd[1982]: time="2025-12-16T13:13:29.424096430Z" level=info msg="StopContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" with timeout 30 (s)" Dec 16 13:13:29.425468 containerd[1982]: time="2025-12-16T13:13:29.425437623Z" level=info msg="Stop container \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" with signal terminated" Dec 16 13:13:29.485946 systemd[1]: cri-containerd-60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29.scope: Deactivated successfully. 
Dec 16 13:13:29.490289 containerd[1982]: time="2025-12-16T13:13:29.490186780Z" level=info msg="received container exit event container_id:\"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" id:\"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" pid:4147 exited_at:{seconds:1765890809 nanos:489849895}" Dec 16 13:13:29.509864 containerd[1982]: time="2025-12-16T13:13:29.509647394Z" level=info msg="StopContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" with timeout 2 (s)" Dec 16 13:13:29.511734 containerd[1982]: time="2025-12-16T13:13:29.511697806Z" level=info msg="Stop container \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" with signal terminated" Dec 16 13:13:29.513691 containerd[1982]: time="2025-12-16T13:13:29.513476184Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:13:29.528065 systemd-networkd[1730]: lxc_health: Link DOWN Dec 16 13:13:29.528078 systemd-networkd[1730]: lxc_health: Lost carrier Dec 16 13:13:29.552757 systemd[1]: cri-containerd-7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db.scope: Deactivated successfully. Dec 16 13:13:29.553265 systemd[1]: cri-containerd-7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db.scope: Consumed 8.291s CPU time, 198.7M memory peak, 75.6M read from disk, 13.3M written to disk. 
Dec 16 13:13:29.557191 containerd[1982]: time="2025-12-16T13:13:29.556454038Z" level=info msg="received container exit event container_id:\"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" id:\"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" pid:4252 exited_at:{seconds:1765890809 nanos:556123673}"
Dec 16 13:13:29.586971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29-rootfs.mount: Deactivated successfully.
Dec 16 13:13:29.608374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db-rootfs.mount: Deactivated successfully.
Dec 16 13:13:29.619154 containerd[1982]: time="2025-12-16T13:13:29.619105112Z" level=info msg="StopContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" returns successfully"
Dec 16 13:13:29.620932 containerd[1982]: time="2025-12-16T13:13:29.620668574Z" level=info msg="StopPodSandbox for \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\""
Dec 16 13:13:29.625501 containerd[1982]: time="2025-12-16T13:13:29.625263352Z" level=info msg="Container to stop \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.629499 containerd[1982]: time="2025-12-16T13:13:29.629450651Z" level=info msg="StopContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" returns successfully"
Dec 16 13:13:29.631340 containerd[1982]: time="2025-12-16T13:13:29.631306202Z" level=info msg="StopPodSandbox for \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\""
Dec 16 13:13:29.632129 containerd[1982]: time="2025-12-16T13:13:29.631555254Z" level=info msg="Container to stop \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.632129 containerd[1982]: time="2025-12-16T13:13:29.631577475Z" level=info msg="Container to stop \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.632129 containerd[1982]: time="2025-12-16T13:13:29.631594008Z" level=info msg="Container to stop \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.632129 containerd[1982]: time="2025-12-16T13:13:29.631612585Z" level=info msg="Container to stop \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.632129 containerd[1982]: time="2025-12-16T13:13:29.631625339Z" level=info msg="Container to stop \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:13:29.643980 systemd[1]: cri-containerd-fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8.scope: Deactivated successfully.
Dec 16 13:13:29.646318 systemd[1]: cri-containerd-502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641.scope: Deactivated successfully.
Dec 16 13:13:29.650714 containerd[1982]: time="2025-12-16T13:13:29.650638971Z" level=info msg="received sandbox exit event container_id:\"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" id:\"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" exit_status:137 exited_at:{seconds:1765890809 nanos:650328467}" monitor_name=podsandbox
Dec 16 13:13:29.654343 containerd[1982]: time="2025-12-16T13:13:29.654304509Z" level=info msg="received sandbox exit event container_id:\"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" id:\"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" exit_status:137 exited_at:{seconds:1765890809 nanos:653886393}" monitor_name=podsandbox
Dec 16 13:13:29.692091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641-rootfs.mount: Deactivated successfully.
Dec 16 13:13:29.696335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8-rootfs.mount: Deactivated successfully.
Dec 16 13:13:29.711572 containerd[1982]: time="2025-12-16T13:13:29.711362821Z" level=info msg="shim disconnected" id=502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641 namespace=k8s.io
Dec 16 13:13:29.711572 containerd[1982]: time="2025-12-16T13:13:29.711571816Z" level=warning msg="cleaning up after shim disconnected" id=502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641 namespace=k8s.io
Dec 16 13:13:29.712584 containerd[1982]: time="2025-12-16T13:13:29.711582804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:13:29.714637 containerd[1982]: time="2025-12-16T13:13:29.714597264Z" level=info msg="shim disconnected" id=fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8 namespace=k8s.io
Dec 16 13:13:29.714637 containerd[1982]: time="2025-12-16T13:13:29.714626652Z" level=warning msg="cleaning up after shim disconnected" id=fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8 namespace=k8s.io
Dec 16 13:13:29.714844 containerd[1982]: time="2025-12-16T13:13:29.714634581Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:13:29.737810 containerd[1982]: time="2025-12-16T13:13:29.737769716Z" level=info msg="TearDown network for sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" successfully"
Dec 16 13:13:29.737810 containerd[1982]: time="2025-12-16T13:13:29.737806368Z" level=info msg="StopPodSandbox for \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" returns successfully"
Dec 16 13:13:29.739849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8-shm.mount: Deactivated successfully.
Dec 16 13:13:29.741882 containerd[1982]: time="2025-12-16T13:13:29.741288496Z" level=info msg="received sandbox container exit event sandbox_id:\"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" exit_status:137 exited_at:{seconds:1765890809 nanos:650328467}" monitor_name=criService
Dec 16 13:13:29.741882 containerd[1982]: time="2025-12-16T13:13:29.741643201Z" level=info msg="received sandbox container exit event sandbox_id:\"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" exit_status:137 exited_at:{seconds:1765890809 nanos:653886393}" monitor_name=criService
Dec 16 13:13:29.741967 containerd[1982]: time="2025-12-16T13:13:29.741933291Z" level=info msg="TearDown network for sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" successfully"
Dec 16 13:13:29.741967 containerd[1982]: time="2025-12-16T13:13:29.741951518Z" level=info msg="StopPodSandbox for \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" returns successfully"
Dec 16 13:13:29.843828 kubelet[3612]: I1216 13:13:29.843784 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-clustermesh-secrets\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.843828 kubelet[3612]: I1216 13:13:29.843835 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-bpf-maps\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843850 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-cgroup\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843865 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-net\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843878 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hostproc\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843893 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-run\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843910 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hubble-tls\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844256 kubelet[3612]: I1216 13:13:29.843934 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9bnc\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-kube-api-access-l9bnc\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.843950 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-etc-cni-netd\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.843981 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-kernel\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.843997 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9qf\" (UniqueName: \"kubernetes.io/projected/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-kube-api-access-2l9qf\") pod \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\" (UID: \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.844012 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cni-path\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.844026 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-xtables-lock\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844406 kubelet[3612]: I1216 13:13:29.844043 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-cilium-config-path\") pod \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\" (UID: \"c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6\") "
Dec 16 13:13:29.844596 kubelet[3612]: I1216 13:13:29.844057 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-lib-modules\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.844596 kubelet[3612]: I1216 13:13:29.844073 3612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-config-path\") pod \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\" (UID: \"c642fb9d-0374-4a8c-ad84-e3fad82ae9a4\") "
Dec 16 13:13:29.846128 kubelet[3612]: I1216 13:13:29.846041 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:13:29.846724 kubelet[3612]: I1216 13:13:29.846679 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.846886 kubelet[3612]: I1216 13:13:29.846828 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.846886 kubelet[3612]: I1216 13:13:29.846846 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.847074 kubelet[3612]: I1216 13:13:29.846861 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.847218 kubelet[3612]: I1216 13:13:29.847140 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.847599 kubelet[3612]: I1216 13:13:29.847474 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.847599 kubelet[3612]: I1216 13:13:29.847549 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.847599 kubelet[3612]: I1216 13:13:29.847564 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.848648 kubelet[3612]: I1216 13:13:29.848616 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.849352 kubelet[3612]: I1216 13:13:29.849298 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:13:29.854988 kubelet[3612]: I1216 13:13:29.854723 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6" (UID: "c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:13:29.858570 kubelet[3612]: I1216 13:13:29.858460 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-kube-api-access-2l9qf" (OuterVolumeSpecName: "kube-api-access-2l9qf") pod "c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6" (UID: "c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6"). InnerVolumeSpecName "kube-api-access-2l9qf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:13:29.858814 kubelet[3612]: I1216 13:13:29.858551 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:13:29.858982 kubelet[3612]: I1216 13:13:29.858896 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-kube-api-access-l9bnc" (OuterVolumeSpecName: "kube-api-access-l9bnc") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "kube-api-access-l9bnc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:13:29.858982 kubelet[3612]: I1216 13:13:29.858913 3612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" (UID: "c642fb9d-0374-4a8c-ad84-e3fad82ae9a4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.944961 3612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-kernel\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.944996 3612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2l9qf\" (UniqueName: \"kubernetes.io/projected/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-kube-api-access-2l9qf\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945007 3612 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cni-path\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945016 3612 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-xtables-lock\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945028 3612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6-cilium-config-path\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945036 3612 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-lib-modules\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945044 3612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-config-path\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945085 kubelet[3612]: I1216 13:13:29.945052 3612 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-clustermesh-secrets\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945059 3612 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-bpf-maps\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945066 3612 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-cgroup\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945073 3612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-host-proc-sys-net\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945093 3612 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hostproc\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945100 3612 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-cilium-run\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945107 3612 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-hubble-tls\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945114 3612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9bnc\" (UniqueName: \"kubernetes.io/projected/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-kube-api-access-l9bnc\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:29.945461 kubelet[3612]: I1216 13:13:29.945121 3612 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4-etc-cni-netd\") on node \"ip-172-31-28-132\" DevicePath \"\""
Dec 16 13:13:30.471520 kubelet[3612]: I1216 13:13:30.471172 3612 scope.go:117] "RemoveContainer" containerID="60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29"
Dec 16 13:13:30.478132 containerd[1982]: time="2025-12-16T13:13:30.478099772Z" level=info msg="RemoveContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\""
Dec 16 13:13:30.480868 systemd[1]: Removed slice kubepods-besteffort-podc3d8660b_c31d_4ae9_84fc_ed85b1aa10f6.slice - libcontainer container kubepods-besteffort-podc3d8660b_c31d_4ae9_84fc_ed85b1aa10f6.slice.
Dec 16 13:13:30.486038 containerd[1982]: time="2025-12-16T13:13:30.485997893Z" level=info msg="RemoveContainer for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" returns successfully"
Dec 16 13:13:30.487745 kubelet[3612]: I1216 13:13:30.487720 3612 scope.go:117] "RemoveContainer" containerID="60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29"
Dec 16 13:13:30.488427 containerd[1982]: time="2025-12-16T13:13:30.488351141Z" level=error msg="ContainerStatus for \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\": not found"
Dec 16 13:13:30.489430 kubelet[3612]: E1216 13:13:30.488630 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\": not found" containerID="60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29"
Dec 16 13:13:30.489430 kubelet[3612]: I1216 13:13:30.488660 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29"} err="failed to get container status \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\": rpc error: code = NotFound desc = an error occurred when try to find container \"60a7464e50d1cde486b6aa1326dcaf0de5fb14a46b9cf231de46a748355d7a29\": not found"
Dec 16 13:13:30.493831 kubelet[3612]: I1216 13:13:30.493795 3612 scope.go:117] "RemoveContainer" containerID="7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db"
Dec 16 13:13:30.498548 containerd[1982]: time="2025-12-16T13:13:30.498336665Z" level=info msg="RemoveContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\""
Dec 16 13:13:30.501178 systemd[1]: Removed slice kubepods-burstable-podc642fb9d_0374_4a8c_ad84_e3fad82ae9a4.slice - libcontainer container kubepods-burstable-podc642fb9d_0374_4a8c_ad84_e3fad82ae9a4.slice.
Dec 16 13:13:30.501455 systemd[1]: kubepods-burstable-podc642fb9d_0374_4a8c_ad84_e3fad82ae9a4.slice: Consumed 8.426s CPU time, 199M memory peak, 76.7M read from disk, 13.3M written to disk.
Dec 16 13:13:30.506915 containerd[1982]: time="2025-12-16T13:13:30.506862888Z" level=info msg="RemoveContainer for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" returns successfully"
Dec 16 13:13:30.507901 kubelet[3612]: I1216 13:13:30.507858 3612 scope.go:117] "RemoveContainer" containerID="74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96"
Dec 16 13:13:30.510557 containerd[1982]: time="2025-12-16T13:13:30.510525717Z" level=info msg="RemoveContainer for \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\""
Dec 16 13:13:30.518000 containerd[1982]: time="2025-12-16T13:13:30.517962877Z" level=info msg="RemoveContainer for \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" returns successfully"
Dec 16 13:13:30.518363 kubelet[3612]: I1216 13:13:30.518206 3612 scope.go:117] "RemoveContainer" containerID="a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465"
Dec 16 13:13:30.521726 containerd[1982]: time="2025-12-16T13:13:30.521657423Z" level=info msg="RemoveContainer for \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\""
Dec 16 13:13:30.537526 containerd[1982]: time="2025-12-16T13:13:30.528218278Z" level=info msg="RemoveContainer for \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" returns successfully"
Dec 16 13:13:30.537526 containerd[1982]: time="2025-12-16T13:13:30.530077852Z" level=info msg="RemoveContainer for \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\""
Dec 16 13:13:30.537526 containerd[1982]: time="2025-12-16T13:13:30.536663709Z" level=info msg="RemoveContainer for \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" returns successfully"
Dec 16 13:13:30.537698 kubelet[3612]: I1216 13:13:30.528508 3612 scope.go:117] "RemoveContainer" containerID="ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a"
Dec 16 13:13:30.537698 kubelet[3612]: I1216 13:13:30.537020 3612 scope.go:117] "RemoveContainer" containerID="040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e"
Dec 16 13:13:30.538970 containerd[1982]: time="2025-12-16T13:13:30.538932763Z" level=info msg="RemoveContainer for \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\""
Dec 16 13:13:30.544571 containerd[1982]: time="2025-12-16T13:13:30.544523522Z" level=info msg="RemoveContainer for \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" returns successfully"
Dec 16 13:13:30.544769 kubelet[3612]: I1216 13:13:30.544755 3612 scope.go:117] "RemoveContainer" containerID="7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db"
Dec 16 13:13:30.545038 containerd[1982]: time="2025-12-16T13:13:30.544998041Z" level=error msg="ContainerStatus for \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\": not found"
Dec 16 13:13:30.545169 kubelet[3612]: E1216 13:13:30.545144 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\": not found" containerID="7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db"
Dec 16 13:13:30.545244 kubelet[3612]: I1216 13:13:30.545176 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db"} err="failed to get container status \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d1ee1ca59d5ce69372b966385d656c8d6d00e12c07a4df62ea4ba7e8106c7db\": not found"
Dec 16 13:13:30.545244 kubelet[3612]: I1216 13:13:30.545196 3612 scope.go:117] "RemoveContainer" containerID="74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96"
Dec 16 13:13:30.545386 containerd[1982]: time="2025-12-16T13:13:30.545357118Z" level=error msg="ContainerStatus for \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\": not found"
Dec 16 13:13:30.545561 kubelet[3612]: E1216 13:13:30.545534 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\": not found" containerID="74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96"
Dec 16 13:13:30.545623 kubelet[3612]: I1216 13:13:30.545559 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96"} err="failed to get container status \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\": rpc error: code = NotFound desc = an error occurred when try to find container \"74b6e2ffcc194cee26f58046a852e25fe151c6f83b1f89ea0f913aa8b5c67f96\": not found"
Dec 16 13:13:30.545623 kubelet[3612]: I1216 13:13:30.545587 3612 scope.go:117] "RemoveContainer" containerID="a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465"
Dec 16 13:13:30.545800 containerd[1982]: time="2025-12-16T13:13:30.545778610Z" level=error msg="ContainerStatus for \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\": not found"
Dec 16 13:13:30.545956 kubelet[3612]: E1216 13:13:30.545935 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\": not found" containerID="a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465"
Dec 16 13:13:30.545995 kubelet[3612]: I1216 13:13:30.545956 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465"} err="failed to get container status \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0956b243892836212e53e2a4e03f2430cd408529b0d5ae97ec8047075cee465\": not found"
Dec 16 13:13:30.545995 kubelet[3612]: I1216 13:13:30.545968 3612 scope.go:117] "RemoveContainer" containerID="ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a"
Dec 16 13:13:30.546169 containerd[1982]: time="2025-12-16T13:13:30.546131418Z" level=error msg="ContainerStatus for \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\": not found"
Dec 16 13:13:30.546287 kubelet[3612]: E1216 13:13:30.546262 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\": not found" containerID="ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a"
Dec 16 13:13:30.546351 kubelet[3612]: I1216 13:13:30.546285 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a"} err="failed to get container status \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac08def9bfcce1bf90b7406045abb4f5103247468c2ebbde1436de714a1b342a\": not found"
Dec 16 13:13:30.546351 kubelet[3612]: I1216 13:13:30.546298 3612 scope.go:117] "RemoveContainer" containerID="040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e"
Dec 16 13:13:30.546500 containerd[1982]: time="2025-12-16T13:13:30.546463055Z" level=error msg="ContainerStatus for \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\": not found"
Dec 16 13:13:30.546707 kubelet[3612]: E1216 13:13:30.546675 3612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\": not found" containerID="040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e"
Dec 16 13:13:30.546707 kubelet[3612]: I1216 13:13:30.546698 3612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e"} err="failed to get container status \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\": rpc error: code = NotFound desc = an error occurred when try to find container \"040aa3ef5bf1883c20f129fada785b51ac5a36768ddfcdfe3bc108506dc2393e\": not found"
Dec 16 13:13:30.585208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641-shm.mount: Deactivated successfully.
Dec 16 13:13:30.585322 systemd[1]: var-lib-kubelet-pods-c3d8660b\x2dc31d\x2d4ae9\x2d84fc\x2ded85b1aa10f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2l9qf.mount: Deactivated successfully.
Dec 16 13:13:30.585384 systemd[1]: var-lib-kubelet-pods-c642fb9d\x2d0374\x2d4a8c\x2dad84\x2de3fad82ae9a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9bnc.mount: Deactivated successfully.
Dec 16 13:13:30.585446 systemd[1]: var-lib-kubelet-pods-c642fb9d\x2d0374\x2d4a8c\x2dad84\x2de3fad82ae9a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 13:13:30.585535 systemd[1]: var-lib-kubelet-pods-c642fb9d\x2d0374\x2d4a8c\x2dad84\x2de3fad82ae9a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 13:13:30.974920 kubelet[3612]: I1216 13:13:30.974878 3612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6" path="/var/lib/kubelet/pods/c3d8660b-c31d-4ae9-84fc-ed85b1aa10f6/volumes"
Dec 16 13:13:30.975632 kubelet[3612]: I1216 13:13:30.975277 3612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c642fb9d-0374-4a8c-ad84-e3fad82ae9a4" path="/var/lib/kubelet/pods/c642fb9d-0374-4a8c-ad84-e3fad82ae9a4/volumes"
Dec 16 13:13:31.073317 kubelet[3612]: E1216 13:13:31.073245 3612 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:13:31.350679 sshd[5173]: Connection closed by 139.178.68.195 port 41048
Dec 16 13:13:31.351852 sshd-session[5170]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:31.364885 systemd-logind[1960]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:13:31.365019 systemd[1]: sshd@25-172.31.28.132:22-139.178.68.195:41048.service: Deactivated successfully.
Dec 16 13:13:31.367506 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:13:31.369651 systemd-logind[1960]: Removed session 26.
Dec 16 13:13:31.381878 systemd[1]: Started sshd@26-172.31.28.132:22-139.178.68.195:45028.service - OpenSSH per-connection server daemon (139.178.68.195:45028).
Dec 16 13:13:31.573047 sshd[5322]: Accepted publickey for core from 139.178.68.195 port 45028 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:13:31.575293 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:31.581203 systemd-logind[1960]: New session 27 of user core.
Dec 16 13:13:31.586722 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:13:32.030326 ntpd[2163]: Deleting 10 lxc_health, [fe80::905c:cfff:fe1c:bb7d%8]:123, stats: received=0, sent=0, dropped=0, active_time=61 secs
Dec 16 13:13:32.030899 ntpd[2163]: 16 Dec 13:13:32 ntpd[2163]: Deleting 10 lxc_health, [fe80::905c:cfff:fe1c:bb7d%8]:123, stats: received=0, sent=0, dropped=0, active_time=61 secs
Dec 16 13:13:32.472290 sshd[5325]: Connection closed by 139.178.68.195 port 45028
Dec 16 13:13:32.472663 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:32.479917 systemd-logind[1960]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:13:32.481920 systemd[1]: sshd@26-172.31.28.132:22-139.178.68.195:45028.service: Deactivated successfully.
Dec 16 13:13:32.489994 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:13:32.517205 systemd-logind[1960]: Removed session 27.
Dec 16 13:13:32.523819 systemd[1]: Started sshd@27-172.31.28.132:22-139.178.68.195:45032.service - OpenSSH per-connection server daemon (139.178.68.195:45032).
Dec 16 13:13:32.543329 systemd[1]: Created slice kubepods-burstable-podceb72ea0_2e27_4219_b6ae_068c458e5b18.slice - libcontainer container kubepods-burstable-podceb72ea0_2e27_4219_b6ae_068c458e5b18.slice. Dec 16 13:13:32.663226 kubelet[3612]: I1216 13:13:32.663183 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmzl\" (UniqueName: \"kubernetes.io/projected/ceb72ea0-2e27-4219-b6ae-068c458e5b18-kube-api-access-jbmzl\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.663226 kubelet[3612]: I1216 13:13:32.663223 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceb72ea0-2e27-4219-b6ae-068c458e5b18-cilium-config-path\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663242 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ceb72ea0-2e27-4219-b6ae-068c458e5b18-cilium-ipsec-secrets\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663256 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ceb72ea0-2e27-4219-b6ae-068c458e5b18-hubble-tls\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663277 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-hostproc\") pod 
\"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663290 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-etc-cni-netd\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663302 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-host-proc-sys-net\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664072 kubelet[3612]: I1216 13:13:32.663316 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-cilium-run\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: I1216 13:13:32.663331 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-xtables-lock\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: I1216 13:13:32.663443 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-cilium-cgroup\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: 
I1216 13:13:32.663469 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-lib-modules\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: I1216 13:13:32.663511 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-bpf-maps\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: I1216 13:13:32.663528 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-cni-path\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664262 kubelet[3612]: I1216 13:13:32.663543 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ceb72ea0-2e27-4219-b6ae-068c458e5b18-clustermesh-secrets\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.664408 kubelet[3612]: I1216 13:13:32.663558 3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ceb72ea0-2e27-4219-b6ae-068c458e5b18-host-proc-sys-kernel\") pod \"cilium-cdjvn\" (UID: \"ceb72ea0-2e27-4219-b6ae-068c458e5b18\") " pod="kube-system/cilium-cdjvn" Dec 16 13:13:32.722584 sshd[5336]: Accepted publickey for core from 139.178.68.195 port 45032 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 
13:13:32.724191 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:32.729716 systemd-logind[1960]: New session 28 of user core. Dec 16 13:13:32.737724 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 16 13:13:32.855332 sshd[5339]: Connection closed by 139.178.68.195 port 45032 Dec 16 13:13:32.858214 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:32.858706 containerd[1982]: time="2025-12-16T13:13:32.858646963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdjvn,Uid:ceb72ea0-2e27-4219-b6ae-068c458e5b18,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:32.864899 systemd[1]: sshd@27-172.31.28.132:22-139.178.68.195:45032.service: Deactivated successfully. Dec 16 13:13:32.867557 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 13:13:32.869028 systemd-logind[1960]: Session 28 logged out. Waiting for processes to exit. Dec 16 13:13:32.871088 systemd-logind[1960]: Removed session 28. Dec 16 13:13:32.899137 containerd[1982]: time="2025-12-16T13:13:32.898656600Z" level=info msg="connecting to shim 71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:32.898881 systemd[1]: Started sshd@28-172.31.28.132:22-139.178.68.195:45044.service - OpenSSH per-connection server daemon (139.178.68.195:45044). Dec 16 13:13:32.926934 systemd[1]: Started cri-containerd-71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12.scope - libcontainer container 71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12. 
Dec 16 13:13:32.965469 containerd[1982]: time="2025-12-16T13:13:32.965414615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdjvn,Uid:ceb72ea0-2e27-4219-b6ae-068c458e5b18,Namespace:kube-system,Attempt:0,} returns sandbox id \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\"" Dec 16 13:13:32.974868 containerd[1982]: time="2025-12-16T13:13:32.974752286Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:13:32.987415 containerd[1982]: time="2025-12-16T13:13:32.987325366Z" level=info msg="Container c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:32.999620 containerd[1982]: time="2025-12-16T13:13:32.999562029Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a\"" Dec 16 13:13:33.001263 containerd[1982]: time="2025-12-16T13:13:33.000866235Z" level=info msg="StartContainer for \"c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a\"" Dec 16 13:13:33.002453 containerd[1982]: time="2025-12-16T13:13:33.002412999Z" level=info msg="connecting to shim c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" protocol=ttrpc version=3 Dec 16 13:13:33.031045 systemd[1]: Started cri-containerd-c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a.scope - libcontainer container c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a. 
Dec 16 13:13:33.072063 containerd[1982]: time="2025-12-16T13:13:33.072025190Z" level=info msg="StartContainer for \"c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a\" returns successfully" Dec 16 13:13:33.088750 sshd[5360]: Accepted publickey for core from 139.178.68.195 port 45044 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:33.090541 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:33.097660 systemd-logind[1960]: New session 29 of user core. Dec 16 13:13:33.104745 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 16 13:13:33.338901 systemd[1]: cri-containerd-c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a.scope: Deactivated successfully. Dec 16 13:13:33.339181 systemd[1]: cri-containerd-c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a.scope: Consumed 24ms CPU time, 9.6M memory peak, 3.1M read from disk. Dec 16 13:13:33.340235 containerd[1982]: time="2025-12-16T13:13:33.340196250Z" level=info msg="received container exit event container_id:\"c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a\" id:\"c7717ae0274a8396d1fcc01eea7ebb0381571ad1f176bcf0afbadc064fb1739a\" pid:5411 exited_at:{seconds:1765890813 nanos:339698349}" Dec 16 13:13:33.447055 kubelet[3612]: I1216 13:13:33.446742 3612 setters.go:543] "Node became not ready" node="ip-172-31-28-132" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:13:33Z","lastTransitionTime":"2025-12-16T13:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 13:13:33.513534 containerd[1982]: time="2025-12-16T13:13:33.513456260Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:13:33.527200 containerd[1982]: time="2025-12-16T13:13:33.527156832Z" level=info msg="Container 3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:33.537578 containerd[1982]: time="2025-12-16T13:13:33.537537235Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385\"" Dec 16 13:13:33.538440 containerd[1982]: time="2025-12-16T13:13:33.538408627Z" level=info msg="StartContainer for \"3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385\"" Dec 16 13:13:33.540248 containerd[1982]: time="2025-12-16T13:13:33.540201576Z" level=info msg="connecting to shim 3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" protocol=ttrpc version=3 Dec 16 13:13:33.559715 systemd[1]: Started cri-containerd-3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385.scope - libcontainer container 3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385. Dec 16 13:13:33.598470 containerd[1982]: time="2025-12-16T13:13:33.598322708Z" level=info msg="StartContainer for \"3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385\" returns successfully" Dec 16 13:13:33.783830 systemd[1]: cri-containerd-3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385.scope: Deactivated successfully. Dec 16 13:13:33.784515 systemd[1]: cri-containerd-3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385.scope: Consumed 22ms CPU time, 7.5M memory peak, 2.2M read from disk. 
Dec 16 13:13:33.785860 containerd[1982]: time="2025-12-16T13:13:33.784771113Z" level=info msg="received container exit event container_id:\"3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385\" id:\"3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385\" pid:5461 exited_at:{seconds:1765890813 nanos:784447466}" Dec 16 13:13:33.809437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3da716d7cfa7828b6195e1aab2f05f7251aa0b15d9c75b6664158b948a9fb385-rootfs.mount: Deactivated successfully. Dec 16 13:13:34.522515 containerd[1982]: time="2025-12-16T13:13:34.521947224Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:13:34.541334 containerd[1982]: time="2025-12-16T13:13:34.541274681Z" level=info msg="Container cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:34.565502 containerd[1982]: time="2025-12-16T13:13:34.565441227Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5\"" Dec 16 13:13:34.568340 containerd[1982]: time="2025-12-16T13:13:34.568297569Z" level=info msg="StartContainer for \"cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5\"" Dec 16 13:13:34.571733 containerd[1982]: time="2025-12-16T13:13:34.571690886Z" level=info msg="connecting to shim cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" protocol=ttrpc version=3 Dec 16 13:13:34.624033 systemd[1]: Started cri-containerd-cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5.scope - libcontainer container 
cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5. Dec 16 13:13:34.720542 containerd[1982]: time="2025-12-16T13:13:34.719276882Z" level=info msg="StartContainer for \"cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5\" returns successfully" Dec 16 13:13:34.802190 systemd[1]: cri-containerd-cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5.scope: Deactivated successfully. Dec 16 13:13:34.804221 containerd[1982]: time="2025-12-16T13:13:34.804182621Z" level=info msg="received container exit event container_id:\"cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5\" id:\"cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5\" pid:5504 exited_at:{seconds:1765890814 nanos:803762444}" Dec 16 13:13:34.834697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2cf236bd6f6636bb3c112e463acd8009b2440b147dc62b93cfd15afc6f1ba5-rootfs.mount: Deactivated successfully. Dec 16 13:13:35.533383 containerd[1982]: time="2025-12-16T13:13:35.533333319Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:13:35.549510 containerd[1982]: time="2025-12-16T13:13:35.548164284Z" level=info msg="Container 0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:35.566287 containerd[1982]: time="2025-12-16T13:13:35.566253817Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333\"" Dec 16 13:13:35.567221 containerd[1982]: time="2025-12-16T13:13:35.567136918Z" level=info msg="StartContainer for \"0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333\"" Dec 16 13:13:35.568600 containerd[1982]: 
time="2025-12-16T13:13:35.568529713Z" level=info msg="connecting to shim 0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" protocol=ttrpc version=3 Dec 16 13:13:35.606722 systemd[1]: Started cri-containerd-0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333.scope - libcontainer container 0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333. Dec 16 13:13:35.640724 systemd[1]: cri-containerd-0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333.scope: Deactivated successfully. Dec 16 13:13:35.644220 containerd[1982]: time="2025-12-16T13:13:35.644144214Z" level=info msg="received container exit event container_id:\"0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333\" id:\"0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333\" pid:5545 exited_at:{seconds:1765890815 nanos:642277056}" Dec 16 13:13:35.653531 containerd[1982]: time="2025-12-16T13:13:35.653426911Z" level=info msg="StartContainer for \"0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333\" returns successfully" Dec 16 13:13:35.670283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fb5b26c194ffe95edf3fe500235c58d9bb880440cbb2a52724a4f4181fb7333-rootfs.mount: Deactivated successfully. 
Dec 16 13:13:36.074608 kubelet[3612]: E1216 13:13:36.074537 3612 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:13:36.538248 containerd[1982]: time="2025-12-16T13:13:36.537648411Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:13:36.555436 containerd[1982]: time="2025-12-16T13:13:36.555303123Z" level=info msg="Container 0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:36.561349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151326085.mount: Deactivated successfully. Dec 16 13:13:36.567857 containerd[1982]: time="2025-12-16T13:13:36.567808801Z" level=info msg="CreateContainer within sandbox \"71003bf53477e3e269401fff39ddf56992ffe7e4f48ce6d0e27c7549c8b2fc12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5\"" Dec 16 13:13:36.568790 containerd[1982]: time="2025-12-16T13:13:36.568753840Z" level=info msg="StartContainer for \"0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5\"" Dec 16 13:13:36.569830 containerd[1982]: time="2025-12-16T13:13:36.569802355Z" level=info msg="connecting to shim 0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5" address="unix:///run/containerd/s/1aadfe26e53d331804a97488f5e79c2b222bd902123b74e3c5b3f6bd6c1f42cc" protocol=ttrpc version=3 Dec 16 13:13:36.592708 systemd[1]: Started cri-containerd-0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5.scope - libcontainer container 0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5. 
Dec 16 13:13:36.648642 containerd[1982]: time="2025-12-16T13:13:36.648569935Z" level=info msg="StartContainer for \"0bd2c032f9f5c7d1038dde7d729ea2ab4b253ed8b2276257200d784c886b33d5\" returns successfully" Dec 16 13:13:39.004562 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Dec 16 13:13:41.992011 systemd-networkd[1730]: lxc_health: Link UP Dec 16 13:13:41.995318 (udev-worker)[6150]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:13:41.999152 systemd-networkd[1730]: lxc_health: Gained carrier Dec 16 13:13:42.886420 kubelet[3612]: I1216 13:13:42.885765 3612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdjvn" podStartSLOduration=10.885745515 podStartE2EDuration="10.885745515s" podCreationTimestamp="2025-12-16 13:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:37.564538394 +0000 UTC m=+106.869061763" watchObservedRunningTime="2025-12-16 13:13:42.885745515 +0000 UTC m=+112.190268914" Dec 16 13:13:43.803913 systemd-networkd[1730]: lxc_health: Gained IPv6LL Dec 16 13:13:46.030375 ntpd[2163]: Listen normally on 13 lxc_health [fe80::54aa:a5ff:feb0:9113%14]:123 Dec 16 13:13:46.030822 ntpd[2163]: 16 Dec 13:13:46 ntpd[2163]: Listen normally on 13 lxc_health [fe80::54aa:a5ff:feb0:9113%14]:123 Dec 16 13:13:46.485791 kubelet[3612]: E1216 13:13:46.485754 3612 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59572->127.0.0.1:44219: write tcp 127.0.0.1:59572->127.0.0.1:44219: write: broken pipe Dec 16 13:13:48.628474 sshd[5425]: Connection closed by 139.178.68.195 port 45044 Dec 16 13:13:48.629714 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:48.641615 systemd-logind[1960]: Session 29 logged out. Waiting for processes to exit. 
Dec 16 13:13:48.642449 systemd[1]: sshd@28-172.31.28.132:22-139.178.68.195:45044.service: Deactivated successfully. Dec 16 13:13:48.644964 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 13:13:48.646965 systemd-logind[1960]: Removed session 29. Dec 16 13:13:50.930195 containerd[1982]: time="2025-12-16T13:13:50.929951199Z" level=info msg="StopPodSandbox for \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\"" Dec 16 13:13:50.930195 containerd[1982]: time="2025-12-16T13:13:50.930113894Z" level=info msg="TearDown network for sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" successfully" Dec 16 13:13:50.930195 containerd[1982]: time="2025-12-16T13:13:50.930128581Z" level=info msg="StopPodSandbox for \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" returns successfully" Dec 16 13:13:50.930805 containerd[1982]: time="2025-12-16T13:13:50.930693914Z" level=info msg="RemovePodSandbox for \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\"" Dec 16 13:13:50.930805 containerd[1982]: time="2025-12-16T13:13:50.930728641Z" level=info msg="Forcibly stopping sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\"" Dec 16 13:13:50.930884 containerd[1982]: time="2025-12-16T13:13:50.930844150Z" level=info msg="TearDown network for sandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" successfully" Dec 16 13:13:50.932299 containerd[1982]: time="2025-12-16T13:13:50.932268645Z" level=info msg="Ensure that sandbox 502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641 in task-service has been cleanup successfully" Dec 16 13:13:50.940847 containerd[1982]: time="2025-12-16T13:13:50.940801417Z" level=info msg="RemovePodSandbox \"502448c6c34b50550bde5e66a0b1f9ed843ba2237d3d0b0f9c4dde396ecb4641\" returns successfully" Dec 16 13:13:50.941469 containerd[1982]: time="2025-12-16T13:13:50.941420669Z" level=info msg="StopPodSandbox for 
\"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\"" Dec 16 13:13:50.941627 containerd[1982]: time="2025-12-16T13:13:50.941562371Z" level=info msg="TearDown network for sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" successfully" Dec 16 13:13:50.941627 containerd[1982]: time="2025-12-16T13:13:50.941574163Z" level=info msg="StopPodSandbox for \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" returns successfully" Dec 16 13:13:50.941888 containerd[1982]: time="2025-12-16T13:13:50.941868385Z" level=info msg="RemovePodSandbox for \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\"" Dec 16 13:13:50.942524 containerd[1982]: time="2025-12-16T13:13:50.942040315Z" level=info msg="Forcibly stopping sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\"" Dec 16 13:13:50.942524 containerd[1982]: time="2025-12-16T13:13:50.942155133Z" level=info msg="TearDown network for sandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" successfully" Dec 16 13:13:50.943355 containerd[1982]: time="2025-12-16T13:13:50.943285409Z" level=info msg="Ensure that sandbox fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8 in task-service has been cleanup successfully" Dec 16 13:13:50.949627 containerd[1982]: time="2025-12-16T13:13:50.949560730Z" level=info msg="RemovePodSandbox \"fb930096592894e3fdd95304d06c91333c6ab6df6046527c79b57e1ff18316b8\" returns successfully"