Dec 12 18:39:05.904715 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:39:05.904757 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:39:05.904776 kernel: BIOS-provided physical RAM map:
Dec 12 18:39:05.904787 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:39:05.904798 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 12 18:39:05.904809 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 12 18:39:05.904822 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 12 18:39:05.904834 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 12 18:39:05.904845 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 12 18:39:05.904858 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 12 18:39:05.904870 kernel: NX (Execute Disable) protection: active
Dec 12 18:39:05.904896 kernel: APIC: Static calls initialized
Dec 12 18:39:05.904907 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 12 18:39:05.904920 kernel: extended physical RAM map:
Dec 12 18:39:05.904935 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:39:05.904947 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 12 18:39:05.904963 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 12 18:39:05.904976 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 12 18:39:05.904988 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 12 18:39:05.905000 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 12 18:39:05.905013 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 12 18:39:05.905025 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 12 18:39:05.905039 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 12 18:39:05.905051 kernel: efi: EFI v2.7 by EDK II
Dec 12 18:39:05.905064 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 12 18:39:05.905076 kernel: secureboot: Secure boot disabled
Dec 12 18:39:05.905088 kernel: SMBIOS 2.7 present.
Dec 12 18:39:05.905104 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 12 18:39:05.905115 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:39:05.905128 kernel: Hypervisor detected: KVM
Dec 12 18:39:05.905139 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 12 18:39:05.905151 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:39:05.905163 kernel: kvm-clock: using sched offset of 5561291033 cycles
Dec 12 18:39:05.905177 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:39:05.905190 kernel: tsc: Detected 2499.996 MHz processor
Dec 12 18:39:05.905203 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:39:05.905216 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:39:05.905233 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 12 18:39:05.905245 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 12 18:39:05.905258 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:39:05.905276 kernel: Using GB pages for direct mapping
Dec 12 18:39:05.905290 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:39:05.905303 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 12 18:39:05.905318 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 12 18:39:05.905336 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 12 18:39:05.905351 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 12 18:39:05.905364 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 12 18:39:05.905379 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 12 18:39:05.905393 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 12 18:39:05.905407 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 12 18:39:05.905421 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 12 18:39:05.905435 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 12 18:39:05.905453 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 12 18:39:05.910174 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 12 18:39:05.910194 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 12 18:39:05.910209 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 12 18:39:05.910225 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 12 18:39:05.910241 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 12 18:39:05.910257 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 12 18:39:05.910273 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 12 18:39:05.910295 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 12 18:39:05.910311 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 12 18:39:05.910325 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 12 18:39:05.910340 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 12 18:39:05.910356 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 12 18:39:05.910371 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 12 18:39:05.910387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 12 18:39:05.910402 kernel: NUMA: Initialized distance table, cnt=1
Dec 12 18:39:05.910418 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 12 18:39:05.910439 kernel: Zone ranges:
Dec 12 18:39:05.910455 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:39:05.910486 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 12 18:39:05.910502 kernel: Normal empty
Dec 12 18:39:05.910518 kernel: Device empty
Dec 12 18:39:05.910534 kernel: Movable zone start for each node
Dec 12 18:39:05.910550 kernel: Early memory node ranges
Dec 12 18:39:05.910567 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 12 18:39:05.910583 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 12 18:39:05.910599 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 12 18:39:05.910619 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 12 18:39:05.910635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:39:05.910657 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 12 18:39:05.910673 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 12 18:39:05.910689 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 12 18:39:05.910704 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 12 18:39:05.910720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:39:05.910736 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 12 18:39:05.910752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:39:05.910771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:39:05.910787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:39:05.910803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:39:05.910820 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:39:05.910836 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:39:05.910851 kernel: TSC deadline timer available
Dec 12 18:39:05.910865 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:39:05.910880 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:39:05.910895 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:39:05.910912 kernel: CPU topo: Max. threads per core: 2
Dec 12 18:39:05.910927 kernel: CPU topo: Num. cores per package: 1
Dec 12 18:39:05.910941 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:39:05.910956 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:39:05.910971 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:39:05.910986 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 12 18:39:05.911001 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:39:05.911016 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:39:05.911032 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:39:05.911049 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:39:05.911065 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:39:05.911079 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:39:05.911094 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:39:05.911109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:39:05.911126 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:39:05.911141 kernel: random: crng init done
Dec 12 18:39:05.911156 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:39:05.911174 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 18:39:05.911189 kernel: Fallback order for Node 0: 0
Dec 12 18:39:05.911204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 12 18:39:05.911219 kernel: Policy zone: DMA32
Dec 12 18:39:05.911245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:39:05.911264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:39:05.911279 kernel: Kernel/User page tables isolation: enabled
Dec 12 18:39:05.911295 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:39:05.911311 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:39:05.911326 kernel: Dynamic Preempt: voluntary
Dec 12 18:39:05.911342 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:39:05.911359 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:39:05.911378 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:39:05.911395 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:39:05.911411 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:39:05.911427 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:39:05.911443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:39:05.911483 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:39:05.911499 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:39:05.911515 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:39:05.911531 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:39:05.911547 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:39:05.911563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:39:05.911579 kernel: Console: colour dummy device 80x25
Dec 12 18:39:05.911595 kernel: printk: legacy console [tty0] enabled
Dec 12 18:39:05.911611 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:39:05.911631 kernel: ACPI: Core revision 20240827
Dec 12 18:39:05.911646 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 12 18:39:05.911662 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:39:05.911678 kernel: x2apic enabled
Dec 12 18:39:05.911694 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:39:05.911710 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 12 18:39:05.911726 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 12 18:39:05.911742 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 12 18:39:05.911758 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 12 18:39:05.911776 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:39:05.911792 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:39:05.911808 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:39:05.911825 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 12 18:39:05.911842 kernel: RETBleed: Vulnerable
Dec 12 18:39:05.911859 kernel: Speculative Store Bypass: Vulnerable
Dec 12 18:39:05.911876 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:39:05.911892 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:39:05.911909 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 12 18:39:05.911926 kernel: active return thunk: its_return_thunk
Dec 12 18:39:05.911943 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 12 18:39:05.911963 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:39:05.911980 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:39:05.911997 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:39:05.912015 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 12 18:39:05.912032 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 12 18:39:05.912049 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 12 18:39:05.912065 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 12 18:39:05.912082 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 12 18:39:05.912099 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:39:05.912116 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:39:05.912135 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 12 18:39:05.912151 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 12 18:39:05.912168 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 12 18:39:05.912183 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 12 18:39:05.912198 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 12 18:39:05.912213 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 12 18:39:05.912229 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 12 18:39:05.912245 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:39:05.912260 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:39:05.912276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:39:05.912291 kernel: landlock: Up and running.
Dec 12 18:39:05.912307 kernel: SELinux: Initializing.
Dec 12 18:39:05.912326 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 18:39:05.912342 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 18:39:05.912358 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 12 18:39:05.912373 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 12 18:39:05.912389 kernel: signal: max sigframe size: 3632
Dec 12 18:39:05.912405 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:39:05.912422 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:39:05.912438 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:39:05.912454 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 12 18:39:05.912483 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:39:05.912503 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:39:05.912518 kernel: .... node #0, CPUs: #1
Dec 12 18:39:05.912535 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 12 18:39:05.912551 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 12 18:39:05.912567 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:39:05.912583 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 12 18:39:05.912600 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved)
Dec 12 18:39:05.912616 kernel: devtmpfs: initialized
Dec 12 18:39:05.912634 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:39:05.912650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 12 18:39:05.912666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:39:05.912683 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:39:05.912699 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:39:05.912715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:39:05.912730 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:39:05.912746 kernel: audit: type=2000 audit(1765564741.430:1): state=initialized audit_enabled=0 res=1
Dec 12 18:39:05.912762 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:39:05.912781 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:39:05.912796 kernel: cpuidle: using governor menu
Dec 12 18:39:05.912812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:39:05.912829 kernel: dca service started, version 1.12.1
Dec 12 18:39:05.912845 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:39:05.912861 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:39:05.912877 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:39:05.912901 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:39:05.912917 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:39:05.912936 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:39:05.912952 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:39:05.912968 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:39:05.912984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:39:05.913000 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 12 18:39:05.913015 kernel: ACPI: Interpreter enabled
Dec 12 18:39:05.913031 kernel: ACPI: PM: (supports S0 S5)
Dec 12 18:39:05.913047 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:39:05.913063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:39:05.913082 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:39:05.913098 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 12 18:39:05.913114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:39:05.913352 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:39:05.914570 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 12 18:39:05.914733 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 12 18:39:05.914755 kernel: acpiphp: Slot [3] registered
Dec 12 18:39:05.914776 kernel: acpiphp: Slot [4] registered
Dec 12 18:39:05.914792 kernel: acpiphp: Slot [5] registered
Dec 12 18:39:05.914807 kernel: acpiphp: Slot [6] registered
Dec 12 18:39:05.914823 kernel: acpiphp: Slot [7] registered
Dec 12 18:39:05.914837 kernel: acpiphp: Slot [8] registered
Dec 12 18:39:05.914853 kernel: acpiphp: Slot [9] registered
Dec 12 18:39:05.914868 kernel: acpiphp: Slot [10] registered
Dec 12 18:39:05.914884 kernel: acpiphp: Slot [11] registered
Dec 12 18:39:05.914899 kernel: acpiphp: Slot [12] registered
Dec 12 18:39:05.914914 kernel: acpiphp: Slot [13] registered
Dec 12 18:39:05.914932 kernel: acpiphp: Slot [14] registered
Dec 12 18:39:05.914948 kernel: acpiphp: Slot [15] registered
Dec 12 18:39:05.914963 kernel: acpiphp: Slot [16] registered
Dec 12 18:39:05.914978 kernel: acpiphp: Slot [17] registered
Dec 12 18:39:05.914992 kernel: acpiphp: Slot [18] registered
Dec 12 18:39:05.915007 kernel: acpiphp: Slot [19] registered
Dec 12 18:39:05.915021 kernel: acpiphp: Slot [20] registered
Dec 12 18:39:05.915036 kernel: acpiphp: Slot [21] registered
Dec 12 18:39:05.915051 kernel: acpiphp: Slot [22] registered
Dec 12 18:39:05.915069 kernel: acpiphp: Slot [23] registered
Dec 12 18:39:05.915085 kernel: acpiphp: Slot [24] registered
Dec 12 18:39:05.915099 kernel: acpiphp: Slot [25] registered
Dec 12 18:39:05.915115 kernel: acpiphp: Slot [26] registered
Dec 12 18:39:05.915130 kernel: acpiphp: Slot [27] registered
Dec 12 18:39:05.915145 kernel: acpiphp: Slot [28] registered
Dec 12 18:39:05.915162 kernel: acpiphp: Slot [29] registered
Dec 12 18:39:05.915177 kernel: acpiphp: Slot [30] registered
Dec 12 18:39:05.915192 kernel: acpiphp: Slot [31] registered
Dec 12 18:39:05.915207 kernel: PCI host bridge to bus 0000:00
Dec 12 18:39:05.915376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:39:05.916155 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:39:05.916298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:39:05.916422 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 12 18:39:05.918673 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 12 18:39:05.918858 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:39:05.919089 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:39:05.919285 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:39:05.919451 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 12 18:39:05.919613 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 12 18:39:05.919746 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 12 18:39:05.919875 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 12 18:39:05.920009 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 12 18:39:05.920137 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 12 18:39:05.920266 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 12 18:39:05.920400 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 12 18:39:05.922695 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Dec 12 18:39:05.922867 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:39:05.923008 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 12 18:39:05.923143 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 12 18:39:05.923285 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:39:05.923426 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 12 18:39:05.923584 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 12 18:39:05.923723 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 12 18:39:05.923852 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 12 18:39:05.923871 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:39:05.923893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:39:05.923907 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:39:05.923923 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:39:05.923937 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 12 18:39:05.923951 kernel: iommu: Default domain type: Translated
Dec 12 18:39:05.923966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:39:05.923981 kernel: efivars: Registered efivars operations
Dec 12 18:39:05.923996 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:39:05.924011 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:39:05.924030 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 12 18:39:05.924043 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 12 18:39:05.924058 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 12 18:39:05.924199 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 12 18:39:05.924331 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 12 18:39:05.926508 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:39:05.926545 kernel: vgaarb: loaded
Dec 12 18:39:05.926560 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 12 18:39:05.926575 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 12 18:39:05.926596 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:39:05.926610 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:39:05.926624 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:39:05.926638 kernel: pnp: PnP ACPI init
Dec 12 18:39:05.926652 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:39:05.926666 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:39:05.926681 kernel: NET: Registered PF_INET protocol family
Dec 12 18:39:05.926695 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:39:05.926709 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 12 18:39:05.926727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:39:05.926741 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:39:05.926755 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 12 18:39:05.926768 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 12 18:39:05.926782 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 18:39:05.926796 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 18:39:05.926811 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:39:05.926824 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:39:05.926985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:39:05.927111 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:39:05.927228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:39:05.927343 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 12 18:39:05.927468 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 12 18:39:05.927607 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 12 18:39:05.927626 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:39:05.927640 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 12 18:39:05.927654 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 12 18:39:05.927671 kernel: clocksource: Switched to clocksource tsc
Dec 12 18:39:05.927685 kernel: Initialise system trusted keyrings
Dec 12 18:39:05.927699 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 12 18:39:05.927713 kernel: Key type asymmetric registered
Dec 12 18:39:05.927727 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:39:05.927741 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:39:05.927756 kernel: io scheduler mq-deadline registered
Dec 12 18:39:05.927771 kernel: io scheduler kyber registered
Dec 12 18:39:05.928273 kernel: io scheduler bfq registered
Dec 12 18:39:05.928297 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:39:05.928313 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:39:05.928330 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:39:05.928346 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:39:05.928362 kernel: i8042: Warning: Keylock active
Dec 12 18:39:05.928376 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:39:05.928389 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:39:05.928590 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 12 18:39:05.928730 kernel: rtc_cmos 00:00: registered as rtc0
Dec 12 18:39:05.928864 kernel: rtc_cmos 00:00: setting system clock to 2025-12-12T18:39:05 UTC (1765564745)
Dec 12 18:39:05.929036 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 12 18:39:05.929062 kernel: intel_pstate: CPU model not supported
Dec 12 18:39:05.929080 kernel: efifb: probing for efifb
Dec 12 18:39:05.929098 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 12 18:39:05.929115 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 12 18:39:05.929133 kernel: efifb: scrolling: redraw
Dec 12 18:39:05.929154 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 18:39:05.929172 kernel: Console: switching to colour frame buffer device 100x37
Dec 12 18:39:05.929189 kernel: fb0: EFI VGA frame buffer device
Dec 12 18:39:05.929209 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:39:05.929227 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:39:05.929244 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:39:05.929265 kernel: Segment Routing with IPv6
Dec 12 18:39:05.929283 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:39:05.929300 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:39:05.929318 kernel: Key type dns_resolver registered
Dec 12 18:39:05.929339 kernel: IPI shorthand broadcast: enabled
Dec 12 18:39:05.929356 kernel: sched_clock: Marking stable (4152002363, 408974807)->(4868449186, -307472016)
Dec 12 18:39:05.929373 kernel: registered taskstats version 1
Dec 12 18:39:05.929391 kernel: Loading compiled-in X.509 certificates
Dec 12 18:39:05.929409 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:39:05.929427 kernel: Demotion targets for Node 0: null
Dec 12 18:39:05.929444 kernel: Key type .fscrypt registered
Dec 12 18:39:05.929488 kernel: Key type fscrypt-provisioning registered
Dec 12 18:39:05.929504 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:39:05.929524 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:39:05.929540 kernel: ima: No architecture policies found
Dec 12 18:39:05.929556 kernel: clk: Disabling unused clocks
Dec 12 18:39:05.929573 kernel: Warning: unable to open an initial console.
Dec 12 18:39:05.929592 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:39:05.929611 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:39:05.929626 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:39:05.929643 kernel: Run /init as init process
Dec 12 18:39:05.929660 kernel: with arguments:
Dec 12 18:39:05.929676 kernel: /init
Dec 12 18:39:05.929693 kernel: with environment:
Dec 12 18:39:05.929709 kernel: HOME=/
Dec 12 18:39:05.929726 kernel: TERM=linux
Dec 12 18:39:05.929746 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:39:05.929773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:39:05.929792 systemd[1]: Detected virtualization amazon.
Dec 12 18:39:05.929809 systemd[1]: Detected architecture x86-64.
Dec 12 18:39:05.929827 systemd[1]: Running in initrd.
Dec 12 18:39:05.929845 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:39:05.929864 systemd[1]: Hostname set to .
Dec 12 18:39:05.929883 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:39:05.929904 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:39:05.929923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:39:05.929943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:39:05.929963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:39:05.929982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:39:05.930000 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:39:05.930020 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:39:05.930044 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:39:05.930064 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:39:05.930082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:39:05.930101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:39:05.930119 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:39:05.930137 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:39:05.930155 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:39:05.930173 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:39:05.930195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:39:05.930214 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:39:05.930232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:39:05.930251 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:39:05.930269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:39:05.930288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:39:05.930306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:39:05.930324 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:39:05.930343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:39:05.930365 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:39:05.930384 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:39:05.930403 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:39:05.930421 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:39:05.930439 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:39:05.933526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:39:05.933571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:39:05.933591 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:39:05.933615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:39:05.933637 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:39:05.933699 systemd-journald[188]: Collecting audit messages is disabled.
Dec 12 18:39:05.933745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:39:05.933764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:39:05.933784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:39:05.933803 systemd-journald[188]: Journal started
Dec 12 18:39:05.933843 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2ed9feb9c764cdcffd4c6b888fc8c6) is 4.7M, max 38.1M, 33.3M free.
Dec 12 18:39:05.930514 systemd-modules-load[189]: Inserted module 'overlay'
Dec 12 18:39:05.939511 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:39:05.954607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:39:05.960894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:39:05.968635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:39:05.983803 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:39:05.983841 kernel: Bridge firewalling registered
Dec 12 18:39:05.975885 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 12 18:39:05.987308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:39:05.990067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:39:05.995655 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:39:06.000777 systemd-tmpfiles[210]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:39:06.010876 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:39:06.003676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:39:06.018123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:39:06.020511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:39:06.030686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:39:06.035600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:39:06.041279 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:39:06.095436 systemd-resolved[234]: Positive Trust Anchors:
Dec 12 18:39:06.096488 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:39:06.096561 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:39:06.103760 systemd-resolved[234]: Defaulting to hostname 'linux'.
Dec 12 18:39:06.107156 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:39:06.107892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:39:06.143497 kernel: SCSI subsystem initialized
Dec 12 18:39:06.153495 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:39:06.164494 kernel: iscsi: registered transport (tcp)
Dec 12 18:39:06.186819 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:39:06.186910 kernel: QLogic iSCSI HBA Driver
Dec 12 18:39:06.206294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:39:06.227367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:39:06.229270 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:39:06.276230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:39:06.278769 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:39:06.335496 kernel: raid6: avx512x4 gen() 18379 MB/s
Dec 12 18:39:06.353487 kernel: raid6: avx512x2 gen() 18312 MB/s
Dec 12 18:39:06.371489 kernel: raid6: avx512x1 gen() 18232 MB/s
Dec 12 18:39:06.389486 kernel: raid6: avx2x4 gen() 18199 MB/s
Dec 12 18:39:06.407487 kernel: raid6: avx2x2 gen() 18219 MB/s
Dec 12 18:39:06.425834 kernel: raid6: avx2x1 gen() 13719 MB/s
Dec 12 18:39:06.425891 kernel: raid6: using algorithm avx512x4 gen() 18379 MB/s
Dec 12 18:39:06.444700 kernel: raid6: .... xor() 7717 MB/s, rmw enabled
Dec 12 18:39:06.444759 kernel: raid6: using avx512x2 recovery algorithm
Dec 12 18:39:06.466512 kernel: xor: automatically using best checksumming function avx
Dec 12 18:39:06.635493 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:39:06.642795 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:39:06.645654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:39:06.675841 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 12 18:39:06.682606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:39:06.687637 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:39:06.711960 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Dec 12 18:39:06.739621 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:39:06.741874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:39:06.799106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:39:06.802644 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:39:06.881152 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 12 18:39:06.881418 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 12 18:39:06.892613 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 12 18:39:06.907495 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:36:c4:86:da:bd
Dec 12 18:39:06.912495 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:39:06.913237 (udev-worker)[488]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:39:06.927491 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:39:06.934851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:39:06.935048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:39:06.939709 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:39:06.943157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:39:06.953627 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:39:06.973915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:39:06.977369 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 12 18:39:06.976642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:39:06.980570 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 12 18:39:06.980743 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:39:06.985991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:39:06.996113 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:39:06.996203 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 12 18:39:07.004512 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:39:07.004587 kernel: GPT:9289727 != 33554431
Dec 12 18:39:07.004601 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:39:07.005271 kernel: GPT:9289727 != 33554431
Dec 12 18:39:07.007121 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:39:07.007178 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:39:07.025149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:39:07.046514 kernel: nvme nvme0: using unchecked data buffer
Dec 12 18:39:07.166885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 12 18:39:07.237113 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 12 18:39:07.248601 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:39:07.258716 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 12 18:39:07.259292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 12 18:39:07.271836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 12 18:39:07.272673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:39:07.274400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:39:07.275794 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:39:07.277655 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:39:07.281635 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:39:07.299438 disk-uuid[673]: Primary Header is updated.
Dec 12 18:39:07.299438 disk-uuid[673]: Secondary Entries is updated.
Dec 12 18:39:07.299438 disk-uuid[673]: Secondary Header is updated.
Dec 12 18:39:07.305527 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:39:07.307828 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:39:08.320490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:39:08.322251 disk-uuid[676]: The operation has completed successfully.
Dec 12 18:39:08.478869 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:39:08.479005 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:39:08.500556 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:39:08.518569 sh[941]: Success
Dec 12 18:39:08.549768 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:39:08.549849 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:39:08.552554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:39:08.563488 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 12 18:39:08.681298 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:39:08.696597 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:39:08.729919 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:39:08.748487 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (964)
Dec 12 18:39:08.751493 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:39:08.751555 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:39:08.866538 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:39:08.866624 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:39:08.866647 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:39:08.883273 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:39:08.884236 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:39:08.885652 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:39:08.886430 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:39:08.888062 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:39:08.922572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (994)
Dec 12 18:39:08.926006 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:39:08.926071 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:39:08.944813 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:39:08.944990 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:39:08.954489 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:39:08.955711 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:39:08.957936 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:39:09.000043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:39:09.003381 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:39:09.051786 systemd-networkd[1133]: lo: Link UP
Dec 12 18:39:09.051800 systemd-networkd[1133]: lo: Gained carrier
Dec 12 18:39:09.053681 systemd-networkd[1133]: Enumeration completed
Dec 12 18:39:09.054085 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:39:09.054090 systemd-networkd[1133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:39:09.055485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:39:09.056261 systemd[1]: Reached target network.target - Network.
Dec 12 18:39:09.062121 systemd-networkd[1133]: eth0: Link UP
Dec 12 18:39:09.062127 systemd-networkd[1133]: eth0: Gained carrier
Dec 12 18:39:09.062149 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:39:09.075593 systemd-networkd[1133]: eth0: DHCPv4 address 172.31.29.16/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 12 18:39:09.508294 ignition[1078]: Ignition 2.22.0
Dec 12 18:39:09.508314 ignition[1078]: Stage: fetch-offline
Dec 12 18:39:09.508589 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:39:09.508602 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:39:09.509376 ignition[1078]: Ignition finished successfully
Dec 12 18:39:09.512022 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:39:09.513792 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:39:09.549384 ignition[1143]: Ignition 2.22.0
Dec 12 18:39:09.549400 ignition[1143]: Stage: fetch
Dec 12 18:39:09.549799 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:39:09.549812 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:39:09.549927 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:39:09.589283 ignition[1143]: PUT result: OK
Dec 12 18:39:09.591401 ignition[1143]: parsed url from cmdline: ""
Dec 12 18:39:09.591412 ignition[1143]: no config URL provided
Dec 12 18:39:09.591423 ignition[1143]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:39:09.591440 ignition[1143]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:39:09.591486 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:39:09.592262 ignition[1143]: PUT result: OK
Dec 12 18:39:09.592320 ignition[1143]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 12 18:39:09.593121 ignition[1143]: GET result: OK
Dec 12 18:39:09.593224 ignition[1143]: parsing config with SHA512: 5c9edea854aa62cddc3f934ada80446511b1363a021e5ad9fcd36ea613c49b388724e01bf9abf703e29a0e57d29e3e9dc80a5bf7f95ac03756285f303877f452
Dec 12 18:39:09.598099 unknown[1143]: fetched base config from "system"
Dec 12 18:39:09.599073 ignition[1143]: fetch: fetch complete
Dec 12 18:39:09.598116 unknown[1143]: fetched base config from "system"
Dec 12 18:39:09.599095 ignition[1143]: fetch: fetch passed
Dec 12 18:39:09.598145 unknown[1143]: fetched user config from "aws"
Dec 12 18:39:09.599182 ignition[1143]: Ignition finished successfully
Dec 12 18:39:09.602124 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:39:09.604273 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:39:09.643257 ignition[1150]: Ignition 2.22.0
Dec 12 18:39:09.643277 ignition[1150]: Stage: kargs
Dec 12 18:39:09.643680 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:39:09.643694 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:39:09.643809 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:39:09.644568 ignition[1150]: PUT result: OK
Dec 12 18:39:09.647253 ignition[1150]: kargs: kargs passed
Dec 12 18:39:09.647331 ignition[1150]: Ignition finished successfully
Dec 12 18:39:09.649685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:39:09.651163 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:39:09.678121 ignition[1156]: Ignition 2.22.0
Dec 12 18:39:09.678138 ignition[1156]: Stage: disks
Dec 12 18:39:09.678587 ignition[1156]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:39:09.678600 ignition[1156]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:39:09.678714 ignition[1156]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:39:09.679993 ignition[1156]: PUT result: OK
Dec 12 18:39:09.683436 ignition[1156]: disks: disks passed
Dec 12 18:39:09.683550 ignition[1156]: Ignition finished successfully
Dec 12 18:39:09.686114 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:39:09.686804 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:39:09.687217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:39:09.687810 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:39:09.688393 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:39:09.689209 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:39:09.691063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:39:09.745321 systemd-fsck[1164]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:39:09.750055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:39:09.751883 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:39:09.914484 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:39:09.915070 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:39:09.915978 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:39:09.918162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:39:09.919939 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:39:09.922288 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:39:09.923045 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:39:09.923544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:39:09.928979 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:39:09.931135 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:39:09.945502 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1183)
Dec 12 18:39:09.948626 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:39:09.948690 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:39:09.956608 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:39:09.956686 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:39:09.959483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:39:10.364797 initrd-setup-root[1207]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:39:10.381621 initrd-setup-root[1214]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:39:10.393120 initrd-setup-root[1221]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:39:10.409169 initrd-setup-root[1228]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:39:10.424655 systemd-networkd[1133]: eth0: Gained IPv6LL
Dec 12 18:39:10.655102 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:39:10.657777 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:39:10.660623 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:39:10.677438 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:39:10.679740 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:39:10.709647 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:39:10.718026 ignition[1296]: INFO : Ignition 2.22.0
Dec 12 18:39:10.718026 ignition[1296]: INFO : Stage: mount
Dec 12 18:39:10.719755 ignition[1296]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:39:10.719755 ignition[1296]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:39:10.719755 ignition[1296]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:39:10.719755 ignition[1296]: INFO : PUT result: OK
Dec 12 18:39:10.722209 ignition[1296]: INFO : mount: mount passed
Dec 12 18:39:10.722744 ignition[1296]: INFO : Ignition finished successfully
Dec 12 18:39:10.724370 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:39:10.725999 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:39:10.917110 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:39:10.949494 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1307)
Dec 12 18:39:10.952640 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:39:10.952711 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:39:10.962018 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:39:10.962104 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:39:10.964138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:39:10.997061 ignition[1323]: INFO : Ignition 2.22.0 Dec 12 18:39:10.997061 ignition[1323]: INFO : Stage: files Dec 12 18:39:10.998499 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:39:10.998499 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 18:39:10.998499 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 18:39:10.998499 ignition[1323]: INFO : PUT result: OK Dec 12 18:39:11.001187 ignition[1323]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:39:11.002442 ignition[1323]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:39:11.002442 ignition[1323]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:39:11.015982 ignition[1323]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:39:11.016799 ignition[1323]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:39:11.016799 ignition[1323]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:39:11.016503 unknown[1323]: wrote ssh authorized keys file for user: core Dec 12 18:39:11.019583 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:39:11.020222 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:39:11.090349 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:39:11.310800 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:39:11.310800 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Dec 12 18:39:11.312828 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 12 18:39:11.522928 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 18:39:11.641663 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 18:39:11.641663 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:39:11.644029 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:39:11.649733 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:39:11.649733 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:39:11.649733 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:39:11.652708 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:39:11.652708 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:39:11.652708 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 12 18:39:11.897179 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 18:39:12.166055 ignition[1323]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:39:12.166055 ignition[1323]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 18:39:12.175228 ignition[1323]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:39:12.180129 ignition[1323]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:39:12.180129 ignition[1323]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 18:39:12.180129 ignition[1323]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:39:12.182899 ignition[1323]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:39:12.182899 
ignition[1323]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:39:12.182899 ignition[1323]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:39:12.182899 ignition[1323]: INFO : files: files passed Dec 12 18:39:12.182899 ignition[1323]: INFO : Ignition finished successfully Dec 12 18:39:12.182227 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:39:12.185585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:39:12.188366 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:39:12.197727 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:39:12.198593 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:39:12.205084 initrd-setup-root-after-ignition[1354]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:39:12.206502 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:39:12.207942 initrd-setup-root-after-ignition[1354]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:39:12.208548 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:39:12.209381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:39:12.211041 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:39:12.254931 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:39:12.255087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:39:12.256680 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
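[Editor's note] The Ignition "files" stage above writes each file, link, and unit preset it was given. As a hedged sketch only: a Butane config along these lines could produce those operations. The URLs, paths, and unit name are taken from the log itself; the variant/version and overall shape are assumptions, not recovered from this machine.

```yaml
# Sketch (assumed Butane variant/version); sources and paths copied from the log.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
```

Ignition transpiles such a config to JSON and replays it against /sysroot in the initrd, which is why every path in the log is prefixed with /sysroot.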
Dec 12 18:39:12.257819 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:39:12.258711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:39:12.259872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:39:12.285924 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:39:12.288124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:39:12.313750 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:39:12.314443 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:39:12.315566 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:39:12.316475 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:39:12.316715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:39:12.318034 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:39:12.318929 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:39:12.319779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:39:12.320572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:39:12.321498 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:39:12.322223 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:39:12.323069 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:39:12.323857 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:39:12.324668 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:39:12.325898 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Dec 12 18:39:12.326702 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:39:12.327440 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:39:12.327683 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:39:12.328731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:39:12.329762 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:39:12.330388 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:39:12.330528 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:39:12.331224 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:39:12.331437 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:39:12.332770 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:39:12.333033 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:39:12.333851 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:39:12.334045 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:39:12.336585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:39:12.339799 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:39:12.340294 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:39:12.340547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:39:12.344365 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:39:12.345414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:39:12.353090 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:39:12.353233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 12 18:39:12.379396 ignition[1378]: INFO : Ignition 2.22.0 Dec 12 18:39:12.379396 ignition[1378]: INFO : Stage: umount Dec 12 18:39:12.380564 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:39:12.381744 ignition[1378]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:39:12.381744 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 18:39:12.382876 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 18:39:12.383379 ignition[1378]: INFO : PUT result: OK Dec 12 18:39:12.386173 ignition[1378]: INFO : umount: umount passed Dec 12 18:39:12.387854 ignition[1378]: INFO : Ignition finished successfully Dec 12 18:39:12.388571 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:39:12.388729 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:39:12.390282 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:39:12.390400 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:39:12.390957 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:39:12.391020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:39:12.391672 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:39:12.391736 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:39:12.392338 systemd[1]: Stopped target network.target - Network. Dec 12 18:39:12.393134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:39:12.393206 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:39:12.393894 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:39:12.394505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:39:12.398522 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 12 18:39:12.398913 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:39:12.400349 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:39:12.401278 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:39:12.401342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:39:12.401952 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:39:12.402005 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:39:12.402603 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:39:12.402682 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:39:12.403312 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:39:12.403374 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:39:12.404148 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:39:12.404801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:39:12.410718 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:39:12.410868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:39:12.415121 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 18:39:12.415529 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:39:12.415676 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:39:12.418073 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:39:12.418930 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:39:12.419662 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:39:12.419713 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:39:12.421607 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 12 18:39:12.422119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:39:12.422194 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:39:12.422825 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:39:12.422885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:39:12.423553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:39:12.423612 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:39:12.424417 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:39:12.424556 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:39:12.425529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:39:12.429786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:39:12.429884 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:39:12.443660 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:39:12.443919 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:39:12.445833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:39:12.445911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:39:12.447570 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:39:12.447623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:39:12.449715 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:39:12.449788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:39:12.452799 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 12 18:39:12.453014 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:39:12.454129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:39:12.454200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:39:12.457567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:39:12.459447 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:39:12.459626 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:39:12.461712 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:39:12.461783 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:39:12.463264 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:39:12.463328 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:39:12.466080 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:39:12.466146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:39:12.466797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:39:12.466854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:39:12.470404 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:39:12.470502 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 18:39:12.470555 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:39:12.470607 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 12 18:39:12.471166 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:39:12.471300 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:39:12.479833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:39:12.479973 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:39:12.520738 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:39:12.520944 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:39:12.522124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:39:12.522660 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:39:12.522721 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:39:12.524179 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:39:12.561161 systemd[1]: Switching root. Dec 12 18:39:12.608043 systemd-journald[188]: Journal stopped Dec 12 18:39:15.095070 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
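[Editor's note] The span from the start of the Ignition "files" stage to the moment the initrd journal stops at "Switching root" can be read straight off the timestamps in this log. A minimal sketch of that arithmetic (the log omits the year, so strptime defaults it; only the delta matters):

```python
from datetime import datetime

# Timestamp format used by these journal lines: "Dec 12 18:39:10.997061"
FMT = "%b %d %H:%M:%S.%f"

# Both timestamps are copied verbatim from this log.
files_stage = datetime.strptime("Dec 12 18:39:10.997061", FMT)  # Ignition "Stage: files"
switch_root = datetime.strptime("Dec 12 18:39:12.608043", FMT)  # journald stopped at switch root

delta = (switch_root - files_stage).total_seconds()
print(f"files stage to switch-root: {delta:.6f} s")  # ~1.61 s
```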
Dec 12 18:39:15.095166 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:39:15.095188 kernel: SELinux: policy capability open_perms=1 Dec 12 18:39:15.095207 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:39:15.095225 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:39:15.095258 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:39:15.095277 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:39:15.095295 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:39:15.095314 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:39:15.095335 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:39:15.095356 kernel: audit: type=1403 audit(1765564753.543:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:39:15.095379 systemd[1]: Successfully loaded SELinux policy in 91.990ms. Dec 12 18:39:15.095420 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.317ms. Dec 12 18:39:15.095444 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:39:15.097514 systemd[1]: Detected virtualization amazon. Dec 12 18:39:15.097553 systemd[1]: Detected architecture x86-64. Dec 12 18:39:15.097580 systemd[1]: Detected first boot. Dec 12 18:39:15.097600 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:39:15.097619 zram_generator::config[1423]: No configuration found. 
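[Editor's note] The "systemd 256.8 running in system mode (...)" line above encodes compile-time features as +NAME/-NAME tokens. A small sketch of splitting that string into enabled and disabled sets (the feature string is copied verbatim from the log line):

```python
# Feature string from the "systemd 256.8 running in system mode" line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
```

Consistent with the log: SELinux is compiled in (and a policy loads above), while AppArmor support is compiled out.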
Dec 12 18:39:15.097645 kernel: Guest personality initialized and is inactive Dec 12 18:39:15.097666 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:39:15.097685 kernel: Initialized host personality Dec 12 18:39:15.097708 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:39:15.097725 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:39:15.097745 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:39:15.097764 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:39:15.097782 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:39:15.097801 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:39:15.097820 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:39:15.097839 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:39:15.097857 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:39:15.097879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:39:15.097902 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:39:15.097921 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:39:15.097939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:39:15.097959 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:39:15.097978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:39:15.097998 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:39:15.098017 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 12 18:39:15.098038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:39:15.098057 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:39:15.098076 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:39:15.098094 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:39:15.098112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:39:15.098131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:39:15.098149 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:39:15.098168 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:39:15.098190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:39:15.098208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:39:15.098226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:39:15.098245 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:39:15.098263 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:39:15.098282 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:39:15.098301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:39:15.098319 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:39:15.098337 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:39:15.098359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:39:15.098377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:39:15.098398 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 12 18:39:15.098416 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:39:15.098435 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:39:15.098453 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:39:15.098882 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:39:15.098904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:15.098924 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:39:15.098947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:39:15.098966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:39:15.098986 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:39:15.099006 systemd[1]: Reached target machines.target - Containers. Dec 12 18:39:15.099026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:39:15.099045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:39:15.099063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:39:15.099082 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:39:15.099115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:39:15.099133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:39:15.099150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:39:15.099168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 12 18:39:15.099186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:39:15.099207 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:39:15.099661 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:39:15.099691 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:39:15.099713 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:39:15.099740 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:39:15.099763 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:39:15.099784 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:39:15.099810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:39:15.099832 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:39:15.099855 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:39:15.099874 kernel: loop: module loaded Dec 12 18:39:15.099892 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:39:15.099912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:39:15.099930 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:39:15.099953 systemd[1]: Stopped verity-setup.service. Dec 12 18:39:15.099973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 12 18:39:15.099992 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:39:15.100011 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:39:15.100031 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:39:15.100051 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:39:15.100070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:39:15.100089 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:39:15.100108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:39:15.100130 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:39:15.100150 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:39:15.100170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:39:15.100190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:39:15.100209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:39:15.100229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:39:15.100248 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:39:15.100267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:39:15.100290 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:39:15.100310 kernel: ACPI: bus type drm_connector registered Dec 12 18:39:15.100328 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:39:15.100349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:39:15.100369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:39:15.100388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
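[Editor's note] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop units above are all instances of one systemd template unit. As a hedged sketch only (this is an approximation of the upstream modprobe@.service shipped with systemd, not the unit file recovered from this host):

```ini
# Approximate shape of systemd's modprobe@.service template.
# Each "modprobe@NAME.service" instance substitutes NAME for %i.
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=-/usr/sbin/modprobe -abq %i
```

That is why each module produces a paired "Deactivated successfully" / "Finished modprobe@NAME.service" message: the oneshot instance runs modprobe and exits.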
Dec 12 18:39:15.100410 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:39:15.100430 kernel: fuse: init (API version 7.41) Dec 12 18:39:15.100449 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:39:15.100484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:39:15.100505 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:39:15.100522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:39:15.100540 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:39:15.100599 systemd-journald[1509]: Collecting audit messages is disabled. Dec 12 18:39:15.100644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:39:15.100664 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:39:15.100683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:39:15.100702 systemd-journald[1509]: Journal started Dec 12 18:39:15.100738 systemd-journald[1509]: Runtime Journal (/run/log/journal/ec2ed9feb9c764cdcffd4c6b888fc8c6) is 4.7M, max 38.1M, 33.3M free. Dec 12 18:39:14.645096 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:39:14.669082 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 12 18:39:14.669611 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:39:15.106535 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:39:15.110498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 12 18:39:15.119096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:39:15.124497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:39:15.145547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:39:15.145648 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:39:15.144856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:39:15.150178 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:39:15.152591 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:39:15.154173 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:39:15.155894 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:39:15.199022 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:39:15.207496 kernel: loop0: detected capacity change from 0 to 219144 Dec 12 18:39:15.206157 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:39:15.207756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:39:15.213840 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:39:15.227033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:39:15.230590 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Dec 12 18:39:15.231004 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Dec 12 18:39:15.236547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:39:15.249450 systemd-journald[1509]: Time spent on flushing to /var/log/journal/ec2ed9feb9c764cdcffd4c6b888fc8c6 is 32.864ms for 1033 entries. 
Dec 12 18:39:15.249450 systemd-journald[1509]: System Journal (/var/log/journal/ec2ed9feb9c764cdcffd4c6b888fc8c6) is 8M, max 195.6M, 187.6M free. Dec 12 18:39:15.292043 systemd-journald[1509]: Received client request to flush runtime journal. Dec 12 18:39:15.248650 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:39:15.253696 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:39:15.256100 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:39:15.295185 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:39:15.361203 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:39:15.365624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:39:15.395389 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Dec 12 18:39:15.395420 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. Dec 12 18:39:15.401527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:39:15.406501 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:39:15.442484 kernel: loop1: detected capacity change from 0 to 128560 Dec 12 18:39:15.562620 kernel: loop2: detected capacity change from 0 to 110984 Dec 12 18:39:15.672810 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:39:15.679596 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:39:15.695803 kernel: loop3: detected capacity change from 0 to 72368 Dec 12 18:39:15.703068 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 12 18:39:15.832494 kernel: loop4: detected capacity change from 0 to 219144 Dec 12 18:39:15.864490 kernel: loop5: detected capacity change from 0 to 128560 Dec 12 18:39:15.893500 kernel: loop6: detected capacity change from 0 to 110984 Dec 12 18:39:15.918523 kernel: loop7: detected capacity change from 0 to 72368 Dec 12 18:39:15.934883 (sd-merge)[1583]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 12 18:39:15.935611 (sd-merge)[1583]: Merged extensions into '/usr'. Dec 12 18:39:15.942655 systemd[1]: Reload requested from client PID 1537 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:39:15.942831 systemd[1]: Reloading... Dec 12 18:39:16.065494 zram_generator::config[1609]: No configuration found. Dec 12 18:39:16.327625 systemd[1]: Reloading finished in 383 ms. Dec 12 18:39:16.350321 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:39:16.351437 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:39:16.363781 systemd[1]: Starting ensure-sysext.service... Dec 12 18:39:16.367625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:39:16.370611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:39:16.397507 systemd[1]: Reload requested from client PID 1661 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:39:16.397527 systemd[1]: Reloading... Dec 12 18:39:16.420985 systemd-udevd[1663]: Using default interface naming scheme 'v255'. Dec 12 18:39:16.432783 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:39:16.433916 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Dec 12 18:39:16.434252 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:39:16.434576 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:39:16.435437 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:39:16.436946 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 12 18:39:16.437040 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 12 18:39:16.450402 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:39:16.450417 systemd-tmpfiles[1662]: Skipping /boot Dec 12 18:39:16.474105 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:39:16.474121 systemd-tmpfiles[1662]: Skipping /boot Dec 12 18:39:16.512495 zram_generator::config[1689]: No configuration found. Dec 12 18:39:16.805377 (udev-worker)[1727]: Network interface NamePolicy= disabled on kernel command line. Dec 12 18:39:16.922714 ldconfig[1533]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:39:16.931554 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:39:16.940499 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:39:16.952509 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:39:16.957489 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 12 18:39:16.961499 kernel: ACPI: button: Sleep Button [SLPF] Dec 12 18:39:16.972483 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 12 18:39:17.110668 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:39:17.110953 systemd[1]: Reloading finished in 712 ms. 
Dec 12 18:39:17.125750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:39:17.128726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:39:17.130048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:39:17.159743 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:39:17.166774 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:39:17.170063 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:39:17.174693 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:39:17.181251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:39:17.184385 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:39:17.195746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.196042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:39:17.201960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:39:17.209586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:39:17.223005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:39:17.223808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:39:17.223979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 12 18:39:17.224127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.230244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.230890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:39:17.231311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:39:17.231439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:39:17.237865 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:39:17.238435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.248067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.248449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:39:17.257900 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:39:17.259376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:39:17.259590 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 12 18:39:17.259862 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:39:17.261706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:39:17.270746 systemd[1]: Finished ensure-sysext.service. Dec 12 18:39:17.275342 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:39:17.276768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:39:17.292557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:39:17.294537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:39:17.297990 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:39:17.304344 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:39:17.306133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:39:17.309614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:39:17.310870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:39:17.311737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:39:17.315833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:39:17.325958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:39:17.334182 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:39:17.363994 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:39:17.379239 augenrules[1904]: No rules Dec 12 18:39:17.380775 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 12 18:39:17.381067 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:39:17.412218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:39:17.434987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:39:17.458957 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:39:17.574786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 18:39:17.583649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:39:17.587427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:39:17.643557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:39:17.748663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:39:17.751133 systemd-networkd[1856]: lo: Link UP Dec 12 18:39:17.751144 systemd-networkd[1856]: lo: Gained carrier Dec 12 18:39:17.755531 systemd-networkd[1856]: Enumeration completed Dec 12 18:39:17.755689 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:39:17.756832 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:39:17.756846 systemd-networkd[1856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:39:17.760714 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Dec 12 18:39:17.761545 systemd-networkd[1856]: eth0: Link UP Dec 12 18:39:17.761821 systemd-networkd[1856]: eth0: Gained carrier Dec 12 18:39:17.761853 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:39:17.763149 systemd-resolved[1857]: Positive Trust Anchors: Dec 12 18:39:17.763534 systemd-resolved[1857]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:39:17.763647 systemd-resolved[1857]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:39:17.764710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:39:17.771550 systemd-networkd[1856]: eth0: DHCPv4 address 172.31.29.16/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 18:39:17.774816 systemd-resolved[1857]: Defaulting to hostname 'linux'. Dec 12 18:39:17.777728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:39:17.778263 systemd[1]: Reached target network.target - Network. Dec 12 18:39:17.778880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:39:17.779747 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:39:17.780379 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 12 18:39:17.780899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:39:17.781403 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:39:17.782070 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:39:17.782717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:39:17.783231 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:39:17.783729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:39:17.783770 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:39:17.784245 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:39:17.787610 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:39:17.789731 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:39:17.794697 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:39:17.795630 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:39:17.796206 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:39:17.799063 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:39:17.800434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:39:17.802038 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:39:17.802715 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:39:17.804984 systemd[1]: Reached target sockets.target - Socket Units. 
Dec 12 18:39:17.805429 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:39:17.805920 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:39:17.805958 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:39:17.807112 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:39:17.811636 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:39:17.813811 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:39:17.817832 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:39:17.821910 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:39:17.825783 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:39:17.827561 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:39:17.834485 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:39:17.837752 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:39:17.842250 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:39:17.848370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:39:17.854097 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 18:39:17.876034 jq[1946]: false Dec 12 18:39:17.876646 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:39:17.888770 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:39:17.899733 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 12 18:39:17.902758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:39:17.903556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:39:17.917757 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:39:17.926266 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:39:17.936484 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Refreshing passwd entry cache Dec 12 18:39:17.935022 oslogin_cache_refresh[1948]: Refreshing passwd entry cache Dec 12 18:39:17.940777 extend-filesystems[1947]: Found /dev/nvme0n1p6 Dec 12 18:39:17.943977 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:39:17.945959 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:39:17.947720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:39:17.971223 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Failure getting users, quitting Dec 12 18:39:17.971223 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:39:17.971223 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Refreshing group entry cache Dec 12 18:39:17.971223 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Failure getting groups, quitting Dec 12 18:39:17.971223 google_oslogin_nss_cache[1948]: oslogin_cache_refresh[1948]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 12 18:39:17.968609 oslogin_cache_refresh[1948]: Failure getting users, quitting Dec 12 18:39:17.968632 oslogin_cache_refresh[1948]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:39:17.968691 oslogin_cache_refresh[1948]: Refreshing group entry cache Dec 12 18:39:17.978417 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:39:17.971081 oslogin_cache_refresh[1948]: Failure getting groups, quitting Dec 12 18:39:17.979835 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:39:17.971099 oslogin_cache_refresh[1948]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:39:18.004670 extend-filesystems[1947]: Found /dev/nvme0n1p9 Dec 12 18:39:18.004670 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9 Dec 12 18:39:18.005664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:39:18.006540 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:39:18.013270 jq[1963]: true Dec 12 18:39:18.036774 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:18.039634 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:18.039959 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: ---------------------------------------------------- Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: corporation. 
Support and training for ntp-4 are Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: available at https://www.nwtime.org/support Dec 12 18:39:18.043129 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: ---------------------------------------------------- Dec 12 18:39:18.040189 ntpd[1950]: ---------------------------------------------------- Dec 12 18:39:18.040199 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:18.040208 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:18.040218 ntpd[1950]: corporation. Support and training for ntp-4 are Dec 12 18:39:18.040227 ntpd[1950]: available at https://www.nwtime.org/support Dec 12 18:39:18.040236 ntpd[1950]: ---------------------------------------------------- Dec 12 18:39:18.049720 ntpd[1950]: proto: precision = 0.061 usec (-24) Dec 12 18:39:18.056494 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: proto: precision = 0.061 usec (-24) Dec 12 18:39:18.068399 kernel: ntpd[1950]: segfault at 24 ip 0000562012228aeb sp 00007ffcf4374060 error 4 in ntpd[68aeb,5620121c6000+80000] likely on CPU 0 (core 0, socket 0) Dec 12 18:39:18.068502 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: basedate set to 2025-11-30 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 
18:39:18 ntpd[1950]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: bind(21) AF_INET6 [fe80::436:c4ff:fe86:dabd%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:39:18.068530 ntpd[1950]: 12 Dec 18:39:18 ntpd[1950]: unable to create socket on eth0 (5) for [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:18.059137 ntpd[1950]: basedate set to 2025-11-30 Dec 12 18:39:18.059167 ntpd[1950]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:18.059333 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:39:18.059366 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:18.059695 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:18.059727 ntpd[1950]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:18.059758 ntpd[1950]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:18.059789 ntpd[1950]: bind(21) AF_INET6 [fe80::436:c4ff:fe86:dabd%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:39:18.059812 ntpd[1950]: unable to create socket on eth0 (5) for [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:18.081982 tar[1969]: linux-amd64/LICENSE Dec 12 18:39:18.081982 tar[1969]: linux-amd64/helm Dec 12 18:39:18.093624 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:39:18.118510 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9 Dec 12 18:39:18.109240 dbus-daemon[1944]: [system] SELinux support is enabled Dec 12 18:39:18.108919 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:39:18.109222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:39:18.123111 extend-filesystems[1998]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:39:18.110077 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 12 18:39:18.115418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:39:18.115450 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:39:18.116616 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:39:18.116641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:39:18.136351 jq[1985]: true Dec 12 18:39:18.142865 systemd-coredump[2001]: Process 1950 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 12 18:39:18.149248 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 12 18:39:18.153680 update_engine[1959]: I20251212 18:39:18.153545 1959 main.cc:92] Flatcar Update Engine starting Dec 12 18:39:18.160500 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 12 18:39:18.161699 systemd[1]: Started systemd-coredump@0-2001-0.service - Process Core Dump (PID 2001/UID 0). 
Dec 12 18:39:18.163867 coreos-metadata[1943]: Dec 12 18:39:18.163 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 18:39:18.165017 coreos-metadata[1943]: Dec 12 18:39:18.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 12 18:39:18.165694 coreos-metadata[1943]: Dec 12 18:39:18.165 INFO Fetch successful Dec 12 18:39:18.165694 coreos-metadata[1943]: Dec 12 18:39:18.165 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 12 18:39:18.166730 coreos-metadata[1943]: Dec 12 18:39:18.166 INFO Fetch successful Dec 12 18:39:18.166730 coreos-metadata[1943]: Dec 12 18:39:18.166 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 12 18:39:18.167433 coreos-metadata[1943]: Dec 12 18:39:18.167 INFO Fetch successful Dec 12 18:39:18.167433 coreos-metadata[1943]: Dec 12 18:39:18.167 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 12 18:39:18.168667 coreos-metadata[1943]: Dec 12 18:39:18.168 INFO Fetch successful Dec 12 18:39:18.168667 coreos-metadata[1943]: Dec 12 18:39:18.168 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 12 18:39:18.169337 coreos-metadata[1943]: Dec 12 18:39:18.169 INFO Fetch failed with 404: resource not found Dec 12 18:39:18.169337 coreos-metadata[1943]: Dec 12 18:39:18.169 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 12 18:39:18.169881 coreos-metadata[1943]: Dec 12 18:39:18.169 INFO Fetch successful Dec 12 18:39:18.170052 coreos-metadata[1943]: Dec 12 18:39:18.169 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 12 18:39:18.172171 coreos-metadata[1943]: Dec 12 18:39:18.171 INFO Fetch successful Dec 12 18:39:18.172171 coreos-metadata[1943]: Dec 12 18:39:18.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 12 
18:39:18.172832 coreos-metadata[1943]: Dec 12 18:39:18.172 INFO Fetch successful Dec 12 18:39:18.172832 coreos-metadata[1943]: Dec 12 18:39:18.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 12 18:39:18.174553 coreos-metadata[1943]: Dec 12 18:39:18.173 INFO Fetch successful Dec 12 18:39:18.174553 coreos-metadata[1943]: Dec 12 18:39:18.173 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 12 18:39:18.176945 coreos-metadata[1943]: Dec 12 18:39:18.176 INFO Fetch successful Dec 12 18:39:18.177190 dbus-daemon[1944]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1856 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:39:18.183320 update_engine[1959]: I20251212 18:39:18.183254 1959 update_check_scheduler.cc:74] Next update check in 11m50s Dec 12 18:39:18.196985 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 18:39:18.202637 systemd-logind[1957]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:39:18.202668 systemd-logind[1957]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 12 18:39:18.203798 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:39:18.204848 systemd-logind[1957]: New seat seat0. Dec 12 18:39:18.212692 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 12 18:39:18.213655 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:39:18.214603 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:39:18.238425 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:39:18.334173 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 12 18:39:18.343749 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 12 18:39:18.345993 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:39:18.355493 extend-filesystems[1998]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 18:39:18.355493 extend-filesystems[1998]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 18:39:18.355493 extend-filesystems[1998]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 12 18:39:18.376736 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9 Dec 12 18:39:18.356568 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:39:18.357066 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:39:18.455307 bash[2032]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:39:18.438031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:39:18.443025 systemd[1]: Starting sshkeys.service... Dec 12 18:39:18.491346 systemd-coredump[2005]: Process 1950 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1950: #0 0x0000562012228aeb n/a (ntpd + 0x68aeb) #1 0x00005620121d1cdf n/a (ntpd + 0x11cdf) #2 0x00005620121d2575 n/a (ntpd + 0x12575) #3 0x00005620121cdd8a n/a (ntpd + 0xdd8a) #4 0x00005620121cf5d3 n/a (ntpd + 0xf5d3) #5 0x00005620121d7fd1 n/a (ntpd + 0x17fd1) #6 0x00005620121c8c2d n/a (ntpd + 0x8c2d) #7 0x00007f279403616c n/a (libc.so.6 + 0x2716c) #8 0x00007f2794036229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005620121c8c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 12 18:39:18.493971 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:39:18.499888 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:39:18.501446 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 18:39:18.502607 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 18:39:18.510375 systemd[1]: systemd-coredump@0-2001-0.service: Deactivated successfully. Dec 12 18:39:18.534711 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:39:18.556322 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:39:18.563611 dbus-daemon[1944]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2006 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:39:18.577055 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:39:18.606782 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 12 18:39:18.613897 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 12 18:39:18.786497 kernel: ntpd[2097]: segfault at 24 ip 0000562ee9a3caeb sp 00007ffdeca5a700 error 4 in ntpd[68aeb,562ee99da000+80000] likely on CPU 0 (core 0, socket 0) Dec 12 18:39:18.786599 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 12 18:39:18.777774 ntpd[2097]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: ---------------------------------------------------- Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: corporation. 
Support and training for ntp-4 are Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: available at https://www.nwtime.org/support Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: ---------------------------------------------------- Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: proto: precision = 0.092 usec (-23) Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: basedate set to 2025-11-30 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: bind(21) AF_INET6 [fe80::436:c4ff:fe86:dabd%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:39:18.786961 ntpd[2097]: 12 Dec 18:39:18 ntpd[2097]: unable to create socket on eth0 (5) for [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:18.777851 ntpd[2097]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:18.777862 ntpd[2097]: ---------------------------------------------------- Dec 12 18:39:18.777871 ntpd[2097]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:18.777880 ntpd[2097]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:18.777890 ntpd[2097]: corporation. 
Support and training for ntp-4 are Dec 12 18:39:18.777899 ntpd[2097]: available at https://www.nwtime.org/support Dec 12 18:39:18.777908 ntpd[2097]: ---------------------------------------------------- Dec 12 18:39:18.778644 ntpd[2097]: proto: precision = 0.092 usec (-23) Dec 12 18:39:18.778892 ntpd[2097]: basedate set to 2025-11-30 Dec 12 18:39:18.778903 ntpd[2097]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:18.778996 ntpd[2097]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:39:18.779025 ntpd[2097]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:18.779211 ntpd[2097]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:18.779236 ntpd[2097]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:18.779262 ntpd[2097]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:18.779289 ntpd[2097]: bind(21) AF_INET6 [fe80::436:c4ff:fe86:dabd%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:39:18.779309 ntpd[2097]: unable to create socket on eth0 (5) for [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:18.861061 coreos-metadata[2061]: Dec 12 18:39:18.860 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 18:39:18.862771 coreos-metadata[2061]: Dec 12 18:39:18.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 12 18:39:18.864274 coreos-metadata[2061]: Dec 12 18:39:18.864 INFO Fetch successful Dec 12 18:39:18.864274 coreos-metadata[2061]: Dec 12 18:39:18.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 18:39:18.865998 coreos-metadata[2061]: Dec 12 18:39:18.865 INFO Fetch successful Dec 12 18:39:18.873766 systemd-coredump[2137]: Process 2097 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
Dec 12 18:39:18.878750 polkitd[2092]: Started polkitd version 126 Dec 12 18:39:18.876616 unknown[2061]: wrote ssh authorized keys file for user: core Dec 12 18:39:18.940261 systemd[1]: Started systemd-coredump@1-2137-0.service - Process Core Dump (PID 2137/UID 0). Dec 12 18:39:18.947273 polkitd[2092]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:39:18.964917 polkitd[2092]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:39:18.965004 polkitd[2092]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:39:18.965442 polkitd[2092]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:39:18.965511 polkitd[2092]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:39:18.965568 polkitd[2092]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:39:18.978295 polkitd[2092]: Finished loading, compiling and executing 2 rules Dec 12 18:39:18.987058 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:39:18.988636 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:39:18.995184 polkitd[2092]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:39:19.011248 update-ssh-keys[2148]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:39:19.013526 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:39:19.022164 systemd[1]: Finished sshkeys.service. 
Dec 12 18:39:19.042342 locksmithd[2011]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:39:19.139490 sshd_keygen[1993]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:39:19.159106 systemd-hostnamed[2006]: Hostname set to (transient) Dec 12 18:39:19.159953 systemd-resolved[1857]: System hostname changed to 'ip-172-31-29-16'. Dec 12 18:39:19.193796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:39:19.204377 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:39:19.242803 containerd[1990]: time="2025-12-12T18:39:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:39:19.245491 containerd[1990]: time="2025-12-12T18:39:19.245243435Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:39:19.248180 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:39:19.248547 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:39:19.256669 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:39:19.257549 systemd-networkd[1856]: eth0: Gained IPv6LL Dec 12 18:39:19.265004 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:39:19.269787 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:39:19.277945 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 12 18:39:19.280563 systemd-coredump[2152]: Process 2097 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. 
Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2097: #0 0x0000562ee9a3caeb n/a (ntpd + 0x68aeb) #1 0x0000562ee99e5cdf n/a (ntpd + 0x11cdf) #2 0x0000562ee99e6575 n/a (ntpd + 0x12575) #3 0x0000562ee99e1d8a n/a (ntpd + 0xdd8a) #4 0x0000562ee99e35d3 n/a (ntpd + 0xf5d3) #5 0x0000562ee99ebfd1 n/a (ntpd + 0x17fd1) #6 0x0000562ee99dcc2d n/a (ntpd + 0x8c2d) #7 0x00007fcfc84c516c n/a (libc.so.6 + 0x2716c) #8 0x00007fcfc84c5229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000562ee99dcc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 12 18:39:19.283259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:19.287263 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:39:19.290017 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 18:39:19.290211 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 18:39:19.297149 systemd[1]: systemd-coredump@1-2137-0.service: Deactivated successfully. 
Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323621841Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="123.717µs" Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323660227Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323687173Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323868526Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323888611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323923556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.323996675Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324021 containerd[1990]: time="2025-12-12T18:39:19.324012556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324530 containerd[1990]: time="2025-12-12T18:39:19.324302760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324530 containerd[1990]: time="2025-12-12T18:39:19.324324841Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324530 containerd[1990]: time="2025-12-12T18:39:19.324340959Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324530 containerd[1990]: time="2025-12-12T18:39:19.324352879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:39:19.324530 containerd[1990]: time="2025-12-12T18:39:19.324439723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:39:19.328374 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:39:19.332787 containerd[1990]: time="2025-12-12T18:39:19.332743473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:39:19.332908 containerd[1990]: time="2025-12-12T18:39:19.332831361Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:39:19.332908 containerd[1990]: time="2025-12-12T18:39:19.332850862Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:39:19.337219 containerd[1990]: time="2025-12-12T18:39:19.337163572Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:39:19.337574 containerd[1990]: time="2025-12-12T18:39:19.337548059Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:39:19.337833 containerd[1990]: time="2025-12-12T18:39:19.337670765Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:39:19.340838 systemd[1]: Started getty@tty1.service 
- Getty on tty1. Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.342955385Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343034252Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343073371Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343090673Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343108587Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343123108Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343140082Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343158248Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343176752Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343192335Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343204528Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager 
type=io.containerd.shim.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343221627Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343370953Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:39:19.343913 containerd[1990]: time="2025-12-12T18:39:19.343396037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:39:19.348201 containerd[1990]: time="2025-12-12T18:39:19.343415239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:39:19.348201 containerd[1990]: time="2025-12-12T18:39:19.343433116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:39:19.348201 containerd[1990]: time="2025-12-12T18:39:19.343448782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:39:19.344958 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:39:19.346687 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.348857140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.348913232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.348934773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.348956500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.348982372Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.349001359Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.349063106Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.349083318Z" level=info msg="Start snapshots syncer" Dec 12 18:39:19.349483 containerd[1990]: time="2025-12-12T18:39:19.349127597Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:39:19.351634 containerd[1990]: time="2025-12-12T18:39:19.350746063Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:39:19.354859 containerd[1990]: time="2025-12-12T18:39:19.351892682Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.351996705Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357304836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357352901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357370276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357387721Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357414711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357429688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357444316Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357506232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357522866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357539952Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.357592642Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.358191069Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:39:19.358507 containerd[1990]: time="2025-12-12T18:39:19.358214572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358233594Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358247298Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358269207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358300087Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358321129Z" level=info msg="runtime interface created" Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358329412Z" level=info msg="created NRI interface" Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358340647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358360191Z" level=info msg="Connect containerd service" Dec 12 18:39:19.359079 containerd[1990]: time="2025-12-12T18:39:19.358396353Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:39:19.362620 
containerd[1990]: time="2025-12-12T18:39:19.359546370Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:39:19.375771 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:39:19.549553 amazon-ssm-agent[2183]: Initializing new seelog logger Dec 12 18:39:19.549553 amazon-ssm-agent[2183]: New Seelog Logger Creation Complete Dec 12 18:39:19.549553 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.549553 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.550716 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 processing appconfig overrides Dec 12 18:39:19.551255 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 processing appconfig overrides Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 processing appconfig overrides Dec 12 18:39:19.552484 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5511 INFO Proxy environment variables: Dec 12 18:39:19.577525 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:39:19.577525 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 12 18:39:19.577525 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 processing appconfig overrides Dec 12 18:39:19.633307 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Dec 12 18:39:19.640250 tar[1969]: linux-amd64/README.md Dec 12 18:39:19.640826 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:39:19.654482 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5512 INFO https_proxy: Dec 12 18:39:19.682757 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:39:19.708685 ntpd[2224]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: ---------------------------------------------------- Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: corporation. Support and training for ntp-4 are Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: available at https://www.nwtime.org/support Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: ---------------------------------------------------- Dec 12 18:39:19.717495 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: proto: precision = 0.088 usec (-23) Dec 12 18:39:19.717446 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 12 18:39:19.714592 ntpd[2224]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:39:19.714603 ntpd[2224]: ---------------------------------------------------- Dec 12 18:39:19.714613 ntpd[2224]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:39:19.714622 ntpd[2224]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:39:19.714631 ntpd[2224]: corporation. Support and training for ntp-4 are Dec 12 18:39:19.714640 ntpd[2224]: available at https://www.nwtime.org/support Dec 12 18:39:19.714648 ntpd[2224]: ---------------------------------------------------- Dec 12 18:39:19.715389 ntpd[2224]: proto: precision = 0.088 usec (-23) Dec 12 18:39:19.722480 systemd[1]: Started sshd@0-172.31.29.16:22-139.178.89.65:33534.service - OpenSSH per-connection server daemon (139.178.89.65:33534). Dec 12 18:39:19.724919 ntpd[2224]: basedate set to 2025-11-30 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: basedate set to 2025-11-30 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listen normally on 5 eth0 [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:19.726007 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: Listening on routing socket on fd #22 for interface updates Dec 12 18:39:19.724943 ntpd[2224]: gps base set to 2025-11-30 (week 2395) Dec 12 18:39:19.725046 ntpd[2224]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 
18:39:19.725073 ntpd[2224]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:39:19.725254 ntpd[2224]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:39:19.725279 ntpd[2224]: Listen normally on 3 eth0 172.31.29.16:123 Dec 12 18:39:19.725305 ntpd[2224]: Listen normally on 4 lo [::1]:123 Dec 12 18:39:19.725330 ntpd[2224]: Listen normally on 5 eth0 [fe80::436:c4ff:fe86:dabd%2]:123 Dec 12 18:39:19.725364 ntpd[2224]: Listening on routing socket on fd #22 for interface updates Dec 12 18:39:19.731560 ntpd[2224]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:39:19.732273 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:39:19.732273 ntpd[2224]: 12 Dec 18:39:19 ntpd[2224]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:39:19.731594 ntpd[2224]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739072774Z" level=info msg="Start subscribing containerd event" Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739150112Z" level=info msg="Start recovering state" Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739715059Z" level=info msg="Start event monitor" Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739736738Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739748286Z" level=info msg="Start streaming server" Dec 12 18:39:19.739832 containerd[1990]: time="2025-12-12T18:39:19.739816161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:39:19.740136 containerd[1990]: time="2025-12-12T18:39:19.739870509Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:39:19.740136 containerd[1990]: time="2025-12-12T18:39:19.739882686Z" level=info msg="runtime interface starting up..." 
Dec 12 18:39:19.740136 containerd[1990]: time="2025-12-12T18:39:19.739890465Z" level=info msg="starting plugins..."
Dec 12 18:39:19.740136 containerd[1990]: time="2025-12-12T18:39:19.739913416Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:39:19.740136 containerd[1990]: time="2025-12-12T18:39:19.739980940Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:39:19.741145 containerd[1990]: time="2025-12-12T18:39:19.740335723Z" level=info msg="containerd successfully booted in 0.498313s"
Dec 12 18:39:19.740332 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:39:19.751705 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5512 INFO http_proxy:
Dec 12 18:39:19.841135 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 12 18:39:19.841135 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 12 18:39:19.841135 amazon-ssm-agent[2183]: 2025/12/12 18:39:19 processing appconfig overrides
Dec 12 18:39:19.849378 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5512 INFO no_proxy:
Dec 12 18:39:19.882230 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5514 INFO Checking if agent identity type OnPrem can be assumed
Dec 12 18:39:19.882230 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.5516 INFO Checking if agent identity type EC2 can be assumed
Dec 12 18:39:19.882230 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6294 INFO Agent will take identity from EC2
Dec 12 18:39:19.882230 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6313 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 12 18:39:19.882230 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6314 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6314 INFO [amazon-ssm-agent] Starting Core Agent
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6314 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6314 INFO [Registrar] Starting registrar module
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6442 INFO [EC2Identity] Checking disk for registration info
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6443 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.6443 INFO [EC2Identity] Generating registration keypair
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8013 INFO [EC2Identity] Checking write access before registering
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8017 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8400 INFO [EC2Identity] EC2 registration was successful.
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8401 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8402 INFO [CredentialRefresher] credentialRefresher has started
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8402 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8819 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 12 18:39:19.882451 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8821 INFO [CredentialRefresher] Credentials ready
Dec 12 18:39:19.947189 amazon-ssm-agent[2183]: 2025-12-12 18:39:19.8823 INFO [CredentialRefresher] Next credential rotation will be in 29.999994152016665 minutes
Dec 12 18:39:19.971935 sshd[2231]: Accepted publickey for core from 139.178.89.65 port 33534 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:19.974924 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:19.989647 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 18:39:19.992321 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 18:39:19.996707 systemd-logind[1957]: New session 1 of user core.
Dec 12 18:39:20.019077 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 18:39:20.022807 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 18:39:20.041313 (systemd)[2238]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 18:39:20.044611 systemd-logind[1957]: New session c1 of user core.
Dec 12 18:39:20.235055 systemd[2238]: Queued start job for default target default.target.
Dec 12 18:39:20.243099 systemd[2238]: Created slice app.slice - User Application Slice.
Dec 12 18:39:20.243150 systemd[2238]: Reached target paths.target - Paths.
Dec 12 18:39:20.243804 systemd[2238]: Reached target timers.target - Timers.
Dec 12 18:39:20.246227 systemd[2238]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:39:20.270361 systemd[2238]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:39:20.270537 systemd[2238]: Reached target sockets.target - Sockets.
Dec 12 18:39:20.270602 systemd[2238]: Reached target basic.target - Basic System.
Dec 12 18:39:20.270654 systemd[2238]: Reached target default.target - Main User Target.
Dec 12 18:39:20.270688 systemd[2238]: Startup finished in 217ms.
Dec 12 18:39:20.271435 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:39:20.281038 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:39:20.431443 systemd[1]: Started sshd@1-172.31.29.16:22-139.178.89.65:33540.service - OpenSSH per-connection server daemon (139.178.89.65:33540).
Dec 12 18:39:20.614189 sshd[2249]: Accepted publickey for core from 139.178.89.65 port 33540 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:20.615861 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:20.622897 systemd-logind[1957]: New session 2 of user core.
Dec 12 18:39:20.631155 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:39:20.755235 sshd[2252]: Connection closed by 139.178.89.65 port 33540
Dec 12 18:39:20.755974 sshd-session[2249]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:20.762575 systemd[1]: sshd@1-172.31.29.16:22-139.178.89.65:33540.service: Deactivated successfully.
Dec 12 18:39:20.765906 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:39:20.768435 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:39:20.769917 systemd-logind[1957]: Removed session 2.
Dec 12 18:39:20.789106 systemd[1]: Started sshd@2-172.31.29.16:22-139.178.89.65:33544.service - OpenSSH per-connection server daemon (139.178.89.65:33544).
Dec 12 18:39:20.897837 amazon-ssm-agent[2183]: 2025-12-12 18:39:20.8970 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 12 18:39:20.962800 sshd[2258]: Accepted publickey for core from 139.178.89.65 port 33544 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:20.966090 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:20.977019 systemd-logind[1957]: New session 3 of user core.
Dec 12 18:39:20.983730 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:39:20.999780 amazon-ssm-agent[2183]: 2025-12-12 18:39:20.9005 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2263) started
Dec 12 18:39:21.103415 amazon-ssm-agent[2183]: 2025-12-12 18:39:20.9006 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 12 18:39:21.114881 sshd[2268]: Connection closed by 139.178.89.65 port 33544
Dec 12 18:39:21.115791 sshd-session[2258]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:21.122249 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:39:21.123186 systemd[1]: sshd@2-172.31.29.16:22-139.178.89.65:33544.service: Deactivated successfully.
Dec 12 18:39:21.126501 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 18:39:21.128514 systemd-logind[1957]: Removed session 3.
Dec 12 18:39:21.379000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:21.380178 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:39:21.381471 systemd[1]: Startup finished in 4.213s (kernel) + 7.873s (initrd) + 7.927s (userspace) = 20.014s.
Dec 12 18:39:21.394446 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:39:22.388115 kubelet[2285]: E1212 18:39:22.388056 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:39:22.396407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:39:22.396624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:39:22.397388 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 256.6M memory peak.
Dec 12 18:39:27.286375 systemd-resolved[1857]: Clock change detected. Flushing caches.
Dec 12 18:39:31.720859 systemd[1]: Started sshd@3-172.31.29.16:22-139.178.89.65:60914.service - OpenSSH per-connection server daemon (139.178.89.65:60914).
Dec 12 18:39:31.896463 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 60914 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:31.897866 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:31.904375 systemd-logind[1957]: New session 4 of user core.
Dec 12 18:39:31.909841 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:39:32.033944 sshd[2300]: Connection closed by 139.178.89.65 port 60914
Dec 12 18:39:32.034919 sshd-session[2297]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:32.040059 systemd[1]: sshd@3-172.31.29.16:22-139.178.89.65:60914.service: Deactivated successfully.
Dec 12 18:39:32.041959 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:39:32.042937 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:39:32.044920 systemd-logind[1957]: Removed session 4.
Dec 12 18:39:32.068464 systemd[1]: Started sshd@4-172.31.29.16:22-139.178.89.65:60926.service - OpenSSH per-connection server daemon (139.178.89.65:60926).
Dec 12 18:39:32.258004 sshd[2306]: Accepted publickey for core from 139.178.89.65 port 60926 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:32.259431 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:32.273621 systemd-logind[1957]: New session 5 of user core.
Dec 12 18:39:32.285833 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:39:32.405613 sshd[2309]: Connection closed by 139.178.89.65 port 60926
Dec 12 18:39:32.406332 sshd-session[2306]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:32.410904 systemd[1]: sshd@4-172.31.29.16:22-139.178.89.65:60926.service: Deactivated successfully.
Dec 12 18:39:32.413222 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:39:32.415598 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:39:32.416876 systemd-logind[1957]: Removed session 5.
Dec 12 18:39:32.443131 systemd[1]: Started sshd@5-172.31.29.16:22-139.178.89.65:60934.service - OpenSSH per-connection server daemon (139.178.89.65:60934).
Dec 12 18:39:32.621023 sshd[2315]: Accepted publickey for core from 139.178.89.65 port 60934 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:32.622530 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:32.628392 systemd-logind[1957]: New session 6 of user core.
Dec 12 18:39:32.635949 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:39:32.757114 sshd[2318]: Connection closed by 139.178.89.65 port 60934
Dec 12 18:39:32.757885 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:32.763842 systemd[1]: sshd@5-172.31.29.16:22-139.178.89.65:60934.service: Deactivated successfully.
Dec 12 18:39:32.766112 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:39:32.767177 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:39:32.768975 systemd-logind[1957]: Removed session 6.
Dec 12 18:39:32.795390 systemd[1]: Started sshd@6-172.31.29.16:22-139.178.89.65:60944.service - OpenSSH per-connection server daemon (139.178.89.65:60944).
Dec 12 18:39:32.987140 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 60944 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:32.989311 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:32.990828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:39:32.994812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:32.999661 systemd-logind[1957]: New session 7 of user core.
Dec 12 18:39:33.002878 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:39:33.181238 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:39:33.181642 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:39:33.195214 sudo[2331]: pam_unix(sudo:session): session closed for user root
Dec 12 18:39:33.219172 sshd[2330]: Connection closed by 139.178.89.65 port 60944
Dec 12 18:39:33.220271 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:33.226957 systemd[1]: sshd@6-172.31.29.16:22-139.178.89.65:60944.service: Deactivated successfully.
Dec 12 18:39:33.229299 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:39:33.230307 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:39:33.232326 systemd-logind[1957]: Removed session 7.
Dec 12 18:39:33.252379 systemd[1]: Started sshd@7-172.31.29.16:22-139.178.89.65:60948.service - OpenSSH per-connection server daemon (139.178.89.65:60948).
Dec 12 18:39:33.429015 sshd[2337]: Accepted publickey for core from 139.178.89.65 port 60948 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:33.430429 sshd-session[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:33.438399 systemd-logind[1957]: New session 8 of user core.
Dec 12 18:39:33.444836 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 18:39:33.544488 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:39:33.544894 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:39:33.588659 sudo[2342]: pam_unix(sudo:session): session closed for user root
Dec 12 18:39:33.596276 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:39:33.596676 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:39:33.609539 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:39:33.657722 augenrules[2364]: No rules
Dec 12 18:39:33.659345 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:39:33.659653 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:39:33.661744 sudo[2341]: pam_unix(sudo:session): session closed for user root
Dec 12 18:39:33.685133 sshd[2340]: Connection closed by 139.178.89.65 port 60948
Dec 12 18:39:33.685954 sshd-session[2337]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:33.693272 systemd[1]: sshd@7-172.31.29.16:22-139.178.89.65:60948.service: Deactivated successfully.
Dec 12 18:39:33.697498 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 18:39:33.700122 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit.
Dec 12 18:39:33.701852 systemd-logind[1957]: Removed session 8.
Dec 12 18:39:33.706026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:33.715103 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:39:33.722144 systemd[1]: Started sshd@8-172.31.29.16:22-139.178.89.65:60954.service - OpenSSH per-connection server daemon (139.178.89.65:60954).
Dec 12 18:39:33.769542 kubelet[2378]: E1212 18:39:33.769494 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:39:33.773756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:39:33.773969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:39:33.774612 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.5M memory peak.
Dec 12 18:39:33.898219 sshd[2384]: Accepted publickey for core from 139.178.89.65 port 60954 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:39:33.899647 sshd-session[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:33.905590 systemd-logind[1957]: New session 9 of user core.
Dec 12 18:39:33.913780 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 18:39:34.013868 sudo[2390]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:39:34.014250 sudo[2390]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:39:34.881685 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:39:34.897105 (dockerd)[2409]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:39:35.540100 dockerd[2409]: time="2025-12-12T18:39:35.540037100Z" level=info msg="Starting up"
Dec 12 18:39:35.544629 dockerd[2409]: time="2025-12-12T18:39:35.544174315Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:39:35.557160 dockerd[2409]: time="2025-12-12T18:39:35.557111439Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:39:35.628847 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2445849840-merged.mount: Deactivated successfully.
Dec 12 18:39:35.649066 systemd[1]: var-lib-docker-metacopy\x2dcheck334232351-merged.mount: Deactivated successfully.
Dec 12 18:39:35.670882 dockerd[2409]: time="2025-12-12T18:39:35.670690530Z" level=info msg="Loading containers: start."
Dec 12 18:39:35.696586 kernel: Initializing XFRM netlink socket
Dec 12 18:39:35.973341 (udev-worker)[2431]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:39:36.030699 systemd-networkd[1856]: docker0: Link UP
Dec 12 18:39:36.045445 dockerd[2409]: time="2025-12-12T18:39:36.045382678Z" level=info msg="Loading containers: done."
Dec 12 18:39:36.066578 dockerd[2409]: time="2025-12-12T18:39:36.066509631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:39:36.066833 dockerd[2409]: time="2025-12-12T18:39:36.066614994Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:39:36.066833 dockerd[2409]: time="2025-12-12T18:39:36.066714969Z" level=info msg="Initializing buildkit"
Dec 12 18:39:36.100814 dockerd[2409]: time="2025-12-12T18:39:36.100743605Z" level=info msg="Completed buildkit initialization"
Dec 12 18:39:36.109773 dockerd[2409]: time="2025-12-12T18:39:36.109725508Z" level=info msg="Daemon has completed initialization"
Dec 12 18:39:36.110699 dockerd[2409]: time="2025-12-12T18:39:36.109990709Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:39:36.110086 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:39:37.585771 containerd[1990]: time="2025-12-12T18:39:37.584279678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 12 18:39:38.222464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189051873.mount: Deactivated successfully.
Dec 12 18:39:39.542143 containerd[1990]: time="2025-12-12T18:39:39.540991123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:39.542590 containerd[1990]: time="2025-12-12T18:39:39.542215313Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Dec 12 18:39:39.543152 containerd[1990]: time="2025-12-12T18:39:39.543108753Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:39.550571 containerd[1990]: time="2025-12-12T18:39:39.550206674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:39.552458 containerd[1990]: time="2025-12-12T18:39:39.552401417Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.968073738s"
Dec 12 18:39:39.552458 containerd[1990]: time="2025-12-12T18:39:39.552457267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 12 18:39:39.554077 containerd[1990]: time="2025-12-12T18:39:39.554045640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 12 18:39:41.037246 containerd[1990]: time="2025-12-12T18:39:41.036665683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:41.043197 containerd[1990]: time="2025-12-12T18:39:41.043148380Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Dec 12 18:39:41.051222 containerd[1990]: time="2025-12-12T18:39:41.051138935Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:41.057778 containerd[1990]: time="2025-12-12T18:39:41.057692603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:41.058797 containerd[1990]: time="2025-12-12T18:39:41.058671309Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.504589467s"
Dec 12 18:39:41.058797 containerd[1990]: time="2025-12-12T18:39:41.058708500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 12 18:39:41.059688 containerd[1990]: time="2025-12-12T18:39:41.059658951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 12 18:39:42.205433 containerd[1990]: time="2025-12-12T18:39:42.205328360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:42.206293 containerd[1990]: time="2025-12-12T18:39:42.206260448Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927"
Dec 12 18:39:42.207580 containerd[1990]: time="2025-12-12T18:39:42.207260024Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:42.210048 containerd[1990]: time="2025-12-12T18:39:42.209991082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:42.211660 containerd[1990]: time="2025-12-12T18:39:42.210773515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.151081651s"
Dec 12 18:39:42.211660 containerd[1990]: time="2025-12-12T18:39:42.210821900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 12 18:39:42.212106 containerd[1990]: time="2025-12-12T18:39:42.212071255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 12 18:39:43.227638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088809054.mount: Deactivated successfully.
Dec 12 18:39:43.667521 containerd[1990]: time="2025-12-12T18:39:43.667438506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:43.668855 containerd[1990]: time="2025-12-12T18:39:43.668691937Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293"
Dec 12 18:39:43.669841 containerd[1990]: time="2025-12-12T18:39:43.669803551Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:43.672032 containerd[1990]: time="2025-12-12T18:39:43.671991774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:43.672645 containerd[1990]: time="2025-12-12T18:39:43.672617033Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.460503396s"
Dec 12 18:39:43.672927 containerd[1990]: time="2025-12-12T18:39:43.672904019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 12 18:39:43.673438 containerd[1990]: time="2025-12-12T18:39:43.673405662Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 12 18:39:43.799510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 18:39:43.802346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:44.087053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:44.098170 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:39:44.162663 kubelet[2703]: E1212 18:39:44.162574 2703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:39:44.165393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:39:44.166203 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:39:44.167004 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108.6M memory peak.
Dec 12 18:39:44.189007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484896292.mount: Deactivated successfully.
Dec 12 18:39:45.352438 containerd[1990]: time="2025-12-12T18:39:45.352319208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.355104 containerd[1990]: time="2025-12-12T18:39:45.355024207Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Dec 12 18:39:45.358272 containerd[1990]: time="2025-12-12T18:39:45.358219048Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.363584 containerd[1990]: time="2025-12-12T18:39:45.363380052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.364620 containerd[1990]: time="2025-12-12T18:39:45.364580652Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.691144089s"
Dec 12 18:39:45.364620 containerd[1990]: time="2025-12-12T18:39:45.364618110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 12 18:39:45.365206 containerd[1990]: time="2025-12-12T18:39:45.365179576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 12 18:39:45.805665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795323716.mount: Deactivated successfully.
Dec 12 18:39:45.813740 containerd[1990]: time="2025-12-12T18:39:45.813684146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.814766 containerd[1990]: time="2025-12-12T18:39:45.814522261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Dec 12 18:39:45.816417 containerd[1990]: time="2025-12-12T18:39:45.816384532Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.820572 containerd[1990]: time="2025-12-12T18:39:45.819678628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:45.821068 containerd[1990]: time="2025-12-12T18:39:45.821029425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 455.818579ms"
Dec 12 18:39:45.821199 containerd[1990]: time="2025-12-12T18:39:45.821178885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 12 18:39:45.821757 containerd[1990]: time="2025-12-12T18:39:45.821725909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 12 18:39:46.312005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742807012.mount: Deactivated successfully.
Dec 12 18:39:49.108117 containerd[1990]: time="2025-12-12T18:39:49.108040515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:49.109640 containerd[1990]: time="2025-12-12T18:39:49.109483247Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814"
Dec 12 18:39:49.112573 containerd[1990]: time="2025-12-12T18:39:49.110797654Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:49.114005 containerd[1990]: time="2025-12-12T18:39:49.113965597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:49.115251 containerd[1990]: time="2025-12-12T18:39:49.115208484Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.293437901s"
Dec 12 18:39:49.115409 containerd[1990]: time="2025-12-12T18:39:49.115389176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Dec 12 18:39:49.764495 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 12 18:39:53.211107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:53.211389 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108.6M memory peak.
Dec 12 18:39:53.214264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:53.250119 systemd[1]: Reload requested from client PID 2850 ('systemctl') (unit session-9.scope)...
Dec 12 18:39:53.250149 systemd[1]: Reloading...
Dec 12 18:39:53.413580 zram_generator::config[2900]: No configuration found.
Dec 12 18:39:53.688748 systemd[1]: Reloading finished in 437 ms.
Dec 12 18:39:53.754284 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:39:53.754456 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:39:53.754861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:53.754918 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.1M memory peak.
Dec 12 18:39:53.757706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:54.316143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:54.328963 (kubelet)[2958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:39:54.392023 kubelet[2958]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:39:54.392023 kubelet[2958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:39:54.392023 kubelet[2958]: I1212 18:39:54.391590 2958 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:39:55.567690 kubelet[2958]: I1212 18:39:55.567640 2958 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 18:39:55.567690 kubelet[2958]: I1212 18:39:55.567675 2958 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:39:55.570920 kubelet[2958]: I1212 18:39:55.570880 2958 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 18:39:55.570920 kubelet[2958]: I1212 18:39:55.570920 2958 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:39:55.571293 kubelet[2958]: I1212 18:39:55.571268 2958 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 18:39:55.603589 kubelet[2958]: I1212 18:39:55.603027 2958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:39:55.609143 kubelet[2958]: E1212 18:39:55.609085 2958 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 18:39:55.622215 kubelet[2958]: I1212 18:39:55.622187 2958 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:39:55.628170 kubelet[2958]: I1212 18:39:55.628142 2958 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 18:39:55.632418 kubelet[2958]: I1212 18:39:55.632349 2958 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:39:55.634621 kubelet[2958]: I1212 18:39:55.632411 2958 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:39:55.634621 kubelet[2958]: I1212 18:39:55.634624 2958 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:39:55.634862 kubelet[2958]: I1212 18:39:55.634643 2958 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 18:39:55.634862 kubelet[2958]: I1212 18:39:55.634803 2958 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 18:39:55.637416 kubelet[2958]: I1212 18:39:55.637384 2958 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:39:55.637678 kubelet[2958]: I1212 18:39:55.637658 2958 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 18:39:55.637678 kubelet[2958]: I1212 18:39:55.637680 2958 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:39:55.637817 kubelet[2958]: I1212 18:39:55.637709 2958 kubelet.go:387] "Adding apiserver pod source"
Dec 12 18:39:55.637817 kubelet[2958]: I1212 18:39:55.637740 2958 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:39:55.642489 kubelet[2958]: E1212 18:39:55.642198 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 18:39:55.642489 kubelet[2958]: E1212 18:39:55.642293 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-16&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 18:39:55.646321 kubelet[2958]: I1212 18:39:55.646290 2958 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:39:55.652725 kubelet[2958]: I1212 18:39:55.652587 2958 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 18:39:55.652947 kubelet[2958]: I1212 18:39:55.652785 2958 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 18:39:55.656518 kubelet[2958]: W1212 18:39:55.656459 2958 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:39:55.661871 kubelet[2958]: I1212 18:39:55.661709 2958 server.go:1262] "Started kubelet"
Dec 12 18:39:55.665050 kubelet[2958]: I1212 18:39:55.665026 2958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:39:55.688633 kubelet[2958]: E1212 18:39:55.669570 2958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.16:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-16.18808bd3c9594039 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-16,UID:ip-172-31-29-16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-16,},FirstTimestamp:2025-12-12 18:39:55.661664313 +0000 UTC m=+1.327734602,LastTimestamp:2025-12-12 18:39:55.661664313 +0000 UTC m=+1.327734602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-16,}"
Dec 12 18:39:55.688633 kubelet[2958]: I1212 18:39:55.671602 2958 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:39:55.688633 kubelet[2958]: I1212 18:39:55.687998 2958 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 18:39:55.691497 kubelet[2958]: E1212 18:39:55.691454 2958 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found"
Dec 12 18:39:55.692209 kubelet[2958]: I1212 18:39:55.691903 2958 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 18:39:55.692209 kubelet[2958]: I1212 18:39:55.691975 2958 reconciler.go:29] "Reconciler: start to sync state"
Dec 12 18:39:55.708718 kubelet[2958]: E1212 18:39:55.708442 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="200ms"
Dec 12 18:39:55.722235 kubelet[2958]: I1212 18:39:55.718428 2958 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:39:55.722235 kubelet[2958]: I1212 18:39:55.718530 2958 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 18:39:55.727274 kubelet[2958]: I1212 18:39:55.727236 2958 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 18:39:55.728413 kubelet[2958]: I1212 18:39:55.728375 2958 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:39:55.734122 kubelet[2958]: I1212 18:39:55.734085 2958 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:39:55.735116 kubelet[2958]: E1212 18:39:55.735078 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 18:39:55.738872 kubelet[2958]: I1212 18:39:55.738842 2958 factory.go:223] Registration of the systemd container factory successfully
Dec 12 18:39:55.739010 kubelet[2958]: I1212 18:39:55.738984 2958 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:39:55.742639 kubelet[2958]: I1212 18:39:55.742609 2958 factory.go:223] Registration of the containerd container factory successfully
Dec 12 18:39:55.754326 kubelet[2958]: E1212 18:39:55.754288 2958 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:39:55.760341 kubelet[2958]: I1212 18:39:55.760297 2958 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:39:55.765189 kubelet[2958]: I1212 18:39:55.765156 2958 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:39:55.765369 kubelet[2958]: I1212 18:39:55.765358 2958 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 12 18:39:55.765474 kubelet[2958]: I1212 18:39:55.765464 2958 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 12 18:39:55.765636 kubelet[2958]: E1212 18:39:55.765605 2958 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:39:55.771031 kubelet[2958]: E1212 18:39:55.770993 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 18:39:55.776092 kubelet[2958]: I1212 18:39:55.776062 2958 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:39:55.776092 kubelet[2958]: I1212 18:39:55.776087 2958 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:39:55.776272 kubelet[2958]: I1212 18:39:55.776108 2958 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:39:55.778404 kubelet[2958]: I1212 18:39:55.778373 2958 policy_none.go:49] "None policy: Start"
Dec 12 18:39:55.778404 kubelet[2958]: I1212 18:39:55.778405 2958 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 12 18:39:55.778592 kubelet[2958]: I1212 18:39:55.778421 2958 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 12 18:39:55.780594 kubelet[2958]: I1212 18:39:55.780530 2958 policy_none.go:47] "Start"
Dec 12 18:39:55.786162 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 18:39:55.792637 kubelet[2958]: E1212 18:39:55.792599 2958 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found"
Dec 12 18:39:55.801897 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 18:39:55.806848 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 18:39:55.819198 kubelet[2958]: E1212 18:39:55.819091 2958 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 18:39:55.821369 kubelet[2958]: I1212 18:39:55.821344 2958 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:39:55.821534 kubelet[2958]: I1212 18:39:55.821497 2958 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:39:55.822501 kubelet[2958]: I1212 18:39:55.822141 2958 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:39:55.827971 kubelet[2958]: E1212 18:39:55.827885 2958 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:39:55.828130 kubelet[2958]: E1212 18:39:55.827987 2958 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-16\" not found"
Dec 12 18:39:55.888749 systemd[1]: Created slice kubepods-burstable-pod1e5fb53e6a3403fc6af39bdc769a1fee.slice - libcontainer container kubepods-burstable-pod1e5fb53e6a3403fc6af39bdc769a1fee.slice.
Dec 12 18:39:55.893151 kubelet[2958]: I1212 18:39:55.893105 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:39:55.893274 kubelet[2958]: I1212 18:39:55.893210 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:39:55.893274 kubelet[2958]: I1212 18:39:55.893243 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/215748cac87e0862edbd20ec9758273a-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-16\" (UID: \"215748cac87e0862edbd20ec9758273a\") " pod="kube-system/kube-scheduler-ip-172-31-29-16"
Dec 12 18:39:55.893274 kubelet[2958]: I1212 18:39:55.893261 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:39:55.893362 kubelet[2958]: I1212 18:39:55.893292 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:39:55.893362 kubelet[2958]: I1212 18:39:55.893314 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-ca-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:39:55.893362 kubelet[2958]: I1212 18:39:55.893328 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:39:55.893362 kubelet[2958]: I1212 18:39:55.893344 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:39:55.893457 kubelet[2958]: I1212 18:39:55.893375 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:39:55.895594 kubelet[2958]: E1212 18:39:55.895543 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16"
Dec 12 18:39:55.900829 systemd[1]: Created slice kubepods-burstable-pod0815ce36621f1f634b8a5cfe6736bb10.slice - libcontainer container kubepods-burstable-pod0815ce36621f1f634b8a5cfe6736bb10.slice.
Dec 12 18:39:55.909803 kubelet[2958]: E1212 18:39:55.909719 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="400ms"
Dec 12 18:39:55.910701 kubelet[2958]: E1212 18:39:55.910670 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16"
Dec 12 18:39:55.914610 systemd[1]: Created slice kubepods-burstable-pod215748cac87e0862edbd20ec9758273a.slice - libcontainer container kubepods-burstable-pod215748cac87e0862edbd20ec9758273a.slice.
Dec 12 18:39:55.916809 kubelet[2958]: E1212 18:39:55.916778 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16"
Dec 12 18:39:55.923902 kubelet[2958]: I1212 18:39:55.923679 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16"
Dec 12 18:39:55.924314 kubelet[2958]: E1212 18:39:55.924279 2958 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16"
Dec 12 18:39:56.127109 kubelet[2958]: I1212 18:39:56.126495 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16"
Dec 12 18:39:56.127109 kubelet[2958]: E1212 18:39:56.127049 2958 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16"
Dec 12 18:39:56.199911 containerd[1990]: time="2025-12-12T18:39:56.199666926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-16,Uid:1e5fb53e6a3403fc6af39bdc769a1fee,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:56.214274 containerd[1990]: time="2025-12-12T18:39:56.214216422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-16,Uid:0815ce36621f1f634b8a5cfe6736bb10,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:56.220399 containerd[1990]: time="2025-12-12T18:39:56.220355412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-16,Uid:215748cac87e0862edbd20ec9758273a,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:56.310514 kubelet[2958]: E1212 18:39:56.310456 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="800ms"
Dec 12 18:39:56.529983 kubelet[2958]: I1212 18:39:56.529873 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16"
Dec 12 18:39:56.530500 kubelet[2958]: E1212 18:39:56.530464 2958 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16"
Dec 12 18:39:56.621335 kubelet[2958]: E1212 18:39:56.621197 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-16&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 18:39:56.744823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082367226.mount: Deactivated successfully.
Dec 12 18:39:56.769262 containerd[1990]: time="2025-12-12T18:39:56.769204555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:39:56.778614 kubelet[2958]: E1212 18:39:56.778572 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 18:39:56.778882 containerd[1990]: time="2025-12-12T18:39:56.778561730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 12 18:39:56.793406 containerd[1990]: time="2025-12-12T18:39:56.793227356Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:39:56.795599 containerd[1990]: time="2025-12-12T18:39:56.795507080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:39:56.796846 containerd[1990]: time="2025-12-12T18:39:56.796814309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 18:39:56.800319 containerd[1990]: time="2025-12-12T18:39:56.800276927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:39:56.801015 containerd[1990]: time="2025-12-12T18:39:56.800963172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.025681ms"
Dec 12 18:39:56.803083 containerd[1990]: time="2025-12-12T18:39:56.803031851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 18:39:56.803313 containerd[1990]: time="2025-12-12T18:39:56.803151358Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:39:56.809408 containerd[1990]: time="2025-12-12T18:39:56.809361226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.829325ms"
Dec 12 18:39:56.810468 containerd[1990]: time="2025-12-12T18:39:56.810420422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.564404ms"
Dec 12 18:39:56.951579 containerd[1990]: time="2025-12-12T18:39:56.949832596Z" level=info msg="connecting to shim 068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8" address="unix:///run/containerd/s/26dced8ea666f94dd987dbe9d04a94306d5c9e4da90e288d2bda259c378bdb35" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:56.951579 containerd[1990]: time="2025-12-12T18:39:56.950363108Z" level=info msg="connecting to shim cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5" address="unix:///run/containerd/s/d27da8adcce9186c7635810e9e49df4fc4a087b4d340e6a97126e3a75eeb6679" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:56.958812 containerd[1990]: time="2025-12-12T18:39:56.958759952Z" level=info msg="connecting to shim bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f" address="unix:///run/containerd/s/e5516a5355ea4034f362be41a02cb87caec5c9caac0b7e0cc242e697d8bdf75f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:57.083937 systemd[1]: Started cri-containerd-068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8.scope - libcontainer container 068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8.
Dec 12 18:39:57.086360 systemd[1]: Started cri-containerd-bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f.scope - libcontainer container bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f.
Dec 12 18:39:57.088262 systemd[1]: Started cri-containerd-cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5.scope - libcontainer container cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5.
Dec 12 18:39:57.112493 kubelet[2958]: E1212 18:39:57.112445 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="1.6s" Dec 12 18:39:57.170616 kubelet[2958]: E1212 18:39:57.170533 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:39:57.184995 kubelet[2958]: E1212 18:39:57.184954 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:39:57.205521 containerd[1990]: time="2025-12-12T18:39:57.205322107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-16,Uid:1e5fb53e6a3403fc6af39bdc769a1fee,Namespace:kube-system,Attempt:0,} returns sandbox id \"068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8\"" Dec 12 18:39:57.219474 containerd[1990]: time="2025-12-12T18:39:57.219364413Z" level=info msg="CreateContainer within sandbox \"068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:39:57.222621 containerd[1990]: time="2025-12-12T18:39:57.222537364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-16,Uid:0815ce36621f1f634b8a5cfe6736bb10,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f\"" Dec 12 18:39:57.227447 containerd[1990]: time="2025-12-12T18:39:57.227407211Z" level=info msg="CreateContainer within sandbox \"bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:39:57.259412 containerd[1990]: time="2025-12-12T18:39:57.259345698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-16,Uid:215748cac87e0862edbd20ec9758273a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5\"" Dec 12 18:39:57.265896 containerd[1990]: time="2025-12-12T18:39:57.265850677Z" level=info msg="CreateContainer within sandbox \"cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:39:57.276127 containerd[1990]: time="2025-12-12T18:39:57.276066224Z" level=info msg="Container ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:57.276355 containerd[1990]: time="2025-12-12T18:39:57.276324099Z" level=info msg="Container b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:57.277012 containerd[1990]: time="2025-12-12T18:39:57.276870720Z" level=info msg="Container 58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:57.300798 containerd[1990]: time="2025-12-12T18:39:57.300748982Z" level=info msg="CreateContainer within sandbox \"068c2a9343d6032af57f6009976296c7e55ff733009dac150957d1fd159e9fd8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee\"" Dec 12 18:39:57.303037 containerd[1990]: time="2025-12-12T18:39:57.302185119Z" 
level=info msg="CreateContainer within sandbox \"bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc\"" Dec 12 18:39:57.303037 containerd[1990]: time="2025-12-12T18:39:57.302400074Z" level=info msg="StartContainer for \"ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee\"" Dec 12 18:39:57.304246 containerd[1990]: time="2025-12-12T18:39:57.304182008Z" level=info msg="connecting to shim ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee" address="unix:///run/containerd/s/26dced8ea666f94dd987dbe9d04a94306d5c9e4da90e288d2bda259c378bdb35" protocol=ttrpc version=3 Dec 12 18:39:57.305581 containerd[1990]: time="2025-12-12T18:39:57.305367072Z" level=info msg="StartContainer for \"b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc\"" Dec 12 18:39:57.306303 containerd[1990]: time="2025-12-12T18:39:57.306278148Z" level=info msg="CreateContainer within sandbox \"cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72\"" Dec 12 18:39:57.306626 containerd[1990]: time="2025-12-12T18:39:57.306599767Z" level=info msg="connecting to shim b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc" address="unix:///run/containerd/s/e5516a5355ea4034f362be41a02cb87caec5c9caac0b7e0cc242e697d8bdf75f" protocol=ttrpc version=3 Dec 12 18:39:57.307166 containerd[1990]: time="2025-12-12T18:39:57.307147278Z" level=info msg="StartContainer for \"58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72\"" Dec 12 18:39:57.308576 containerd[1990]: time="2025-12-12T18:39:57.308443955Z" level=info msg="connecting to shim 58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72" 
address="unix:///run/containerd/s/d27da8adcce9186c7635810e9e49df4fc4a087b4d340e6a97126e3a75eeb6679" protocol=ttrpc version=3 Dec 12 18:39:57.333783 kubelet[2958]: I1212 18:39:57.333727 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" Dec 12 18:39:57.336238 kubelet[2958]: E1212 18:39:57.334534 2958 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" Dec 12 18:39:57.337926 systemd[1]: Started cri-containerd-b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc.scope - libcontainer container b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc. Dec 12 18:39:57.359832 systemd[1]: Started cri-containerd-ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee.scope - libcontainer container ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee. Dec 12 18:39:57.368005 systemd[1]: Started cri-containerd-58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72.scope - libcontainer container 58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72. 
Dec 12 18:39:57.480258 containerd[1990]: time="2025-12-12T18:39:57.480207005Z" level=info msg="StartContainer for \"b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc\" returns successfully" Dec 12 18:39:57.482579 containerd[1990]: time="2025-12-12T18:39:57.482528988Z" level=info msg="StartContainer for \"ebe7b58e9925df5864c8595c9c89aa55899f731a168b31afa6748348d120e9ee\" returns successfully" Dec 12 18:39:57.493676 containerd[1990]: time="2025-12-12T18:39:57.493519196Z" level=info msg="StartContainer for \"58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72\" returns successfully" Dec 12 18:39:57.749433 kubelet[2958]: E1212 18:39:57.749322 2958 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:39:57.792247 kubelet[2958]: E1212 18:39:57.792161 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:57.793823 kubelet[2958]: E1212 18:39:57.793796 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:57.801708 kubelet[2958]: E1212 18:39:57.801034 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:58.713196 kubelet[2958]: E1212 18:39:58.713132 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 
172.31.29.16:6443: connect: connection refused" interval="3.2s" Dec 12 18:39:58.803841 kubelet[2958]: E1212 18:39:58.802148 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:58.805203 kubelet[2958]: E1212 18:39:58.805028 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:58.937442 kubelet[2958]: I1212 18:39:58.937181 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" Dec 12 18:39:58.937732 kubelet[2958]: E1212 18:39:58.937709 2958 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" Dec 12 18:39:59.099258 kubelet[2958]: E1212 18:39:59.099206 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:39:59.539446 kubelet[2958]: E1212 18:39:59.539166 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:59.571266 kubelet[2958]: E1212 18:39:59.571217 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-16&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:39:59.804313 kubelet[2958]: E1212 
18:39:59.804276 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:39:59.809113 kubelet[2958]: E1212 18:39:59.809016 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:40:00.255500 kubelet[2958]: E1212 18:40:00.254858 2958 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:40:01.831800 kubelet[2958]: E1212 18:40:01.831715 2958 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:40:01.916647 kubelet[2958]: E1212 18:40:01.915463 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="6.4s" Dec 12 18:40:02.145644 kubelet[2958]: I1212 18:40:02.145420 2958 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" Dec 12 18:40:02.157212 kubelet[2958]: E1212 18:40:02.147725 2958 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" Dec 12 18:40:03.944175 update_engine[1959]: I20251212 18:40:03.943403 1959 update_attempter.cc:509] Updating boot flags... Dec 12 18:40:04.204302 kubelet[2958]: E1212 18:40:04.204171 2958 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:40:05.828674 kubelet[2958]: E1212 18:40:05.828642 2958 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-16\" not found" Dec 12 18:40:06.652568 kubelet[2958]: I1212 18:40:06.652474 2958 apiserver.go:52] "Watching apiserver" Dec 12 18:40:06.692009 kubelet[2958]: I1212 18:40:06.691974 2958 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:40:06.900300 kubelet[2958]: E1212 18:40:06.900269 2958 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-16" not found Dec 12 18:40:07.273979 kubelet[2958]: E1212 18:40:07.273938 2958 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-16" not found Dec 12 18:40:07.738583 kubelet[2958]: E1212 18:40:07.738491 2958 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-16" not found Dec 12 18:40:08.322594 kubelet[2958]: E1212 18:40:08.322527 2958 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" Dec 12 18:40:08.550593 kubelet[2958]: I1212 18:40:08.550374 2958 kubelet_node_status.go:75] "Attempting to 
register node" node="ip-172-31-29-16" Dec 12 18:40:08.556087 kubelet[2958]: I1212 18:40:08.556051 2958 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-16" Dec 12 18:40:08.592722 kubelet[2958]: I1212 18:40:08.592588 2958 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16" Dec 12 18:40:08.607378 kubelet[2958]: I1212 18:40:08.607342 2958 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-16" Dec 12 18:40:08.614839 kubelet[2958]: I1212 18:40:08.614783 2958 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16" Dec 12 18:40:08.949647 kubelet[2958]: I1212 18:40:08.948958 2958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-16" podStartSLOduration=0.948937099 podStartE2EDuration="948.937099ms" podCreationTimestamp="2025-12-12 18:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:08.948919583 +0000 UTC m=+14.614989877" watchObservedRunningTime="2025-12-12 18:40:08.948937099 +0000 UTC m=+14.615007376" Dec 12 18:40:08.969493 kubelet[2958]: I1212 18:40:08.969325 2958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-16" podStartSLOduration=0.969310357 podStartE2EDuration="969.310357ms" podCreationTimestamp="2025-12-12 18:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:08.969265476 +0000 UTC m=+14.635335765" watchObservedRunningTime="2025-12-12 18:40:08.969310357 +0000 UTC m=+14.635380652" Dec 12 18:40:08.982037 kubelet[2958]: I1212 18:40:08.981985 2958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-29-16" podStartSLOduration=0.981973141 podStartE2EDuration="981.973141ms" podCreationTimestamp="2025-12-12 18:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:08.981730045 +0000 UTC m=+14.647800341" watchObservedRunningTime="2025-12-12 18:40:08.981973141 +0000 UTC m=+14.648043433" Dec 12 18:40:09.312677 systemd[1]: Reload requested from client PID 3423 ('systemctl') (unit session-9.scope)... Dec 12 18:40:09.312696 systemd[1]: Reloading... Dec 12 18:40:09.420595 zram_generator::config[3467]: No configuration found. Dec 12 18:40:09.731893 systemd[1]: Reloading finished in 418 ms. Dec 12 18:40:09.763126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:40:09.777382 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:40:09.777637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:40:09.777688 systemd[1]: kubelet.service: Consumed 1.917s CPU time, 121.5M memory peak. Dec 12 18:40:09.780660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:40:10.062476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:40:10.074091 (kubelet)[3527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:40:10.169075 kubelet[3527]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:40:10.169680 kubelet[3527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:40:10.172870 kubelet[3527]: I1212 18:40:10.172136 3527 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:40:10.182631 kubelet[3527]: I1212 18:40:10.182600 3527 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:40:10.182772 kubelet[3527]: I1212 18:40:10.182764 3527 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:40:10.185128 kubelet[3527]: I1212 18:40:10.185085 3527 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:40:10.185128 kubelet[3527]: I1212 18:40:10.185118 3527 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:40:10.185542 kubelet[3527]: I1212 18:40:10.185513 3527 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:40:10.191250 kubelet[3527]: I1212 18:40:10.191209 3527 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:40:10.194073 kubelet[3527]: I1212 18:40:10.193842 3527 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:40:10.206826 kubelet[3527]: I1212 18:40:10.206797 3527 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:40:10.212599 kubelet[3527]: I1212 18:40:10.212472 3527 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:40:10.215585 kubelet[3527]: I1212 18:40:10.215272 3527 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:40:10.215585 kubelet[3527]: I1212 18:40:10.215323 3527 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:40:10.215585 kubelet[3527]: I1212 18:40:10.215483 3527 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:40:10.215585 kubelet[3527]: I1212 18:40:10.215493 3527 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:40:10.216010 kubelet[3527]: I1212 18:40:10.215525 3527 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:40:10.219058 kubelet[3527]: I1212 18:40:10.219030 3527 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:40:10.232571 kubelet[3527]: I1212 18:40:10.230694 3527 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:40:10.232571 kubelet[3527]: I1212 18:40:10.230747 3527 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:40:10.232571 kubelet[3527]: I1212 18:40:10.230777 3527 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:40:10.232571 kubelet[3527]: I1212 18:40:10.230801 3527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:40:10.236165 kubelet[3527]: I1212 18:40:10.236125 3527 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:40:10.236843 kubelet[3527]: I1212 18:40:10.236816 3527 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:40:10.236944 kubelet[3527]: I1212 18:40:10.236864 3527 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:40:10.250422 kubelet[3527]: I1212 18:40:10.249698 3527 server.go:1262] "Started kubelet" Dec 12 18:40:10.254235 kubelet[3527]: I1212 18:40:10.253540 3527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:40:10.269102 kubelet[3527]: I1212 18:40:10.269073 3527 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 
18:40:10.269817 kubelet[3527]: I1212 18:40:10.269779 3527 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:40:10.280575 kubelet[3527]: I1212 18:40:10.276694 3527 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:40:10.281670 kubelet[3527]: I1212 18:40:10.281633 3527 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:40:10.283038 kubelet[3527]: I1212 18:40:10.282886 3527 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:40:10.296427 kubelet[3527]: I1212 18:40:10.296187 3527 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:40:10.299577 kubelet[3527]: E1212 18:40:10.296857 3527 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found" Dec 12 18:40:10.302580 kubelet[3527]: I1212 18:40:10.300054 3527 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:40:10.304949 kubelet[3527]: I1212 18:40:10.304251 3527 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:40:10.306569 kubelet[3527]: I1212 18:40:10.306168 3527 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:40:10.306569 kubelet[3527]: I1212 18:40:10.306316 3527 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:40:10.313407 kubelet[3527]: E1212 18:40:10.311451 3527 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:40:10.313407 kubelet[3527]: I1212 18:40:10.312719 3527 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:40:10.314625 kubelet[3527]: I1212 18:40:10.314601 3527 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:40:10.321202 kubelet[3527]: I1212 18:40:10.321059 3527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 18:40:10.324673 kubelet[3527]: I1212 18:40:10.324611 3527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 18:40:10.324673 kubelet[3527]: I1212 18:40:10.324673 3527 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:40:10.324865 kubelet[3527]: I1212 18:40:10.324715 3527 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:40:10.324865 kubelet[3527]: E1212 18:40:10.324770 3527 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:40:10.365467 sudo[3560]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 12 18:40:10.366665 sudo[3560]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 12 18:40:10.425201 kubelet[3527]: E1212 18:40:10.424826 3527 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:40:10.425527 kubelet[3527]: I1212 18:40:10.425380 3527 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:40:10.425527 kubelet[3527]: I1212 18:40:10.425397 3527 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:40:10.425527 kubelet[3527]: I1212 18:40:10.425420 3527 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425645 3527 state_mem.go:88] "Updated default 
CPUSet" cpuSet="" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425658 3527 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425681 3527 policy_none.go:49] "None policy: Start" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425693 3527 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425706 3527 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425822 3527 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 18:40:10.425933 kubelet[3527]: I1212 18:40:10.425833 3527 policy_none.go:47] "Start" Dec 12 18:40:10.439046 kubelet[3527]: E1212 18:40:10.439010 3527 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:40:10.439236 kubelet[3527]: I1212 18:40:10.439214 3527 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:40:10.439317 kubelet[3527]: I1212 18:40:10.439235 3527 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:40:10.446576 kubelet[3527]: I1212 18:40:10.446283 3527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:40:10.447213 kubelet[3527]: E1212 18:40:10.447183 3527 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime"
Dec 12 18:40:10.568026 kubelet[3527]: I1212 18:40:10.567831 3527 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16"
Dec 12 18:40:10.592958 kubelet[3527]: I1212 18:40:10.591440 3527 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-16"
Dec 12 18:40:10.592958 kubelet[3527]: I1212 18:40:10.591623 3527 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-16"
Dec 12 18:40:10.625992 kubelet[3527]: I1212 18:40:10.625953 3527 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:10.627411 kubelet[3527]: I1212 18:40:10.626721 3527 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16"
Dec 12 18:40:10.636353 kubelet[3527]: I1212 18:40:10.636320 3527 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.654417 kubelet[3527]: E1212 18:40:10.654361 3527 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-16\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.659685 kubelet[3527]: E1212 18:40:10.658567 3527 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-16\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:10.659685 kubelet[3527]: E1212 18:40:10.658641 3527 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-16\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-16"
Dec 12 18:40:10.723721 kubelet[3527]: I1212 18:40:10.723570 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.723879 kubelet[3527]: I1212 18:40:10.723747 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/215748cac87e0862edbd20ec9758273a-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-16\" (UID: \"215748cac87e0862edbd20ec9758273a\") " pod="kube-system/kube-scheduler-ip-172-31-29-16"
Dec 12 18:40:10.723879 kubelet[3527]: I1212 18:40:10.723772 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-ca-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:10.723879 kubelet[3527]: I1212 18:40:10.723797 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:10.723879 kubelet[3527]: I1212 18:40:10.723835 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.723879 kubelet[3527]: I1212 18:40:10.723859 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e5fb53e6a3403fc6af39bdc769a1fee-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"1e5fb53e6a3403fc6af39bdc769a1fee\") " pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:10.724061 kubelet[3527]: I1212 18:40:10.723880 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.724061 kubelet[3527]: I1212 18:40:10.723901 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.724061 kubelet[3527]: I1212 18:40:10.723924 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0815ce36621f1f634b8a5cfe6736bb10-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"0815ce36621f1f634b8a5cfe6736bb10\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16"
Dec 12 18:40:10.908064 sudo[3560]: pam_unix(sudo:session): session closed for user root
Dec 12 18:40:11.239294 kubelet[3527]: I1212 18:40:11.239187 3527 apiserver.go:52] "Watching apiserver"
Dec 12 18:40:11.299571 kubelet[3527]: I1212 18:40:11.299522 3527 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 18:40:11.381891 kubelet[3527]: I1212 18:40:11.381818 3527 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:11.403573 kubelet[3527]: E1212 18:40:11.402650 3527 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-16\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-16"
Dec 12 18:40:12.595420 sudo[2390]: pam_unix(sudo:session): session closed for user root
Dec 12 18:40:12.618975 sshd[2389]: Connection closed by 139.178.89.65 port 60954
Dec 12 18:40:12.620015 sshd-session[2384]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:12.624820 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:40:12.625573 systemd[1]: sshd@8-172.31.29.16:22-139.178.89.65:60954.service: Deactivated successfully.
Dec 12 18:40:12.628355 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:40:12.628980 systemd[1]: session-9.scope: Consumed 5.994s CPU time, 214.1M memory peak.
Dec 12 18:40:12.633333 systemd-logind[1957]: Removed session 9.
Dec 12 18:40:15.304758 systemd[1]: Created slice kubepods-besteffort-podd45f6e21_3b4b_416a_850b_2e953056eadf.slice - libcontainer container kubepods-besteffort-podd45f6e21_3b4b_416a_850b_2e953056eadf.slice.
Dec 12 18:40:15.312511 kubelet[3527]: I1212 18:40:15.312303 3527 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 18:40:15.316394 containerd[1990]: time="2025-12-12T18:40:15.314400178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 18:40:15.320184 kubelet[3527]: I1212 18:40:15.318745 3527 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 18:40:15.322707 systemd[1]: Created slice kubepods-burstable-pod7a56f13a_5e20_487d_81bf_b0aa72ecd87b.slice - libcontainer container kubepods-burstable-pod7a56f13a_5e20_487d_81bf_b0aa72ecd87b.slice.
Dec 12 18:40:15.354961 kubelet[3527]: I1212 18:40:15.354916 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d45f6e21-3b4b-416a-850b-2e953056eadf-kube-proxy\") pod \"kube-proxy-jmrtj\" (UID: \"d45f6e21-3b4b-416a-850b-2e953056eadf\") " pod="kube-system/kube-proxy-jmrtj"
Dec 12 18:40:15.355141 kubelet[3527]: I1212 18:40:15.354970 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8vk\" (UniqueName: \"kubernetes.io/projected/d45f6e21-3b4b-416a-850b-2e953056eadf-kube-api-access-ss8vk\") pod \"kube-proxy-jmrtj\" (UID: \"d45f6e21-3b4b-416a-850b-2e953056eadf\") " pod="kube-system/kube-proxy-jmrtj"
Dec 12 18:40:15.355141 kubelet[3527]: I1212 18:40:15.354999 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-bpf-maps\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355141 kubelet[3527]: I1212 18:40:15.355021 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-xtables-lock\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355141 kubelet[3527]: I1212 18:40:15.355041 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-kernel\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355141 kubelet[3527]: I1212 18:40:15.355059 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d45f6e21-3b4b-416a-850b-2e953056eadf-xtables-lock\") pod \"kube-proxy-jmrtj\" (UID: \"d45f6e21-3b4b-416a-850b-2e953056eadf\") " pod="kube-system/kube-proxy-jmrtj"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355078 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d45f6e21-3b4b-416a-850b-2e953056eadf-lib-modules\") pod \"kube-proxy-jmrtj\" (UID: \"d45f6e21-3b4b-416a-850b-2e953056eadf\") " pod="kube-system/kube-proxy-jmrtj"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355102 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hostproc\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355134 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-etc-cni-netd\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355154 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-lib-modules\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355178 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-clustermesh-secrets\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355367 kubelet[3527]: I1212 18:40:15.355202 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-config-path\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355230 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwhc\" (UniqueName: \"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355265 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-run\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355287 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-cgroup\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355314 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cni-path\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355338 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-net\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.355877 kubelet[3527]: I1212 18:40:15.355363 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hubble-tls\") pod \"cilium-ztnps\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " pod="kube-system/cilium-ztnps"
Dec 12 18:40:15.472146 kubelet[3527]: E1212 18:40:15.471993 3527 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 12 18:40:15.472146 kubelet[3527]: E1212 18:40:15.472021 3527 projected.go:196] Error preparing data for projected volume kube-api-access-ss8vk for pod kube-system/kube-proxy-jmrtj: configmap "kube-root-ca.crt" not found
Dec 12 18:40:15.476378 kubelet[3527]: E1212 18:40:15.475288 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d45f6e21-3b4b-416a-850b-2e953056eadf-kube-api-access-ss8vk podName:d45f6e21-3b4b-416a-850b-2e953056eadf nodeName:}" failed. No retries permitted until 2025-12-12 18:40:15.972063532 +0000 UTC m=+5.887843741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ss8vk" (UniqueName: "kubernetes.io/projected/d45f6e21-3b4b-416a-850b-2e953056eadf-kube-api-access-ss8vk") pod "kube-proxy-jmrtj" (UID: "d45f6e21-3b4b-416a-850b-2e953056eadf") : configmap "kube-root-ca.crt" not found
Dec 12 18:40:15.476378 kubelet[3527]: E1212 18:40:15.476163 3527 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 12 18:40:15.476378 kubelet[3527]: E1212 18:40:15.476280 3527 projected.go:196] Error preparing data for projected volume kube-api-access-hvwhc for pod kube-system/cilium-ztnps: configmap "kube-root-ca.crt" not found
Dec 12 18:40:15.476378 kubelet[3527]: E1212 18:40:15.476340 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc podName:7a56f13a-5e20-487d-81bf-b0aa72ecd87b nodeName:}" failed. No retries permitted until 2025-12-12 18:40:15.976323774 +0000 UTC m=+5.892103969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hvwhc" (UniqueName: "kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc") pod "cilium-ztnps" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b") : configmap "kube-root-ca.crt" not found
Dec 12 18:40:16.224355 containerd[1990]: time="2025-12-12T18:40:16.224311078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmrtj,Uid:d45f6e21-3b4b-416a-850b-2e953056eadf,Namespace:kube-system,Attempt:0,}"
Dec 12 18:40:16.235206 containerd[1990]: time="2025-12-12T18:40:16.235153390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztnps,Uid:7a56f13a-5e20-487d-81bf-b0aa72ecd87b,Namespace:kube-system,Attempt:0,}"
Dec 12 18:40:16.254087 systemd[1]: Created slice kubepods-besteffort-pod90a7f4fa_f4e9_41b9_af6a_ededbb9f8827.slice - libcontainer container kubepods-besteffort-pod90a7f4fa_f4e9_41b9_af6a_ededbb9f8827.slice.
Dec 12 18:40:16.302646 containerd[1990]: time="2025-12-12T18:40:16.302568607Z" level=info msg="connecting to shim 4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99" address="unix:///run/containerd/s/e1669af4f15b05a08d2635de4bf02e1a73c8bb5af0c5c937cc6194d908b0433e" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:40:16.310402 containerd[1990]: time="2025-12-12T18:40:16.310348537Z" level=info msg="connecting to shim cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:40:16.345765 systemd[1]: Started cri-containerd-4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99.scope - libcontainer container 4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99.
Dec 12 18:40:16.350940 systemd[1]: Started cri-containerd-cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae.scope - libcontainer container cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae.
Dec 12 18:40:16.378691 kubelet[3527]: I1212 18:40:16.378477 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-wc5hz\" (UID: \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\") " pod="kube-system/cilium-operator-6f9c7c5859-wc5hz"
Dec 12 18:40:16.378691 kubelet[3527]: I1212 18:40:16.378533 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm8m\" (UniqueName: \"kubernetes.io/projected/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-kube-api-access-gsm8m\") pod \"cilium-operator-6f9c7c5859-wc5hz\" (UID: \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\") " pod="kube-system/cilium-operator-6f9c7c5859-wc5hz"
Dec 12 18:40:16.410793 containerd[1990]: time="2025-12-12T18:40:16.410752809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmrtj,Uid:d45f6e21-3b4b-416a-850b-2e953056eadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99\""
Dec 12 18:40:16.415951 containerd[1990]: time="2025-12-12T18:40:16.415900374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztnps,Uid:7a56f13a-5e20-487d-81bf-b0aa72ecd87b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\""
Dec 12 18:40:16.417592 containerd[1990]: time="2025-12-12T18:40:16.417565712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 12 18:40:16.421022 containerd[1990]: time="2025-12-12T18:40:16.420981393Z" level=info msg="CreateContainer within sandbox \"4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:40:16.462578 containerd[1990]: time="2025-12-12T18:40:16.461216719Z" level=info msg="Container 71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:16.476929 containerd[1990]: time="2025-12-12T18:40:16.476812351Z" level=info msg="CreateContainer within sandbox \"4a0a43f9d140fe4849a4de5085966e051b93b7fabed63b5b4759ecbc973bdb99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a\""
Dec 12 18:40:16.477984 containerd[1990]: time="2025-12-12T18:40:16.477950525Z" level=info msg="StartContainer for \"71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a\""
Dec 12 18:40:16.480310 containerd[1990]: time="2025-12-12T18:40:16.479368181Z" level=info msg="connecting to shim 71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a" address="unix:///run/containerd/s/e1669af4f15b05a08d2635de4bf02e1a73c8bb5af0c5c937cc6194d908b0433e" protocol=ttrpc version=3
Dec 12 18:40:16.528269 systemd[1]: Started cri-containerd-71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a.scope - libcontainer container 71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a.
Dec 12 18:40:16.593386 containerd[1990]: time="2025-12-12T18:40:16.593339371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wc5hz,Uid:90a7f4fa-f4e9-41b9-af6a-ededbb9f8827,Namespace:kube-system,Attempt:0,}"
Dec 12 18:40:16.630828 containerd[1990]: time="2025-12-12T18:40:16.630753451Z" level=info msg="StartContainer for \"71b352105f17156681f5c7fd7742967831b1677e6fe9a816d551651b6956179a\" returns successfully"
Dec 12 18:40:16.640856 containerd[1990]: time="2025-12-12T18:40:16.640781446Z" level=info msg="connecting to shim d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317" address="unix:///run/containerd/s/5021866a67e3543f563f34d8e51bd9958b3eb8023022d80e28013ee569dbaf9c" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:40:16.678842 systemd[1]: Started cri-containerd-d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317.scope - libcontainer container d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317.
Dec 12 18:40:16.745107 containerd[1990]: time="2025-12-12T18:40:16.744991859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-wc5hz,Uid:90a7f4fa-f4e9-41b9-af6a-ededbb9f8827,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\""
Dec 12 18:40:17.424131 kubelet[3527]: I1212 18:40:17.423419 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmrtj" podStartSLOduration=2.423396131 podStartE2EDuration="2.423396131s" podCreationTimestamp="2025-12-12 18:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:17.422987566 +0000 UTC m=+7.338767796" watchObservedRunningTime="2025-12-12 18:40:17.423396131 +0000 UTC m=+7.339176347"
Dec 12 18:40:21.751855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186728724.mount: Deactivated successfully.
Dec 12 18:40:24.381270 containerd[1990]: time="2025-12-12T18:40:24.381191085Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:40:24.383049 containerd[1990]: time="2025-12-12T18:40:24.383013837Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Dec 12 18:40:24.420212 containerd[1990]: time="2025-12-12T18:40:24.419898563Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:40:24.421597 containerd[1990]: time="2025-12-12T18:40:24.421538620Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.003941009s"
Dec 12 18:40:24.421597 containerd[1990]: time="2025-12-12T18:40:24.421594502Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 12 18:40:24.431967 containerd[1990]: time="2025-12-12T18:40:24.431837654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 12 18:40:24.438163 containerd[1990]: time="2025-12-12T18:40:24.438117843Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 18:40:24.484576 containerd[1990]: time="2025-12-12T18:40:24.481634566Z" level=info msg="Container b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:24.518203 containerd[1990]: time="2025-12-12T18:40:24.518153479Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\""
Dec 12 18:40:24.520683 containerd[1990]: time="2025-12-12T18:40:24.520642843Z" level=info msg="StartContainer for \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\""
Dec 12 18:40:24.521853 containerd[1990]: time="2025-12-12T18:40:24.521795821Z" level=info msg="connecting to shim b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" protocol=ttrpc version=3
Dec 12 18:40:24.572147 systemd[1]: Started cri-containerd-b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c.scope - libcontainer container b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c.
Dec 12 18:40:24.641478 containerd[1990]: time="2025-12-12T18:40:24.641278445Z" level=info msg="StartContainer for \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" returns successfully"
Dec 12 18:40:24.654431 systemd[1]: cri-containerd-b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c.scope: Deactivated successfully.
Dec 12 18:40:24.677285 containerd[1990]: time="2025-12-12T18:40:24.677208383Z" level=info msg="received container exit event container_id:\"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" id:\"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" pid:3959 exited_at:{seconds:1765564824 nanos:658627853}"
Dec 12 18:40:24.709806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c-rootfs.mount: Deactivated successfully.
Dec 12 18:40:25.467320 containerd[1990]: time="2025-12-12T18:40:25.467215696Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 18:40:25.483865 containerd[1990]: time="2025-12-12T18:40:25.483798967Z" level=info msg="Container e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:25.492457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652793147.mount: Deactivated successfully.
Dec 12 18:40:25.505911 containerd[1990]: time="2025-12-12T18:40:25.505744922Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\""
Dec 12 18:40:25.508036 containerd[1990]: time="2025-12-12T18:40:25.507987262Z" level=info msg="StartContainer for \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\""
Dec 12 18:40:25.509303 containerd[1990]: time="2025-12-12T18:40:25.509260842Z" level=info msg="connecting to shim e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" protocol=ttrpc version=3
Dec 12 18:40:25.540187 systemd[1]: Started cri-containerd-e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4.scope - libcontainer container e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4.
Dec 12 18:40:25.590842 containerd[1990]: time="2025-12-12T18:40:25.590782808Z" level=info msg="StartContainer for \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" returns successfully"
Dec 12 18:40:25.607101 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:40:25.607414 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:40:25.608369 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:40:25.612667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:40:25.615284 containerd[1990]: time="2025-12-12T18:40:25.615048499Z" level=info msg="received container exit event container_id:\"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" id:\"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" pid:4005 exited_at:{seconds:1765564825 nanos:613217288}"
Dec 12 18:40:25.616173 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:40:25.617053 systemd[1]: cri-containerd-e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4.scope: Deactivated successfully.
Dec 12 18:40:25.660592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:40:26.474376 containerd[1990]: time="2025-12-12T18:40:26.472794376Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 18:40:26.486218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4-rootfs.mount: Deactivated successfully.
Dec 12 18:40:26.515718 containerd[1990]: time="2025-12-12T18:40:26.514701589Z" level=info msg="Container 56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:26.527427 containerd[1990]: time="2025-12-12T18:40:26.527386799Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\""
Dec 12 18:40:26.529219 containerd[1990]: time="2025-12-12T18:40:26.528691377Z" level=info msg="StartContainer for \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\""
Dec 12 18:40:26.532467 containerd[1990]: time="2025-12-12T18:40:26.532430284Z" level=info msg="connecting to shim 56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" protocol=ttrpc version=3
Dec 12 18:40:26.591102 systemd[1]: Started cri-containerd-56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12.scope - libcontainer container 56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12.
Dec 12 18:40:26.690856 containerd[1990]: time="2025-12-12T18:40:26.690810057Z" level=info msg="StartContainer for \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" returns successfully"
Dec 12 18:40:26.772393 containerd[1990]: time="2025-12-12T18:40:26.772258990Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:40:26.773582 containerd[1990]: time="2025-12-12T18:40:26.773523631Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Dec 12 18:40:26.775415 containerd[1990]: time="2025-12-12T18:40:26.775132697Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:40:26.776998 containerd[1990]: time="2025-12-12T18:40:26.776957899Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.345074418s"
Dec 12 18:40:26.777104 containerd[1990]: time="2025-12-12T18:40:26.776999530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 12 18:40:26.782473 containerd[1990]: time="2025-12-12T18:40:26.782432295Z" level=info msg="CreateContainer within sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 18:40:26.793147 containerd[1990]: time="2025-12-12T18:40:26.793094853Z" level=info msg="Container a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:26.802671 systemd[1]: cri-containerd-56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12.scope: Deactivated successfully.
Dec 12 18:40:26.803479 systemd[1]: cri-containerd-56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12.scope: Consumed 36ms CPU time, 5.8M memory peak, 1M read from disk.
Dec 12 18:40:26.808420 containerd[1990]: time="2025-12-12T18:40:26.808369962Z" level=info msg="CreateContainer within sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\""
Dec 12 18:40:26.810878 containerd[1990]: time="2025-12-12T18:40:26.810838201Z" level=info msg="received container exit event container_id:\"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" id:\"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" pid:4066 exited_at:{seconds:1765564826 nanos:809891995}"
Dec 12 18:40:26.811294 containerd[1990]: time="2025-12-12T18:40:26.811128888Z" level=info msg="StartContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\""
Dec 12 18:40:26.813757 containerd[1990]: time="2025-12-12T18:40:26.813567980Z" level=info msg="connecting to shim a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9" address="unix:///run/containerd/s/5021866a67e3543f563f34d8e51bd9958b3eb8023022d80e28013ee569dbaf9c" protocol=ttrpc version=3
Dec 12 18:40:26.846774 systemd[1]: Started cri-containerd-a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9.scope - libcontainer container a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9.
Dec 12 18:40:26.905613 containerd[1990]: time="2025-12-12T18:40:26.905532722Z" level=info msg="StartContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" returns successfully"
Dec 12 18:40:27.493312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12-rootfs.mount: Deactivated successfully.
Dec 12 18:40:27.512758 containerd[1990]: time="2025-12-12T18:40:27.512712536Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 18:40:27.528561 containerd[1990]: time="2025-12-12T18:40:27.527700288Z" level=info msg="Container d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:27.534426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331606666.mount: Deactivated successfully.
Dec 12 18:40:27.544390 containerd[1990]: time="2025-12-12T18:40:27.544341457Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\""
Dec 12 18:40:27.545671 containerd[1990]: time="2025-12-12T18:40:27.545016724Z" level=info msg="StartContainer for \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\""
Dec 12 18:40:27.546119 containerd[1990]: time="2025-12-12T18:40:27.546085886Z" level=info msg="connecting to shim d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" protocol=ttrpc version=3
Dec 12 18:40:27.605806 systemd[1]: Started cri-containerd-d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d.scope - libcontainer container d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d.
Dec 12 18:40:27.743066 containerd[1990]: time="2025-12-12T18:40:27.743010629Z" level=info msg="StartContainer for \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" returns successfully"
Dec 12 18:40:27.773119 systemd[1]: cri-containerd-d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d.scope: Deactivated successfully.
Dec 12 18:40:27.777010 containerd[1990]: time="2025-12-12T18:40:27.776546160Z" level=info msg="received container exit event container_id:\"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" id:\"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" pid:4144 exited_at:{seconds:1765564827 nanos:776190840}"
Dec 12 18:40:27.819483 kubelet[3527]: I1212 18:40:27.819408 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-wc5hz" podStartSLOduration=1.788325419 podStartE2EDuration="11.819373944s" podCreationTimestamp="2025-12-12 18:40:16 +0000 UTC" firstStartedPulling="2025-12-12 18:40:16.746684857 +0000 UTC m=+6.662465048" lastFinishedPulling="2025-12-12 18:40:26.777733381 +0000 UTC m=+16.693513573" observedRunningTime="2025-12-12 18:40:27.633492421 +0000 UTC m=+17.549272634" watchObservedRunningTime="2025-12-12 18:40:27.819373944 +0000 UTC m=+17.735154158"
Dec 12 18:40:27.832204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d-rootfs.mount: Deactivated successfully.
Dec 12 18:40:28.515899 containerd[1990]: time="2025-12-12T18:40:28.515856211Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 18:40:28.535698 containerd[1990]: time="2025-12-12T18:40:28.535577776Z" level=info msg="Container b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:28.549639 containerd[1990]: time="2025-12-12T18:40:28.549592947Z" level=info msg="CreateContainer within sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\""
Dec 12 18:40:28.551020 containerd[1990]: time="2025-12-12T18:40:28.550681493Z" level=info msg="StartContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\""
Dec 12 18:40:28.552128 containerd[1990]: time="2025-12-12T18:40:28.552005262Z" level=info msg="connecting to shim b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f" address="unix:///run/containerd/s/36ea1f55c69898fc55148995b8261b5e965c00ca7a3e5e6ea1986959c8b9811d" protocol=ttrpc version=3
Dec 12 18:40:28.578811 systemd[1]: Started cri-containerd-b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f.scope - libcontainer container b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f.
Dec 12 18:40:28.653912 containerd[1990]: time="2025-12-12T18:40:28.653867340Z" level=info msg="StartContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" returns successfully"
Dec 12 18:40:29.196940 kubelet[3527]: I1212 18:40:29.196858 3527 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 12 18:40:29.276278 systemd[1]: Created slice kubepods-burstable-pod20a29317_4eda_4d06_b000_6b7e0f3e4b71.slice - libcontainer container kubepods-burstable-pod20a29317_4eda_4d06_b000_6b7e0f3e4b71.slice.
Dec 12 18:40:29.288636 systemd[1]: Created slice kubepods-burstable-pod939de082_c1c0_4a5d_8057_21433eb39010.slice - libcontainer container kubepods-burstable-pod939de082_c1c0_4a5d_8057_21433eb39010.slice.
Dec 12 18:40:29.396674 kubelet[3527]: I1212 18:40:29.396619 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a29317-4eda-4d06-b000-6b7e0f3e4b71-config-volume\") pod \"coredns-66bc5c9577-ltxvv\" (UID: \"20a29317-4eda-4d06-b000-6b7e0f3e4b71\") " pod="kube-system/coredns-66bc5c9577-ltxvv"
Dec 12 18:40:29.396674 kubelet[3527]: I1212 18:40:29.396665 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8tsn\" (UniqueName: \"kubernetes.io/projected/20a29317-4eda-4d06-b000-6b7e0f3e4b71-kube-api-access-g8tsn\") pod \"coredns-66bc5c9577-ltxvv\" (UID: \"20a29317-4eda-4d06-b000-6b7e0f3e4b71\") " pod="kube-system/coredns-66bc5c9577-ltxvv"
Dec 12 18:40:29.396856 kubelet[3527]: I1212 18:40:29.396689 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/939de082-c1c0-4a5d-8057-21433eb39010-config-volume\") pod \"coredns-66bc5c9577-cg725\" (UID: \"939de082-c1c0-4a5d-8057-21433eb39010\") " pod="kube-system/coredns-66bc5c9577-cg725"
Dec 12 18:40:29.396856 kubelet[3527]: I1212 18:40:29.396708 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8b6\" (UniqueName: \"kubernetes.io/projected/939de082-c1c0-4a5d-8057-21433eb39010-kube-api-access-7x8b6\") pod \"coredns-66bc5c9577-cg725\" (UID: \"939de082-c1c0-4a5d-8057-21433eb39010\") " pod="kube-system/coredns-66bc5c9577-cg725"
Dec 12 18:40:29.547369 kubelet[3527]: I1212 18:40:29.546845 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ztnps" podStartSLOduration=6.532506879 podStartE2EDuration="14.546830865s" podCreationTimestamp="2025-12-12 18:40:15 +0000 UTC" firstStartedPulling="2025-12-12 18:40:16.417161703 +0000 UTC m=+6.332941898" lastFinishedPulling="2025-12-12 18:40:24.431485673 +0000 UTC m=+14.347265884" observedRunningTime="2025-12-12 18:40:29.546536931 +0000 UTC m=+19.462317145" watchObservedRunningTime="2025-12-12 18:40:29.546830865 +0000 UTC m=+19.462611078"
Dec 12 18:40:29.585054 containerd[1990]: time="2025-12-12T18:40:29.584976902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltxvv,Uid:20a29317-4eda-4d06-b000-6b7e0f3e4b71,Namespace:kube-system,Attempt:0,}"
Dec 12 18:40:29.599940 containerd[1990]: time="2025-12-12T18:40:29.599862284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cg725,Uid:939de082-c1c0-4a5d-8057-21433eb39010,Namespace:kube-system,Attempt:0,}"
Dec 12 18:40:35.526064 systemd-networkd[1856]: cilium_host: Link UP
Dec 12 18:40:35.526851 systemd-networkd[1856]: cilium_net: Link UP
Dec 12 18:40:35.527570 systemd-networkd[1856]: cilium_net: Gained carrier
Dec 12 18:40:35.527820 systemd-networkd[1856]: cilium_host: Gained carrier
Dec 12 18:40:35.528870 (udev-worker)[4271]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:40:35.529962 (udev-worker)[4305]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:40:35.715930 systemd-networkd[1856]: cilium_host: Gained IPv6LL
Dec 12 18:40:35.844823 systemd-networkd[1856]: cilium_vxlan: Link UP
Dec 12 18:40:35.844833 systemd-networkd[1856]: cilium_vxlan: Gained carrier
Dec 12 18:40:35.996170 systemd-networkd[1856]: cilium_net: Gained IPv6LL
Dec 12 18:40:37.459989 systemd-networkd[1856]: cilium_vxlan: Gained IPv6LL
Dec 12 18:40:38.163637 kernel: NET: Registered PF_ALG protocol family
Dec 12 18:40:39.170698 (udev-worker)[4311]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:40:39.193810 systemd-networkd[1856]: lxc_health: Link UP
Dec 12 18:40:39.204029 systemd-networkd[1856]: lxc_health: Gained carrier
Dec 12 18:40:39.678874 systemd-networkd[1856]: lxc9dab143594f5: Link UP
Dec 12 18:40:39.689585 kernel: eth0: renamed from tmp4a66f
Dec 12 18:40:39.692695 systemd-networkd[1856]: lxc9dab143594f5: Gained carrier
Dec 12 18:40:39.711239 (udev-worker)[4312]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:40:39.714143 systemd-networkd[1856]: lxcaeaabb7527e3: Link UP
Dec 12 18:40:39.723576 kernel: eth0: renamed from tmp225ce
Dec 12 18:40:39.724956 systemd-networkd[1856]: lxcaeaabb7527e3: Gained carrier
Dec 12 18:40:40.980641 systemd-networkd[1856]: lxc_health: Gained IPv6LL
Dec 12 18:40:41.044312 systemd-networkd[1856]: lxc9dab143594f5: Gained IPv6LL
Dec 12 18:40:41.364840 systemd-networkd[1856]: lxcaeaabb7527e3: Gained IPv6LL
Dec 12 18:40:42.452334 kubelet[3527]: I1212 18:40:42.452179 3527 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 18:40:44.331806 containerd[1990]: time="2025-12-12T18:40:44.331757191Z" level=info msg="connecting to shim 225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4" address="unix:///run/containerd/s/d99de38964b7864883875321221f53cb6bca3e0ca0fe343c1922fe5a01f22783" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:40:44.336606 containerd[1990]: time="2025-12-12T18:40:44.336532132Z" level=info msg="connecting to shim 4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422" address="unix:///run/containerd/s/3497311b31e7283c1ae8faea7d6a63877bca36691083779dc2eafdaef2419cd3" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:40:44.417052 systemd[1]: Started cri-containerd-4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422.scope - libcontainer container 4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422.
Dec 12 18:40:44.426965 systemd[1]: Started cri-containerd-225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4.scope - libcontainer container 225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4.
Dec 12 18:40:44.520775 containerd[1990]: time="2025-12-12T18:40:44.520717861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltxvv,Uid:20a29317-4eda-4d06-b000-6b7e0f3e4b71,Namespace:kube-system,Attempt:0,} returns sandbox id \"225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4\""
Dec 12 18:40:44.521057 containerd[1990]: time="2025-12-12T18:40:44.520822692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cg725,Uid:939de082-c1c0-4a5d-8057-21433eb39010,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422\""
Dec 12 18:40:44.528778 containerd[1990]: time="2025-12-12T18:40:44.528719912Z" level=info msg="CreateContainer within sandbox \"4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:40:44.530756 containerd[1990]: time="2025-12-12T18:40:44.530637865Z" level=info msg="CreateContainer within sandbox \"225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:40:44.653344 containerd[1990]: time="2025-12-12T18:40:44.652701554Z" level=info msg="Container 4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:44.653613 containerd[1990]: time="2025-12-12T18:40:44.652724142Z" level=info msg="Container db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:40:44.663634 containerd[1990]: time="2025-12-12T18:40:44.662191405Z" level=info msg="CreateContainer within sandbox \"225ced5e64c1f401ccc2a39e220c12a5cf7ef0b0eb78ccc4cddefedb57e6e7b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b\""
Dec 12 18:40:44.663865 containerd[1990]: time="2025-12-12T18:40:44.663777587Z" level=info msg="StartContainer for \"4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b\""
Dec 12 18:40:44.665294 containerd[1990]: time="2025-12-12T18:40:44.665260045Z" level=info msg="connecting to shim 4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b" address="unix:///run/containerd/s/d99de38964b7864883875321221f53cb6bca3e0ca0fe343c1922fe5a01f22783" protocol=ttrpc version=3
Dec 12 18:40:44.666305 containerd[1990]: time="2025-12-12T18:40:44.666210730Z" level=info msg="CreateContainer within sandbox \"4a66fb269bf405cfa97ac98283b191851c95dfcbe86ed1a102ede3182ce32422\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3\""
Dec 12 18:40:44.667279 containerd[1990]: time="2025-12-12T18:40:44.667220940Z" level=info msg="StartContainer for \"db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3\""
Dec 12 18:40:44.669921 containerd[1990]: time="2025-12-12T18:40:44.669801336Z" level=info msg="connecting to shim db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3" address="unix:///run/containerd/s/3497311b31e7283c1ae8faea7d6a63877bca36691083779dc2eafdaef2419cd3" protocol=ttrpc version=3
Dec 12 18:40:44.694820 systemd[1]: Started cri-containerd-4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b.scope - libcontainer container 4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b.
Dec 12 18:40:44.699904 systemd[1]: Started cri-containerd-db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3.scope - libcontainer container db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3.
Dec 12 18:40:44.811836 containerd[1990]: time="2025-12-12T18:40:44.811765914Z" level=info msg="StartContainer for \"db7802fb05f254d022c3491cf807b6f5c18124c0da8a1c68848d407a2920bfa3\" returns successfully"
Dec 12 18:40:44.812165 containerd[1990]: time="2025-12-12T18:40:44.812142265Z" level=info msg="StartContainer for \"4bb9f3ba501870a557b7fac373a8e946c5cfed78c52e60758f2c93033562741b\" returns successfully"
Dec 12 18:40:45.291023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707334614.mount: Deactivated successfully.
Dec 12 18:40:45.291163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904790301.mount: Deactivated successfully.
Dec 12 18:40:45.600078 kubelet[3527]: I1212 18:40:45.597839 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ltxvv" podStartSLOduration=29.597733785 podStartE2EDuration="29.597733785s" podCreationTimestamp="2025-12-12 18:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:45.594289178 +0000 UTC m=+35.510069407" watchObservedRunningTime="2025-12-12 18:40:45.597733785 +0000 UTC m=+35.513514001"
Dec 12 18:40:45.616308 kubelet[3527]: I1212 18:40:45.616254 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cg725" podStartSLOduration=29.616238662 podStartE2EDuration="29.616238662s" podCreationTimestamp="2025-12-12 18:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:45.61590232 +0000 UTC m=+35.531682534" watchObservedRunningTime="2025-12-12 18:40:45.616238662 +0000 UTC m=+35.532018872"
Dec 12 18:40:46.821757 systemd[1]: Started sshd@9-172.31.29.16:22-139.178.89.65:46084.service - OpenSSH per-connection server daemon (139.178.89.65:46084).
Dec 12 18:40:47.038008 sshd[4831]: Accepted publickey for core from 139.178.89.65 port 46084 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:40:47.073809 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:47.080402 systemd-logind[1957]: New session 10 of user core.
Dec 12 18:40:47.085825 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 18:40:47.286265 ntpd[2224]: Listen normally on 6 cilium_host 192.168.0.28:123
Dec 12 18:40:47.286319 ntpd[2224]: Listen normally on 7 cilium_net [fe80::6c66:59ff:fe74:8ecb%4]:123
Dec 12 18:40:47.286350 ntpd[2224]: Listen normally on 8 cilium_host [fe80::98a8:feff:fe48:1295%5]:123
Dec 12 18:40:47.286371 ntpd[2224]: Listen normally on 9 cilium_vxlan [fe80::5406:d1ff:fe1f:27d7%6]:123
Dec 12 18:40:47.286402 ntpd[2224]: Listen normally on 10 lxc_health [fe80::688e:57ff:fe31:af3d%8]:123
Dec 12 18:40:47.286490 ntpd[2224]: Listen normally on 11 lxc9dab143594f5 [fe80::2867:a1ff:fe42:b642%10]:123
Dec 12 18:40:47.286513 ntpd[2224]: Listen normally on 12 lxcaeaabb7527e3 [fe80::b4cd:67ff:fe93:3919%12]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 6 cilium_host 192.168.0.28:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 7 cilium_net [fe80::6c66:59ff:fe74:8ecb%4]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 8 cilium_host [fe80::98a8:feff:fe48:1295%5]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 9 cilium_vxlan [fe80::5406:d1ff:fe1f:27d7%6]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 10 lxc_health [fe80::688e:57ff:fe31:af3d%8]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 11 lxc9dab143594f5 [fe80::2867:a1ff:fe42:b642%10]:123
Dec 12 18:40:47.287683 ntpd[2224]: 12 Dec 18:40:47 ntpd[2224]: Listen normally on 12 lxcaeaabb7527e3 [fe80::b4cd:67ff:fe93:3919%12]:123
Dec 12 18:40:47.892058 sshd[4834]: Connection closed by 139.178.89.65 port 46084
Dec 12 18:40:47.892751 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:47.900324 systemd[1]: sshd@9-172.31.29.16:22-139.178.89.65:46084.service: Deactivated successfully.
Dec 12 18:40:47.905434 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 18:40:47.906627 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit.
Dec 12 18:40:47.908957 systemd-logind[1957]: Removed session 10.
Dec 12 18:40:52.931736 systemd[1]: Started sshd@10-172.31.29.16:22-139.178.89.65:57008.service - OpenSSH per-connection server daemon (139.178.89.65:57008).
Dec 12 18:40:53.145592 sshd[4861]: Accepted publickey for core from 139.178.89.65 port 57008 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:40:53.148831 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:53.163896 systemd-logind[1957]: New session 11 of user core.
Dec 12 18:40:53.174189 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 18:40:53.438269 sshd[4864]: Connection closed by 139.178.89.65 port 57008
Dec 12 18:40:53.438881 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:53.443308 systemd[1]: sshd@10-172.31.29.16:22-139.178.89.65:57008.service: Deactivated successfully.
Dec 12 18:40:53.445497 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 18:40:53.446980 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit.
Dec 12 18:40:53.449391 systemd-logind[1957]: Removed session 11.
Dec 12 18:40:58.473685 systemd[1]: Started sshd@11-172.31.29.16:22-139.178.89.65:57014.service - OpenSSH per-connection server daemon (139.178.89.65:57014).
Dec 12 18:40:58.652169 sshd[4883]: Accepted publickey for core from 139.178.89.65 port 57014 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:40:58.653667 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:58.661489 systemd-logind[1957]: New session 12 of user core.
Dec 12 18:40:58.665778 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 18:40:58.891181 sshd[4886]: Connection closed by 139.178.89.65 port 57014
Dec 12 18:40:58.892338 sshd-session[4883]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:58.898000 systemd[1]: sshd@11-172.31.29.16:22-139.178.89.65:57014.service: Deactivated successfully.
Dec 12 18:40:58.900493 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 18:40:58.902801 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit.
Dec 12 18:40:58.904903 systemd-logind[1957]: Removed session 12.
Dec 12 18:41:03.956624 systemd[1]: Started sshd@12-172.31.29.16:22-139.178.89.65:60378.service - OpenSSH per-connection server daemon (139.178.89.65:60378).
Dec 12 18:41:04.289017 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 60378 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:04.291047 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:04.301667 systemd-logind[1957]: New session 13 of user core.
Dec 12 18:41:04.310854 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:41:04.574378 sshd[4902]: Connection closed by 139.178.89.65 port 60378
Dec 12 18:41:04.575274 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:04.582031 systemd[1]: sshd@12-172.31.29.16:22-139.178.89.65:60378.service: Deactivated successfully.
Dec 12 18:41:04.584885 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:41:04.587180 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:41:04.589426 systemd-logind[1957]: Removed session 13.
Dec 12 18:41:04.610933 systemd[1]: Started sshd@13-172.31.29.16:22-139.178.89.65:60384.service - OpenSSH per-connection server daemon (139.178.89.65:60384).
Dec 12 18:41:04.804288 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 60384 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:04.807025 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:04.816374 systemd-logind[1957]: New session 14 of user core.
Dec 12 18:41:04.824117 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:41:05.189045 sshd[4918]: Connection closed by 139.178.89.65 port 60384
Dec 12 18:41:05.189389 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:05.196303 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:41:05.199240 systemd[1]: sshd@13-172.31.29.16:22-139.178.89.65:60384.service: Deactivated successfully.
Dec 12 18:41:05.205363 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:41:05.231310 systemd-logind[1957]: Removed session 14.
Dec 12 18:41:05.232879 systemd[1]: Started sshd@14-172.31.29.16:22-139.178.89.65:60398.service - OpenSSH per-connection server daemon (139.178.89.65:60398).
Dec 12 18:41:05.466481 sshd[4928]: Accepted publickey for core from 139.178.89.65 port 60398 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:05.471829 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:05.478499 systemd-logind[1957]: New session 15 of user core.
Dec 12 18:41:05.484133 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:41:05.717687 sshd[4931]: Connection closed by 139.178.89.65 port 60398
Dec 12 18:41:05.718716 sshd-session[4928]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:05.723662 systemd[1]: sshd@14-172.31.29.16:22-139.178.89.65:60398.service: Deactivated successfully.
Dec 12 18:41:05.726061 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:41:05.727182 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:41:05.729254 systemd-logind[1957]: Removed session 15.
Dec 12 18:41:10.757210 systemd[1]: Started sshd@15-172.31.29.16:22-139.178.89.65:54638.service - OpenSSH per-connection server daemon (139.178.89.65:54638).
Dec 12 18:41:10.932664 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 54638 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:10.934024 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:10.939620 systemd-logind[1957]: New session 16 of user core.
Dec 12 18:41:10.946808 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:41:11.140615 sshd[4949]: Connection closed by 139.178.89.65 port 54638
Dec 12 18:41:11.141197 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:11.145619 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:41:11.146342 systemd[1]: sshd@15-172.31.29.16:22-139.178.89.65:54638.service: Deactivated successfully.
Dec 12 18:41:11.148632 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:41:11.150123 systemd-logind[1957]: Removed session 16.
Dec 12 18:41:16.183023 systemd[1]: Started sshd@16-172.31.29.16:22-139.178.89.65:54650.service - OpenSSH per-connection server daemon (139.178.89.65:54650).
Dec 12 18:41:16.360972 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 54650 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:16.362971 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:16.369021 systemd-logind[1957]: New session 17 of user core.
Dec 12 18:41:16.378827 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 18:41:16.567853 sshd[4965]: Connection closed by 139.178.89.65 port 54650
Dec 12 18:41:16.568864 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:16.574576 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:41:16.575135 systemd[1]: sshd@16-172.31.29.16:22-139.178.89.65:54650.service: Deactivated successfully.
Dec 12 18:41:16.578595 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:41:16.580535 systemd-logind[1957]: Removed session 17.
Dec 12 18:41:16.604038 systemd[1]: Started sshd@17-172.31.29.16:22-139.178.89.65:54662.service - OpenSSH per-connection server daemon (139.178.89.65:54662).
Dec 12 18:41:16.776793 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 54662 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:16.778124 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:16.783617 systemd-logind[1957]: New session 18 of user core.
Dec 12 18:41:16.790803 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:41:17.417187 sshd[4980]: Connection closed by 139.178.89.65 port 54662
Dec 12 18:41:17.449602 sshd-session[4977]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:17.467717 systemd[1]: Started sshd@18-172.31.29.16:22-139.178.89.65:54668.service - OpenSSH per-connection server daemon (139.178.89.65:54668).
Dec 12 18:41:17.483534 systemd[1]: sshd@17-172.31.29.16:22-139.178.89.65:54662.service: Deactivated successfully.
Dec 12 18:41:17.488934 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:41:17.491935 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:41:17.494720 systemd-logind[1957]: Removed session 18.
Dec 12 18:41:17.672120 sshd[4987]: Accepted publickey for core from 139.178.89.65 port 54668 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:17.673600 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:17.680385 systemd-logind[1957]: New session 19 of user core.
Dec 12 18:41:17.685011 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:41:18.658823 sshd[4993]: Connection closed by 139.178.89.65 port 54668
Dec 12 18:41:18.659904 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:18.668869 systemd[1]: sshd@18-172.31.29.16:22-139.178.89.65:54668.service: Deactivated successfully.
Dec 12 18:41:18.674148 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:41:18.676230 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:41:18.681889 systemd-logind[1957]: Removed session 19.
Dec 12 18:41:18.700889 systemd[1]: Started sshd@19-172.31.29.16:22-139.178.89.65:54674.service - OpenSSH per-connection server daemon (139.178.89.65:54674).
Dec 12 18:41:18.894559 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 54674 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:18.896622 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:18.902778 systemd-logind[1957]: New session 20 of user core.
Dec 12 18:41:18.909805 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:41:19.350690 sshd[5013]: Connection closed by 139.178.89.65 port 54674
Dec 12 18:41:19.352674 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:19.356163 systemd[1]: sshd@19-172.31.29.16:22-139.178.89.65:54674.service: Deactivated successfully.
Dec 12 18:41:19.359245 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:41:19.362830 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:41:19.364193 systemd-logind[1957]: Removed session 20.
Dec 12 18:41:19.385796 systemd[1]: Started sshd@20-172.31.29.16:22-139.178.89.65:54690.service - OpenSSH per-connection server daemon (139.178.89.65:54690).
Dec 12 18:41:19.555466 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 54690 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo
Dec 12 18:41:19.557001 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:19.562518 systemd-logind[1957]: New session 21 of user core.
Dec 12 18:41:19.571828 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:41:19.750114 sshd[5026]: Connection closed by 139.178.89.65 port 54690
Dec 12 18:41:19.750647 sshd-session[5023]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:19.754627 systemd[1]: sshd@20-172.31.29.16:22-139.178.89.65:54690.service: Deactivated successfully.
Dec 12 18:41:19.757478 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:41:19.759881 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:41:19.762255 systemd-logind[1957]: Removed session 21.
Dec 12 18:41:24.784791 systemd[1]: Started sshd@21-172.31.29.16:22-139.178.89.65:33870.service - OpenSSH per-connection server daemon (139.178.89.65:33870).
Dec 12 18:41:24.960713 sshd[5041]: Accepted publickey for core from 139.178.89.65 port 33870 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:24.962009 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:24.968190 systemd-logind[1957]: New session 22 of user core. Dec 12 18:41:24.978381 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:41:25.168092 sshd[5045]: Connection closed by 139.178.89.65 port 33870 Dec 12 18:41:25.169832 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:25.174122 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:41:25.174865 systemd[1]: sshd@21-172.31.29.16:22-139.178.89.65:33870.service: Deactivated successfully. Dec 12 18:41:25.177220 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:41:25.179282 systemd-logind[1957]: Removed session 22. Dec 12 18:41:30.206947 systemd[1]: Started sshd@22-172.31.29.16:22-139.178.89.65:47108.service - OpenSSH per-connection server daemon (139.178.89.65:47108). Dec 12 18:41:30.396939 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 47108 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:30.398895 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:30.404612 systemd-logind[1957]: New session 23 of user core. Dec 12 18:41:30.409828 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:41:30.660154 sshd[5061]: Connection closed by 139.178.89.65 port 47108 Dec 12 18:41:30.660826 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:30.665249 systemd[1]: sshd@22-172.31.29.16:22-139.178.89.65:47108.service: Deactivated successfully. Dec 12 18:41:30.668057 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:41:30.670057 systemd-logind[1957]: Session 23 logged out. 
Waiting for processes to exit. Dec 12 18:41:30.673299 systemd-logind[1957]: Removed session 23. Dec 12 18:41:35.705245 systemd[1]: Started sshd@23-172.31.29.16:22-139.178.89.65:47114.service - OpenSSH per-connection server daemon (139.178.89.65:47114). Dec 12 18:41:35.893140 sshd[5072]: Accepted publickey for core from 139.178.89.65 port 47114 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:35.894443 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:35.900539 systemd-logind[1957]: New session 24 of user core. Dec 12 18:41:35.905785 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:41:36.095817 sshd[5075]: Connection closed by 139.178.89.65 port 47114 Dec 12 18:41:36.096501 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:36.101066 systemd[1]: sshd@23-172.31.29.16:22-139.178.89.65:47114.service: Deactivated successfully. Dec 12 18:41:36.104326 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:41:36.105153 systemd-logind[1957]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:41:36.108119 systemd-logind[1957]: Removed session 24. Dec 12 18:41:36.129265 systemd[1]: Started sshd@24-172.31.29.16:22-139.178.89.65:47126.service - OpenSSH per-connection server daemon (139.178.89.65:47126). Dec 12 18:41:36.317219 sshd[5087]: Accepted publickey for core from 139.178.89.65 port 47126 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:36.321529 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:36.343312 systemd-logind[1957]: New session 25 of user core. Dec 12 18:41:36.353821 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 18:41:38.601885 containerd[1990]: time="2025-12-12T18:41:38.600714933Z" level=info msg="StopContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" with timeout 30 (s)" Dec 12 18:41:38.604578 containerd[1990]: time="2025-12-12T18:41:38.603576611Z" level=info msg="Stop container \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" with signal terminated" Dec 12 18:41:38.642754 systemd[1]: cri-containerd-a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9.scope: Deactivated successfully. Dec 12 18:41:38.646594 containerd[1990]: time="2025-12-12T18:41:38.645523055Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:41:38.649331 containerd[1990]: time="2025-12-12T18:41:38.649291624Z" level=info msg="received container exit event container_id:\"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" id:\"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" pid:4109 exited_at:{seconds:1765564898 nanos:648367154}" Dec 12 18:41:38.658828 containerd[1990]: time="2025-12-12T18:41:38.658785159Z" level=info msg="StopContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" with timeout 2 (s)" Dec 12 18:41:38.659264 containerd[1990]: time="2025-12-12T18:41:38.659069142Z" level=info msg="Stop container \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" with signal terminated" Dec 12 18:41:38.677580 systemd-networkd[1856]: lxc_health: Link DOWN Dec 12 18:41:38.677591 systemd-networkd[1856]: lxc_health: Lost carrier Dec 12 18:41:38.693778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9-rootfs.mount: Deactivated successfully. 
Dec 12 18:41:38.713009 systemd[1]: cri-containerd-b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f.scope: Deactivated successfully. Dec 12 18:41:38.713405 systemd[1]: cri-containerd-b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f.scope: Consumed 8.202s CPU time, 227.4M memory peak, 106.5M read from disk, 13.3M written to disk. Dec 12 18:41:38.717307 containerd[1990]: time="2025-12-12T18:41:38.717261463Z" level=info msg="StopContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" returns successfully" Dec 12 18:41:38.718393 containerd[1990]: time="2025-12-12T18:41:38.718353546Z" level=info msg="StopPodSandbox for \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\"" Dec 12 18:41:38.719026 containerd[1990]: time="2025-12-12T18:41:38.718992539Z" level=info msg="received container exit event container_id:\"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" id:\"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" pid:4180 exited_at:{seconds:1765564898 nanos:718534629}" Dec 12 18:41:38.721365 containerd[1990]: time="2025-12-12T18:41:38.721302048Z" level=info msg="Container to stop \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.736313 systemd[1]: cri-containerd-d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317.scope: Deactivated successfully. 
Dec 12 18:41:38.740409 containerd[1990]: time="2025-12-12T18:41:38.740330561Z" level=info msg="received sandbox exit event container_id:\"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" id:\"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" exit_status:137 exited_at:{seconds:1765564898 nanos:739370728}" monitor_name=podsandbox Dec 12 18:41:38.762705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f-rootfs.mount: Deactivated successfully. Dec 12 18:41:38.784686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317-rootfs.mount: Deactivated successfully. Dec 12 18:41:38.788068 containerd[1990]: time="2025-12-12T18:41:38.788026652Z" level=info msg="StopContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" returns successfully" Dec 12 18:41:38.789966 containerd[1990]: time="2025-12-12T18:41:38.789926880Z" level=info msg="StopPodSandbox for \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\"" Dec 12 18:41:38.790093 containerd[1990]: time="2025-12-12T18:41:38.790008428Z" level=info msg="Container to stop \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.790093 containerd[1990]: time="2025-12-12T18:41:38.790024702Z" level=info msg="Container to stop \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.790093 containerd[1990]: time="2025-12-12T18:41:38.790039192Z" level=info msg="Container to stop \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.790093 containerd[1990]: time="2025-12-12T18:41:38.790051706Z" level=info 
msg="Container to stop \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.790093 containerd[1990]: time="2025-12-12T18:41:38.790066374Z" level=info msg="Container to stop \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 18:41:38.790512 containerd[1990]: time="2025-12-12T18:41:38.790440873Z" level=info msg="shim disconnected" id=d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317 namespace=k8s.io Dec 12 18:41:38.790512 containerd[1990]: time="2025-12-12T18:41:38.790469056Z" level=warning msg="cleaning up after shim disconnected" id=d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317 namespace=k8s.io Dec 12 18:41:38.790512 containerd[1990]: time="2025-12-12T18:41:38.790479252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 18:41:38.803017 systemd[1]: cri-containerd-cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae.scope: Deactivated successfully. Dec 12 18:41:38.809475 containerd[1990]: time="2025-12-12T18:41:38.809403301Z" level=info msg="received sandbox exit event container_id:\"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" id:\"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" exit_status:137 exited_at:{seconds:1765564898 nanos:808828582}" monitor_name=podsandbox Dec 12 18:41:38.820095 containerd[1990]: time="2025-12-12T18:41:38.819945090Z" level=info msg="received sandbox container exit event sandbox_id:\"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" exit_status:137 exited_at:{seconds:1765564898 nanos:739370728}" monitor_name=criService Dec 12 18:41:38.822349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317-shm.mount: Deactivated successfully. 
Dec 12 18:41:38.824128 containerd[1990]: time="2025-12-12T18:41:38.823743919Z" level=info msg="TearDown network for sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" successfully" Dec 12 18:41:38.824128 containerd[1990]: time="2025-12-12T18:41:38.823777293Z" level=info msg="StopPodSandbox for \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" returns successfully" Dec 12 18:41:38.855344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae-rootfs.mount: Deactivated successfully. Dec 12 18:41:38.864173 kubelet[3527]: I1212 18:41:38.864091 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-cilium-config-path\") pod \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\" (UID: \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\") " Dec 12 18:41:38.866658 kubelet[3527]: I1212 18:41:38.864472 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsm8m\" (UniqueName: \"kubernetes.io/projected/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-kube-api-access-gsm8m\") pod \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\" (UID: \"90a7f4fa-f4e9-41b9-af6a-ededbb9f8827\") " Dec 12 18:41:38.870584 kubelet[3527]: I1212 18:41:38.869707 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90a7f4fa-f4e9-41b9-af6a-ededbb9f8827" (UID: "90a7f4fa-f4e9-41b9-af6a-ededbb9f8827"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:41:38.870724 containerd[1990]: time="2025-12-12T18:41:38.870458284Z" level=info msg="shim disconnected" id=cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae namespace=k8s.io Dec 12 18:41:38.870724 containerd[1990]: time="2025-12-12T18:41:38.870495831Z" level=warning msg="cleaning up after shim disconnected" id=cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae namespace=k8s.io Dec 12 18:41:38.870724 containerd[1990]: time="2025-12-12T18:41:38.870507842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 18:41:38.878411 kubelet[3527]: I1212 18:41:38.878350 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-kube-api-access-gsm8m" (OuterVolumeSpecName: "kube-api-access-gsm8m") pod "90a7f4fa-f4e9-41b9-af6a-ededbb9f8827" (UID: "90a7f4fa-f4e9-41b9-af6a-ededbb9f8827"). InnerVolumeSpecName "kube-api-access-gsm8m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:41:38.888465 containerd[1990]: time="2025-12-12T18:41:38.888404690Z" level=info msg="received sandbox container exit event sandbox_id:\"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" exit_status:137 exited_at:{seconds:1765564898 nanos:808828582}" monitor_name=criService Dec 12 18:41:38.888955 containerd[1990]: time="2025-12-12T18:41:38.888845637Z" level=info msg="TearDown network for sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" successfully" Dec 12 18:41:38.888955 containerd[1990]: time="2025-12-12T18:41:38.888890080Z" level=info msg="StopPodSandbox for \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" returns successfully" Dec 12 18:41:38.966153 kubelet[3527]: I1212 18:41:38.966083 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-cgroup\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966153 kubelet[3527]: I1212 18:41:38.966145 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-kernel\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966153 kubelet[3527]: I1212 18:41:38.966167 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-clustermesh-secrets\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966185 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-etc-cni-netd\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966202 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-config-path\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966216 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-run\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966235 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hostproc\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966248 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-xtables-lock\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966392 kubelet[3527]: I1212 18:41:38.966261 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cni-path\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966277 3527 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-bpf-maps\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966295 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwhc\" (UniqueName: \"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966313 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-net\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966327 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-lib-modules\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966343 3527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hubble-tls\") pod \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\" (UID: \"7a56f13a-5e20-487d-81bf-b0aa72ecd87b\") " Dec 12 18:41:38.966577 kubelet[3527]: I1212 18:41:38.966385 3527 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gsm8m\" (UniqueName: \"kubernetes.io/projected/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-kube-api-access-gsm8m\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:38.966734 
kubelet[3527]: I1212 18:41:38.966396 3527 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827-cilium-config-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:38.967650 kubelet[3527]: I1212 18:41:38.966805 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.967650 kubelet[3527]: I1212 18:41:38.966867 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.967650 kubelet[3527]: I1212 18:41:38.966883 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.967969 kubelet[3527]: I1212 18:41:38.967941 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.968028 kubelet[3527]: I1212 18:41:38.967983 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.968028 kubelet[3527]: I1212 18:41:38.967998 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.968328 kubelet[3527]: I1212 18:41:38.968278 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.968702 kubelet[3527]: I1212 18:41:38.968664 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.968802 kubelet[3527]: I1212 18:41:38.968789 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.969052 kubelet[3527]: I1212 18:41:38.969029 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:41:38.970321 kubelet[3527]: I1212 18:41:38.970293 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:41:38.973164 kubelet[3527]: I1212 18:41:38.973109 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:41:38.973164 kubelet[3527]: I1212 18:41:38.973131 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc" (OuterVolumeSpecName: "kube-api-access-hvwhc") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "kube-api-access-hvwhc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:41:38.973362 kubelet[3527]: I1212 18:41:38.973319 3527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a56f13a-5e20-487d-81bf-b0aa72ecd87b" (UID: "7a56f13a-5e20-487d-81bf-b0aa72ecd87b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.066934 3527 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-xtables-lock\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.066972 3527 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cni-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.066981 3527 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-bpf-maps\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.066989 3527 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hvwhc\" (UniqueName: 
\"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-kube-api-access-hvwhc\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.067000 3527 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-net\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.066992 kubelet[3527]: I1212 18:41:39.067009 3527 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-lib-modules\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067016 3527 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hubble-tls\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067024 3527 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-cgroup\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067030 3527 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-host-proc-sys-kernel\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067037 3527 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-clustermesh-secrets\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067044 3527 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-etc-cni-netd\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067051 3527 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-config-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067059 3527 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-cilium-run\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.067693 kubelet[3527]: I1212 18:41:39.067066 3527 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a56f13a-5e20-487d-81bf-b0aa72ecd87b-hostproc\") on node \"ip-172-31-29-16\" DevicePath \"\"" Dec 12 18:41:39.690792 systemd[1]: var-lib-kubelet-pods-90a7f4fa\x2df4e9\x2d41b9\x2daf6a\x2dededbb9f8827-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsm8m.mount: Deactivated successfully. Dec 12 18:41:39.690910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae-shm.mount: Deactivated successfully. Dec 12 18:41:39.690977 systemd[1]: var-lib-kubelet-pods-7a56f13a\x2d5e20\x2d487d\x2d81bf\x2db0aa72ecd87b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhvwhc.mount: Deactivated successfully. Dec 12 18:41:39.691060 systemd[1]: var-lib-kubelet-pods-7a56f13a\x2d5e20\x2d487d\x2d81bf\x2db0aa72ecd87b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 18:41:39.691633 systemd[1]: var-lib-kubelet-pods-7a56f13a\x2d5e20\x2d487d\x2d81bf\x2db0aa72ecd87b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 12 18:41:39.783426 kubelet[3527]: I1212 18:41:39.783225 3527 scope.go:117] "RemoveContainer" containerID="a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9" Dec 12 18:41:39.791291 systemd[1]: Removed slice kubepods-besteffort-pod90a7f4fa_f4e9_41b9_af6a_ededbb9f8827.slice - libcontainer container kubepods-besteffort-pod90a7f4fa_f4e9_41b9_af6a_ededbb9f8827.slice. Dec 12 18:41:39.792300 containerd[1990]: time="2025-12-12T18:41:39.792264541Z" level=info msg="RemoveContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\"" Dec 12 18:41:39.801615 containerd[1990]: time="2025-12-12T18:41:39.800850566Z" level=info msg="RemoveContainer for \"a121391cb46448f7e79bf8fa806ee1c38a630404f451c1b56e0fecbd41c961a9\" returns successfully" Dec 12 18:41:39.809164 kubelet[3527]: I1212 18:41:39.809128 3527 scope.go:117] "RemoveContainer" containerID="b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f" Dec 12 18:41:39.814677 containerd[1990]: time="2025-12-12T18:41:39.814412731Z" level=info msg="RemoveContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\"" Dec 12 18:41:39.820411 systemd[1]: Removed slice kubepods-burstable-pod7a56f13a_5e20_487d_81bf_b0aa72ecd87b.slice - libcontainer container kubepods-burstable-pod7a56f13a_5e20_487d_81bf_b0aa72ecd87b.slice. Dec 12 18:41:39.820750 systemd[1]: kubepods-burstable-pod7a56f13a_5e20_487d_81bf_b0aa72ecd87b.slice: Consumed 8.334s CPU time, 227.8M memory peak, 107.6M read from disk, 13.3M written to disk. 
Dec 12 18:41:39.827697 containerd[1990]: time="2025-12-12T18:41:39.827646635Z" level=info msg="RemoveContainer for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" returns successfully" Dec 12 18:41:39.828286 kubelet[3527]: I1212 18:41:39.828161 3527 scope.go:117] "RemoveContainer" containerID="d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d" Dec 12 18:41:39.832293 containerd[1990]: time="2025-12-12T18:41:39.831758796Z" level=info msg="RemoveContainer for \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\"" Dec 12 18:41:39.842254 containerd[1990]: time="2025-12-12T18:41:39.842209038Z" level=info msg="RemoveContainer for \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" returns successfully" Dec 12 18:41:39.843376 kubelet[3527]: I1212 18:41:39.843338 3527 scope.go:117] "RemoveContainer" containerID="56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12" Dec 12 18:41:39.848693 containerd[1990]: time="2025-12-12T18:41:39.848648434Z" level=info msg="RemoveContainer for \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\"" Dec 12 18:41:39.857443 containerd[1990]: time="2025-12-12T18:41:39.856704160Z" level=info msg="RemoveContainer for \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" returns successfully" Dec 12 18:41:39.857630 kubelet[3527]: I1212 18:41:39.856961 3527 scope.go:117] "RemoveContainer" containerID="e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4" Dec 12 18:41:39.862895 containerd[1990]: time="2025-12-12T18:41:39.862774335Z" level=info msg="RemoveContainer for \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\"" Dec 12 18:41:39.869865 containerd[1990]: time="2025-12-12T18:41:39.869821402Z" level=info msg="RemoveContainer for \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" returns successfully" Dec 12 18:41:39.870235 kubelet[3527]: I1212 18:41:39.870205 3527 scope.go:117] 
"RemoveContainer" containerID="b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c" Dec 12 18:41:39.872346 containerd[1990]: time="2025-12-12T18:41:39.872307084Z" level=info msg="RemoveContainer for \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\"" Dec 12 18:41:39.877896 containerd[1990]: time="2025-12-12T18:41:39.877855084Z" level=info msg="RemoveContainer for \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" returns successfully" Dec 12 18:41:39.878293 kubelet[3527]: I1212 18:41:39.878125 3527 scope.go:117] "RemoveContainer" containerID="b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f" Dec 12 18:41:39.878797 containerd[1990]: time="2025-12-12T18:41:39.878746424Z" level=error msg="ContainerStatus for \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\": not found" Dec 12 18:41:39.878975 kubelet[3527]: E1212 18:41:39.878948 3527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\": not found" containerID="b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f" Dec 12 18:41:39.879042 kubelet[3527]: I1212 18:41:39.878985 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f"} err="failed to get container status \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7891f525e67da8b8e8d0ed11163382a34fe56b808988c47091c6476e2e0fc0f\": not found" Dec 12 18:41:39.879042 kubelet[3527]: I1212 18:41:39.879024 3527 scope.go:117] "RemoveContainer" 
containerID="d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d" Dec 12 18:41:39.879397 containerd[1990]: time="2025-12-12T18:41:39.879349772Z" level=error msg="ContainerStatus for \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\": not found" Dec 12 18:41:39.879527 kubelet[3527]: E1212 18:41:39.879488 3527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\": not found" containerID="d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d" Dec 12 18:41:39.879527 kubelet[3527]: I1212 18:41:39.879518 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d"} err="failed to get container status \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9603939a5954916a5dffaa85e8f3ffcd47fbd466abbf1be27407340bbf6054d\": not found" Dec 12 18:41:39.879675 kubelet[3527]: I1212 18:41:39.879533 3527 scope.go:117] "RemoveContainer" containerID="56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12" Dec 12 18:41:39.879819 containerd[1990]: time="2025-12-12T18:41:39.879772984Z" level=error msg="ContainerStatus for \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\": not found" Dec 12 18:41:39.880000 kubelet[3527]: E1212 18:41:39.879984 3527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\": not found" containerID="56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12" Dec 12 18:41:39.880082 kubelet[3527]: I1212 18:41:39.880062 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12"} err="failed to get container status \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\": rpc error: code = NotFound desc = an error occurred when try to find container \"56dfd593e5976f4801a5e9a2f7112f71efd37c0affdd9ee7385b2b66c694fd12\": not found" Dec 12 18:41:39.880153 kubelet[3527]: I1212 18:41:39.880080 3527 scope.go:117] "RemoveContainer" containerID="e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4" Dec 12 18:41:39.880279 containerd[1990]: time="2025-12-12T18:41:39.880250348Z" level=error msg="ContainerStatus for \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\": not found" Dec 12 18:41:39.880354 kubelet[3527]: E1212 18:41:39.880344 3527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\": not found" containerID="e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4" Dec 12 18:41:39.880390 kubelet[3527]: I1212 18:41:39.880363 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4"} err="failed to get container status \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"e507d11284d2742d493bf65dae48dd9177e54899e6f5381581ae301a59a446a4\": not found" Dec 12 18:41:39.880390 kubelet[3527]: I1212 18:41:39.880378 3527 scope.go:117] "RemoveContainer" containerID="b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c" Dec 12 18:41:39.880616 containerd[1990]: time="2025-12-12T18:41:39.880578452Z" level=error msg="ContainerStatus for \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\": not found" Dec 12 18:41:39.880840 kubelet[3527]: E1212 18:41:39.880712 3527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\": not found" containerID="b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c" Dec 12 18:41:39.880840 kubelet[3527]: I1212 18:41:39.880738 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c"} err="failed to get container status \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9d823ca8f4f8a32b0de5f384d28c37b32a083ed132efc81c40137332a2cdc9c\": not found" Dec 12 18:41:40.327911 kubelet[3527]: I1212 18:41:40.327853 3527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a56f13a-5e20-487d-81bf-b0aa72ecd87b" path="/var/lib/kubelet/pods/7a56f13a-5e20-487d-81bf-b0aa72ecd87b/volumes" Dec 12 18:41:40.328450 kubelet[3527]: I1212 18:41:40.328404 3527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a7f4fa-f4e9-41b9-af6a-ededbb9f8827" 
path="/var/lib/kubelet/pods/90a7f4fa-f4e9-41b9-af6a-ededbb9f8827/volumes" Dec 12 18:41:40.473333 kubelet[3527]: E1212 18:41:40.473291 3527 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 18:41:40.521128 sshd[5090]: Connection closed by 139.178.89.65 port 47126 Dec 12 18:41:40.521769 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:40.527409 systemd[1]: sshd@24-172.31.29.16:22-139.178.89.65:47126.service: Deactivated successfully. Dec 12 18:41:40.530383 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:41:40.532200 systemd-logind[1957]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:41:40.534130 systemd-logind[1957]: Removed session 25. Dec 12 18:41:40.554297 systemd[1]: Started sshd@25-172.31.29.16:22-139.178.89.65:59136.service - OpenSSH per-connection server daemon (139.178.89.65:59136). Dec 12 18:41:40.748605 sshd[5233]: Accepted publickey for core from 139.178.89.65 port 59136 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:40.749586 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:40.755528 systemd-logind[1957]: New session 26 of user core. Dec 12 18:41:40.760868 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 18:41:41.286183 ntpd[2224]: Deleting 10 lxc_health, [fe80::688e:57ff:fe31:af3d%8]:123, stats: received=0, sent=0, dropped=0, active_time=54 secs Dec 12 18:41:41.286728 ntpd[2224]: 12 Dec 18:41:41 ntpd[2224]: Deleting 10 lxc_health, [fe80::688e:57ff:fe31:af3d%8]:123, stats: received=0, sent=0, dropped=0, active_time=54 secs Dec 12 18:41:41.905995 sshd[5236]: Connection closed by 139.178.89.65 port 59136 Dec 12 18:41:41.907117 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:41.912731 systemd-logind[1957]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:41:41.915049 systemd[1]: sshd@25-172.31.29.16:22-139.178.89.65:59136.service: Deactivated successfully. Dec 12 18:41:41.918155 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:41:41.921563 systemd-logind[1957]: Removed session 26. Dec 12 18:41:41.944048 systemd[1]: Started sshd@26-172.31.29.16:22-139.178.89.65:59138.service - OpenSSH per-connection server daemon (139.178.89.65:59138). Dec 12 18:41:42.040672 systemd[1]: Created slice kubepods-burstable-pod40e9437e_77d7_4b77_bc6d_947ef0c7dee4.slice - libcontainer container kubepods-burstable-pod40e9437e_77d7_4b77_bc6d_947ef0c7dee4.slice. 
Dec 12 18:41:42.086892 kubelet[3527]: I1212 18:41:42.086677 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-hostproc\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.086892 kubelet[3527]: I1212 18:41:42.086772 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-lib-modules\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.086892 kubelet[3527]: I1212 18:41:42.086800 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlft\" (UniqueName: \"kubernetes.io/projected/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-kube-api-access-pzlft\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.086892 kubelet[3527]: I1212 18:41:42.086857 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-cilium-cgroup\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088015 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-xtables-lock\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088784 3527 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-clustermesh-secrets\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088809 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-hubble-tls\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088865 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-cilium-config-path\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088885 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-cilium-ipsec-secrets\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089150 kubelet[3527]: I1212 18:41:42.088988 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-bpf-maps\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089758 kubelet[3527]: I1212 18:41:42.089012 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-host-proc-sys-kernel\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089758 kubelet[3527]: I1212 18:41:42.089069 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-cni-path\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089758 kubelet[3527]: I1212 18:41:42.089093 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-etc-cni-netd\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089758 kubelet[3527]: I1212 18:41:42.089243 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-host-proc-sys-net\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.089758 kubelet[3527]: I1212 18:41:42.089274 3527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40e9437e-77d7-4b77-bc6d-947ef0c7dee4-cilium-run\") pod \"cilium-fssf2\" (UID: \"40e9437e-77d7-4b77-bc6d-947ef0c7dee4\") " pod="kube-system/cilium-fssf2" Dec 12 18:41:42.119538 kubelet[3527]: I1212 18:41:42.119273 3527 setters.go:543] "Node became not ready" node="ip-172-31-29-16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T18:41:42Z","lastTransitionTime":"2025-12-12T18:41:42Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 18:41:42.140905 sshd[5246]: Accepted publickey for core from 139.178.89.65 port 59138 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:42.143392 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:42.154634 systemd-logind[1957]: New session 27 of user core. Dec 12 18:41:42.159784 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 12 18:41:42.287305 sshd[5249]: Connection closed by 139.178.89.65 port 59138 Dec 12 18:41:42.288253 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:42.294096 systemd[1]: sshd@26-172.31.29.16:22-139.178.89.65:59138.service: Deactivated successfully. Dec 12 18:41:42.297475 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:41:42.298727 systemd-logind[1957]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:41:42.301050 systemd-logind[1957]: Removed session 27. Dec 12 18:41:42.319447 systemd[1]: Started sshd@27-172.31.29.16:22-139.178.89.65:59152.service - OpenSSH per-connection server daemon (139.178.89.65:59152). 
Dec 12 18:41:42.352274 containerd[1990]: time="2025-12-12T18:41:42.352236626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fssf2,Uid:40e9437e-77d7-4b77-bc6d-947ef0c7dee4,Namespace:kube-system,Attempt:0,}" Dec 12 18:41:42.386031 containerd[1990]: time="2025-12-12T18:41:42.385977320Z" level=info msg="connecting to shim 411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:41:42.420797 systemd[1]: Started cri-containerd-411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7.scope - libcontainer container 411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7. Dec 12 18:41:42.454496 containerd[1990]: time="2025-12-12T18:41:42.453972662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fssf2,Uid:40e9437e-77d7-4b77-bc6d-947ef0c7dee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\"" Dec 12 18:41:42.463013 containerd[1990]: time="2025-12-12T18:41:42.462951060Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 18:41:42.473925 containerd[1990]: time="2025-12-12T18:41:42.473812174Z" level=info msg="Container 088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:41:42.484652 containerd[1990]: time="2025-12-12T18:41:42.484537650Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a\"" Dec 12 18:41:42.485570 containerd[1990]: time="2025-12-12T18:41:42.485382078Z" level=info 
msg="StartContainer for \"088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a\"" Dec 12 18:41:42.486719 containerd[1990]: time="2025-12-12T18:41:42.486682328Z" level=info msg="connecting to shim 088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" protocol=ttrpc version=3 Dec 12 18:41:42.506041 sshd[5260]: Accepted publickey for core from 139.178.89.65 port 59152 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:41:42.510019 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:42.510045 systemd[1]: Started cri-containerd-088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a.scope - libcontainer container 088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a. Dec 12 18:41:42.520252 systemd-logind[1957]: New session 28 of user core. Dec 12 18:41:42.525942 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 12 18:41:42.564351 containerd[1990]: time="2025-12-12T18:41:42.564293401Z" level=info msg="StartContainer for \"088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a\" returns successfully" Dec 12 18:41:42.860701 systemd[1]: cri-containerd-088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a.scope: Deactivated successfully. Dec 12 18:41:42.861073 systemd[1]: cri-containerd-088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a.scope: Consumed 30ms CPU time, 9.6M memory peak, 3.1M read from disk. 
Dec 12 18:41:42.862976 containerd[1990]: time="2025-12-12T18:41:42.862933687Z" level=info msg="received container exit event container_id:\"088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a\" id:\"088e81159cd8630fe7465a0283426c37dfd9bcbf9ef185a7840e6a29df5f713a\" pid:5322 exited_at:{seconds:1765564902 nanos:862144638}" Dec 12 18:41:43.834399 containerd[1990]: time="2025-12-12T18:41:43.834353427Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 18:41:43.850928 containerd[1990]: time="2025-12-12T18:41:43.849697715Z" level=info msg="Container 9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:41:43.862769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953777523.mount: Deactivated successfully. Dec 12 18:41:43.866018 containerd[1990]: time="2025-12-12T18:41:43.865819311Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19\"" Dec 12 18:41:43.869364 containerd[1990]: time="2025-12-12T18:41:43.869134705Z" level=info msg="StartContainer for \"9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19\"" Dec 12 18:41:43.873101 containerd[1990]: time="2025-12-12T18:41:43.873015643Z" level=info msg="connecting to shim 9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" protocol=ttrpc version=3 Dec 12 18:41:43.906793 systemd[1]: Started cri-containerd-9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19.scope - libcontainer container 
9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19. Dec 12 18:41:43.949793 containerd[1990]: time="2025-12-12T18:41:43.949709972Z" level=info msg="StartContainer for \"9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19\" returns successfully" Dec 12 18:41:44.334123 systemd[1]: cri-containerd-9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19.scope: Deactivated successfully. Dec 12 18:41:44.335054 systemd[1]: cri-containerd-9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19.scope: Consumed 23ms CPU time, 7.2M memory peak, 1.9M read from disk. Dec 12 18:41:44.337131 containerd[1990]: time="2025-12-12T18:41:44.336956258Z" level=info msg="received container exit event container_id:\"9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19\" id:\"9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19\" pid:5374 exited_at:{seconds:1765564904 nanos:336130119}" Dec 12 18:41:44.369021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cfb3e144345871ac57fc3d7511fd5cf78d8834c41941f0068b09ee719f52b19-rootfs.mount: Deactivated successfully. Dec 12 18:41:44.842357 containerd[1990]: time="2025-12-12T18:41:44.842273826Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 18:41:44.863867 containerd[1990]: time="2025-12-12T18:41:44.862784689Z" level=info msg="Container 7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:41:44.867754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560740174.mount: Deactivated successfully. 
Dec 12 18:41:44.876896 containerd[1990]: time="2025-12-12T18:41:44.876838292Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9\"" Dec 12 18:41:44.878084 containerd[1990]: time="2025-12-12T18:41:44.877981562Z" level=info msg="StartContainer for \"7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9\"" Dec 12 18:41:44.879528 containerd[1990]: time="2025-12-12T18:41:44.879494781Z" level=info msg="connecting to shim 7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" protocol=ttrpc version=3 Dec 12 18:41:44.907812 systemd[1]: Started cri-containerd-7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9.scope - libcontainer container 7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9. Dec 12 18:41:44.979391 containerd[1990]: time="2025-12-12T18:41:44.979345963Z" level=info msg="StartContainer for \"7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9\" returns successfully" Dec 12 18:41:45.122396 systemd[1]: cri-containerd-7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9.scope: Deactivated successfully. Dec 12 18:41:45.122775 systemd[1]: cri-containerd-7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9.scope: Consumed 33ms CPU time, 6.1M memory peak, 1.1M read from disk. 
Dec 12 18:41:45.126371 containerd[1990]: time="2025-12-12T18:41:45.125898530Z" level=info msg="received container exit event container_id:\"7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9\" id:\"7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9\" pid:5418 exited_at:{seconds:1765564905 nanos:125270416}" Dec 12 18:41:45.165025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b344012546e916c4f13d8b57f92d2359347308519bd464388def09eb084acb9-rootfs.mount: Deactivated successfully. Dec 12 18:41:45.475369 kubelet[3527]: E1212 18:41:45.475212 3527 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 18:41:45.846254 containerd[1990]: time="2025-12-12T18:41:45.846204422Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 18:41:45.868871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871817232.mount: Deactivated successfully. 
Dec 12 18:41:45.870997 containerd[1990]: time="2025-12-12T18:41:45.870956843Z" level=info msg="Container 7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:41:45.882885 containerd[1990]: time="2025-12-12T18:41:45.882842215Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a\"" Dec 12 18:41:45.884378 containerd[1990]: time="2025-12-12T18:41:45.883878307Z" level=info msg="StartContainer for \"7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a\"" Dec 12 18:41:45.885151 containerd[1990]: time="2025-12-12T18:41:45.885126019Z" level=info msg="connecting to shim 7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" protocol=ttrpc version=3 Dec 12 18:41:45.917826 systemd[1]: Started cri-containerd-7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a.scope - libcontainer container 7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a. Dec 12 18:41:45.954838 systemd[1]: cri-containerd-7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a.scope: Deactivated successfully. 
Dec 12 18:41:45.960916 containerd[1990]: time="2025-12-12T18:41:45.959919933Z" level=info msg="received container exit event container_id:\"7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a\" id:\"7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a\" pid:5459 exited_at:{seconds:1765564905 nanos:957304104}"
Dec 12 18:41:45.968528 containerd[1990]: time="2025-12-12T18:41:45.968492501Z" level=info msg="StartContainer for \"7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a\" returns successfully"
Dec 12 18:41:45.984375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7edafe9cc4ceb7eafeb0f84ab781e635241b9f23f8d6e7a98480b483b448901a-rootfs.mount: Deactivated successfully.
Dec 12 18:41:46.854989 containerd[1990]: time="2025-12-12T18:41:46.854931986Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 18:41:46.880585 containerd[1990]: time="2025-12-12T18:41:46.880330750Z" level=info msg="Container b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:41:46.902887 containerd[1990]: time="2025-12-12T18:41:46.902814352Z" level=info msg="CreateContainer within sandbox \"411e58914d52d151533288df8035b3f45f1872405fb629509ea8ae21a93df0e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686\""
Dec 12 18:41:46.906594 containerd[1990]: time="2025-12-12T18:41:46.905697835Z" level=info msg="StartContainer for \"b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686\""
Dec 12 18:41:46.907971 containerd[1990]: time="2025-12-12T18:41:46.907790968Z" level=info msg="connecting to shim b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686" address="unix:///run/containerd/s/c3eb4d8c53c8839062ed567c06c65f026651789a9a70887cc1129524080d6e9b" protocol=ttrpc version=3
Dec 12 18:41:46.954895 systemd[1]: Started cri-containerd-b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686.scope - libcontainer container b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686.
Dec 12 18:41:47.016507 containerd[1990]: time="2025-12-12T18:41:47.016407028Z" level=info msg="StartContainer for \"b1cf929d21dd6b83a3d8b4b760f78b7a6365c77d7bd53ca6f90458a05b4aa686\" returns successfully"
Dec 12 18:41:47.871758 kubelet[3527]: I1212 18:41:47.871690 3527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fssf2" podStartSLOduration=6.871528112 podStartE2EDuration="6.871528112s" podCreationTimestamp="2025-12-12 18:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:41:47.871152107 +0000 UTC m=+97.786932322" watchObservedRunningTime="2025-12-12 18:41:47.871528112 +0000 UTC m=+97.787308328"
Dec 12 18:41:49.106661 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 12 18:41:49.466512 kubelet[3527]: E1212 18:41:49.466359 3527 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:48940->127.0.0.1:39695: read tcp 127.0.0.1:48940->127.0.0.1:39695: read: connection reset by peer
Dec 12 18:41:52.465974 (udev-worker)[6073]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:41:52.466922 systemd-networkd[1856]: lxc_health: Link UP
Dec 12 18:41:52.478867 systemd-networkd[1856]: lxc_health: Gained carrier
Dec 12 18:41:52.481066 (udev-worker)[6074]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:41:53.684687 systemd-networkd[1856]: lxc_health: Gained IPv6LL
Dec 12 18:41:56.286253 ntpd[2224]: Listen normally on 13 lxc_health [fe80::1408:2ff:fe9f:689e%14]:123
Dec 12 18:41:56.287142 ntpd[2224]: 12 Dec 18:41:56 ntpd[2224]: Listen normally on 13 lxc_health [fe80::1408:2ff:fe9f:689e%14]:123
Dec 12 18:41:58.242098 sshd[5328]: Connection closed by 139.178.89.65 port 59152
Dec 12 18:41:58.304219 sshd-session[5260]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:58.309361 systemd[1]: sshd@27-172.31.29.16:22-139.178.89.65:59152.service: Deactivated successfully.
Dec 12 18:41:58.312251 systemd[1]: session-28.scope: Deactivated successfully.
Dec 12 18:41:58.313269 systemd-logind[1957]: Session 28 logged out. Waiting for processes to exit.
Dec 12 18:41:58.315481 systemd-logind[1957]: Removed session 28.
Dec 12 18:42:10.317412 containerd[1990]: time="2025-12-12T18:42:10.317362607Z" level=info msg="StopPodSandbox for \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\""
Dec 12 18:42:10.319197 containerd[1990]: time="2025-12-12T18:42:10.317535515Z" level=info msg="TearDown network for sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" successfully"
Dec 12 18:42:10.319197 containerd[1990]: time="2025-12-12T18:42:10.317572973Z" level=info msg="StopPodSandbox for \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" returns successfully"
Dec 12 18:42:10.319197 containerd[1990]: time="2025-12-12T18:42:10.318061405Z" level=info msg="RemovePodSandbox for \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\""
Dec 12 18:42:10.319197 containerd[1990]: time="2025-12-12T18:42:10.318088464Z" level=info msg="Forcibly stopping sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\""
Dec 12 18:42:10.319197 containerd[1990]: time="2025-12-12T18:42:10.318181113Z" level=info msg="TearDown network for sandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" successfully"
Dec 12 18:42:10.320573 containerd[1990]: time="2025-12-12T18:42:10.320524410Z" level=info msg="Ensure that sandbox cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae in task-service has been cleanup successfully"
Dec 12 18:42:10.327573 containerd[1990]: time="2025-12-12T18:42:10.327261309Z" level=info msg="RemovePodSandbox \"cfccfc4ceb5af41356d706211bfb17b40d9a7200ccc8a56aa9b63886f5958dae\" returns successfully"
Dec 12 18:42:10.328139 containerd[1990]: time="2025-12-12T18:42:10.328093034Z" level=info msg="StopPodSandbox for \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\""
Dec 12 18:42:10.329036 containerd[1990]: time="2025-12-12T18:42:10.328950995Z" level=info msg="TearDown network for sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" successfully"
Dec 12 18:42:10.329036 containerd[1990]: time="2025-12-12T18:42:10.328975712Z" level=info msg="StopPodSandbox for \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" returns successfully"
Dec 12 18:42:10.329591 containerd[1990]: time="2025-12-12T18:42:10.329404092Z" level=info msg="RemovePodSandbox for \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\""
Dec 12 18:42:10.329591 containerd[1990]: time="2025-12-12T18:42:10.329438996Z" level=info msg="Forcibly stopping sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\""
Dec 12 18:42:10.329591 containerd[1990]: time="2025-12-12T18:42:10.329535793Z" level=info msg="TearDown network for sandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" successfully"
Dec 12 18:42:10.330766 containerd[1990]: time="2025-12-12T18:42:10.330731895Z" level=info msg="Ensure that sandbox d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317 in task-service has been cleanup successfully"
Dec 12 18:42:10.338054 containerd[1990]: time="2025-12-12T18:42:10.337807423Z" level=info msg="RemovePodSandbox \"d1188285dfa2fdb62f708da105b044e9500e312e9d14fc08ef1fc9fc33053317\" returns successfully"
Dec 12 18:42:12.327105 kubelet[3527]: E1212 18:42:12.326438 3527 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": context deadline exceeded"
Dec 12 18:42:12.376444 systemd[1]: cri-containerd-b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc.scope: Deactivated successfully.
Dec 12 18:42:12.376868 systemd[1]: cri-containerd-b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc.scope: Consumed 3.589s CPU time, 72.8M memory peak, 20.9M read from disk.
Dec 12 18:42:12.388497 containerd[1990]: time="2025-12-12T18:42:12.388450313Z" level=info msg="received container exit event container_id:\"b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc\" id:\"b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc\" pid:3170 exit_status:1 exited_at:{seconds:1765564932 nanos:388108118}"
Dec 12 18:42:12.434206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc-rootfs.mount: Deactivated successfully.
Dec 12 18:42:12.960304 kubelet[3527]: I1212 18:42:12.960193 3527 scope.go:117] "RemoveContainer" containerID="b29736da306e43a4361fe71677c80378ec9b4b27da12b689ece38b446bfa7cbc"
Dec 12 18:42:12.964504 containerd[1990]: time="2025-12-12T18:42:12.964459126Z" level=info msg="CreateContainer within sandbox \"bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 12 18:42:13.064149 containerd[1990]: time="2025-12-12T18:42:13.062274625Z" level=info msg="Container 79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:42:13.068062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733175837.mount: Deactivated successfully.
Dec 12 18:42:13.079305 containerd[1990]: time="2025-12-12T18:42:13.079238891Z" level=info msg="CreateContainer within sandbox \"bee67bf8680fdd17231ad5c48b09eca705707dc9c80523a8ed234bb11493b56f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d\""
Dec 12 18:42:13.080715 containerd[1990]: time="2025-12-12T18:42:13.080614695Z" level=info msg="StartContainer for \"79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d\""
Dec 12 18:42:13.087570 containerd[1990]: time="2025-12-12T18:42:13.087321663Z" level=info msg="connecting to shim 79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d" address="unix:///run/containerd/s/e5516a5355ea4034f362be41a02cb87caec5c9caac0b7e0cc242e697d8bdf75f" protocol=ttrpc version=3
Dec 12 18:42:13.125873 systemd[1]: Started cri-containerd-79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d.scope - libcontainer container 79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d.
Dec 12 18:42:13.189685 containerd[1990]: time="2025-12-12T18:42:13.189636394Z" level=info msg="StartContainer for \"79037dbede3c3ee979b71953666618315b6c536285a762773ef080d49c3c831d\" returns successfully"
Dec 12 18:42:18.494768 systemd[1]: cri-containerd-58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72.scope: Deactivated successfully.
Dec 12 18:42:18.495975 systemd[1]: cri-containerd-58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72.scope: Consumed 3.066s CPU time, 30.1M memory peak, 11.1M read from disk.
Dec 12 18:42:18.500359 containerd[1990]: time="2025-12-12T18:42:18.500288023Z" level=info msg="received container exit event container_id:\"58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72\" id:\"58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72\" pid:3186 exit_status:1 exited_at:{seconds:1765564938 nanos:499845089}"
Dec 12 18:42:18.530251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72-rootfs.mount: Deactivated successfully.
Dec 12 18:42:18.979788 kubelet[3527]: I1212 18:42:18.979755 3527 scope.go:117] "RemoveContainer" containerID="58675a4a5a9bd843128f3b43ea052c04911ffa13fa5382e91dab4a7018702b72"
Dec 12 18:42:18.982232 containerd[1990]: time="2025-12-12T18:42:18.982051500Z" level=info msg="CreateContainer within sandbox \"cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 12 18:42:19.010584 containerd[1990]: time="2025-12-12T18:42:18.998929873Z" level=info msg="Container 670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:42:19.013543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832729925.mount: Deactivated successfully.
Dec 12 18:42:19.029815 containerd[1990]: time="2025-12-12T18:42:19.029759446Z" level=info msg="CreateContainer within sandbox \"cdfb7e4ccc89fa286a5ef4e7f8439590747d5baf91dc5c1ed3cf9307b07695c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7\""
Dec 12 18:42:19.030687 containerd[1990]: time="2025-12-12T18:42:19.030650730Z" level=info msg="StartContainer for \"670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7\""
Dec 12 18:42:19.032295 containerd[1990]: time="2025-12-12T18:42:19.032249641Z" level=info msg="connecting to shim 670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7" address="unix:///run/containerd/s/d27da8adcce9186c7635810e9e49df4fc4a087b4d340e6a97126e3a75eeb6679" protocol=ttrpc version=3
Dec 12 18:42:19.068823 systemd[1]: Started cri-containerd-670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7.scope - libcontainer container 670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7.
Dec 12 18:42:19.134096 containerd[1990]: time="2025-12-12T18:42:19.134048506Z" level=info msg="StartContainer for \"670ff90d4194e03bedae2b51d0c30f80a4523f7778c930d7cde066d8d30fefb7\" returns successfully"
Dec 12 18:42:22.327631 kubelet[3527]: E1212 18:42:22.327566 3527 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 18:42:32.328728 kubelet[3527]: E1212 18:42:32.328651 3527 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": context deadline exceeded"