
Voltage-Controlled Magnetoelectric Devices for Neuromorphic Diffusion Process

Abstract

Stochastic diffusion processes are pervasive in nature, from the seemingly erratic Brownian motion to the complex interactions of synaptically coupled spiking neurons. Recently, neuromorphic diffusion models inspired by Langevin dynamics were proposed and have become one of the major breakthroughs in generative artificial intelligence. Unlike discriminative models, which are well developed for classification and regression tasks, diffusion models and other generative models such as ChatGPT aim to create content based on learned context. However, the more complex algorithms of these models incur high computational costs on today's technologies, creating an efficiency bottleneck and impeding further development. Here, we develop spintronic voltage-controlled magnetoelectric memory hardware for the neuromorphic diffusion process. The in-memory computing capability of our spintronic devices goes beyond the current von Neumann architecture, in which memory and computing units are separated. Together with the non-volatility of magnetic memory, this enables high-speed, low-cost computing, which is desirable as generative models continue to grow in scale. We experimentally demonstrate that a hardware-based true-random diffusion process can be implemented for image generation, achieving image quality comparable to software-based training as measured by the Fréchet inception distance (FID) score, with a ~10^3 improvement in energy per bit per area over traditional hardware.
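
To make the connection to Langevin dynamics concrete, the sampling loop of a score-based diffusion model can be sketched as below. This is a minimal illustration under assumed names, not the authors' implementation: `score_fn`, `eta`, and `n_steps` are hypothetical placeholders, and the Gaussian noise term, drawn here from a software pseudo-random generator, is the quantity the paper proposes to supply instead with true randomness from the magnetoelectric devices.

```python
import numpy as np

def langevin_sample(score_fn, x0, eta=1e-2, n_steps=1000, rng=None):
    """Unadjusted Langevin sampling sketch:
        x_{t+1} = x_t + eta * score(x_t) + sqrt(2 * eta) * noise,
    with noise ~ N(0, I).

    Assumption: in the paper's hardware the noise term would come from
    stochastic device switching rather than np.random.
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)  # pseudo-random here; true random in hardware
        x = x + eta * score_fn(x) + np.sqrt(2.0 * eta) * noise
    return x

# Toy usage: sample from a standard Gaussian, whose score function is -x.
samples = langevin_sample(lambda x: -x, x0=np.zeros(2))
```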

@article{cheng2025_2407.12261,
  title={Voltage-Controlled Magnetoelectric Devices for Neuromorphic Diffusion Process},
  author={Yang Cheng and Qingyuan Shu and Albert Lee and Haoran He and Ivy Zhu and Minzhang Chen and Renhe Chen and Zirui Wang and Hantao Zhang and Chih-Yao Wang and Shan-Yi Yang and Yu-Chen Hsin and Cheng-Yi Shih and Hsin-Han Lee and Ran Cheng and Kang L. Wang},
  journal={arXiv preprint arXiv:2407.12261},
  year={2025}
}