
NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

Jason Yik
Korneel Van den Berghe
Douwe den Blanken
Younes Bouhadjar
Maxime Fabre
Paul Hueber
Denis Kleyko
Noah Pacik-Nelson
Pao-Sheng Vincent Sun
Guangzhi Tang
Shenqi Wang
Biyan Zhou
Soikat Hasan Ahmed
George Vathakkattil Joseph
Benedetto Leto
Aurora Micheli
Anurag Kumar Mishra
Gregor Lenz
Zergham Ahmed
Mahmoud Akl
Brian Anderson
Andreas G. Andreou
Chiara Bartolozzi
Arindam Basu
Petrut Bogdan
Sander M. Bohté
Sonia Buckley
Gert Cauwenberghs
Elisabetta Chicca
Federico Corradi
Guido de Croon
Andreea Danielescu
Anurag Daram
Mike Davies
Yiğit Demirağ
Jason Eshraghian
Tobias Fischer
Jeremy Forest
Vittorio Fra
Steve Furber
P. Michael Furlong
William Gilpin
Aditya Gilra
Hector A. Gonzalez
Giacomo Indiveri
Siddharth Joshi
Vedant Karia
Lyes Khacef
James C. Knight
Laura Kriener
Rajkumar Kubendran
Yao-Hong Liu
Shih-Chii Liu
Haoyuan Ma
Rajit Manohar
Josep Maria Margarit-Taulé
Christian Mayr
Konstantinos Michmizos
Dylan R. Muir
Emre Neftci
Thomas Nowotny
Fabrizio Ottati
Ayca Ozcelikkale
Priyadarshini Panda
Jongkil Park
Melika Payvand
Christian Pehle
Mihai A. Petrovici
Alessandro Pierro
Christoph Posch
Alpha Renner
Yulia Sandamirskaya
Clemens J. S. Schaefer
André van Schaik
Johannes Schemmel
Samuel Schmidgall
Catherine D. Schuman
Jae-sun Seo
Sadique Sheik
Sumit Bam Shrestha
Manolis Sifalakis
Amos Sironi
Matthew P. Stewart
Kenneth Stewart
Terrence C. Stewart
Philipp Stratmann
Jonathan Timcheck
Nergis Tömen
Gianvito Urgese
Marian Verhelst
Craig M. Vineyard
Bernhard Vogginger
Amirreza Yousefzadeh
Fatima Tuz Zohora
Charlotte Frenkel
Vijay Janapa Reddi
Abstract

Neuromorphic computing shows promise for advancing the computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption because their design and guidelines were not inclusive, actionable, and iterative. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of researchers across industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we outline tasks and guidelines for benchmarks across multiple application domains, and present initial performance baselines for both benchmark tracks across neuromorphic and conventional approaches. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
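To make the algorithm track's notion of hardware-independent measurement concrete, below is a minimal, self-contained Python sketch that scores a toy model on both task correctness and activation sparsity (the fraction of zero activations, a common proxy for event-driven compute savings on neuromorphic hardware). Every name in it (run_model, evaluate, the random toy weights) is hypothetical and for illustration only; it is not the NeuroBench harness API.

```python
import numpy as np

# Toy "model": a fixed random ReLU layer whose activations we can inspect.
# All names here are illustrative only, not the NeuroBench harness API.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 4))  # 16 inputs -> 4 output classes

def run_model(x):
    """Return (predicted class, hidden activations) for one input vector."""
    h = np.maximum(weights.T @ x, 0.0)  # ReLU zeroes out roughly half the units
    return int(np.argmax(h)), h

def evaluate(inputs, labels):
    """Algorithm-track-style metrics: correctness plus activation sparsity,
    i.e. the fraction of activations that are exactly zero."""
    correct, zeros, total = 0, 0, 0
    for x, y in zip(inputs, labels):
        pred, h = run_model(x)
        correct += int(pred == y)
        zeros += int(np.sum(h == 0.0))
        total += h.size
    return {"accuracy": correct / len(labels),
            "activation_sparsity": zeros / total}

inputs = rng.normal(size=(100, 16))
labels = rng.integers(0, 4, size=100)
print(evaluate(inputs, labels))
```

In the same spirit, the algorithm track pairs task correctness with hardware-independent complexity metrics of this kind, while the system track measures deployed implementations directly.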
