Training slow silicon neurons to control extremely fast robots with spiking reinforcement learning

Irene Ambrosini
Ingo Blakowski
Dmitrii Zendrikov
Cristiano Capone
Luna Gava
Giacomo Indiveri
Chiara De Luca
Chiara Bartolozzi
Main: 4 pages · 2 figures · 1 table · Bibliography: 1 page
Abstract

Air hockey demands split-second decisions at high puck velocities, a challenge we address with a compact network of spiking neurons running on a mixed-signal analog/digital neuromorphic processor. By co-designing hardware and learning algorithms, we train the system to achieve successful puck interactions through reinforcement learning in a remarkably small number of trials. The network leverages fixed random connectivity to capture the task's temporal structure and adopts a local e-prop learning rule in the readout layer to exploit event-driven activity for fast and efficient learning. The result is real-time learning with the neuromorphic chip in the loop with a host computer, enabling practical training of spiking neural networks for autonomous robotic systems. This work bridges neuroscience-inspired hardware with real-world robotic control, showing that brain-inspired approaches can tackle fast-paced interaction tasks while supporting always-on learning in intelligent machines.
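The architecture described above, a fixed random recurrent spiking network with only the readout trained by a local, eligibility-trace-based rule, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: all sizes, time constants, and the loss are hypothetical, the learning signal is a simple readout error rather than a reinforcement-learning return, and the network runs in simulation rather than on the neuromorphic chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and constants -- not taken from the paper.
N_IN, N_REC, N_OUT = 4, 100, 2
DT, TAU_MEM, TAU_TRACE = 1e-3, 20e-3, 20e-3
V_TH, LR = 1.0, 1e-3

# Fixed random connectivity: input and recurrent weights are drawn once
# and never trained; only the readout weights w_out learn.
w_in = rng.normal(0.0, 1.0, (N_REC, N_IN)) / np.sqrt(N_IN)
w_rec = rng.normal(0.0, 1.0, (N_REC, N_REC)) / np.sqrt(N_REC)
w_out = np.zeros((N_OUT, N_REC))

alpha = np.exp(-DT / TAU_MEM)    # membrane-potential decay per step
kappa = np.exp(-DT / TAU_TRACE)  # eligibility-trace decay per step

def run_episode(inputs, targets, w_out):
    """Run one episode of T steps, updating the readout online.

    The update is e-prop-like in spirit: each readout weight keeps a
    local eligibility trace (a low-pass filter of its presynaptic
    spikes) and is moved by the product of that trace with a learning
    signal, here the instantaneous readout error.
    """
    v = np.zeros(N_REC)      # LIF membrane potentials
    trace = np.zeros(N_REC)  # per-neuron eligibility traces
    loss = 0.0
    for x_t, y_star in zip(inputs, targets):
        spikes = (v > V_TH).astype(float)
        # Leaky integrate-and-fire update with reset-to-zero on spike.
        v = alpha * v * (1.0 - spikes) + w_in @ x_t + w_rec @ spikes
        trace = kappa * trace + spikes
        y = w_out @ trace                          # linear readout
        err = y - y_star                           # local learning signal
        w_out = w_out - LR * np.outer(err, trace)  # trace-gated update
        loss += 0.5 * float(err @ err)
    return w_out, loss
```

Because the reservoir dynamics do not depend on the readout weights, repeated episodes on the same data amount to online gradient descent on a convex readout loss, which is what makes learning in few trials plausible in this reduced setting.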
