ResearchTrend.AI
NLP4Neuro: Sequence-to-sequence learning for neural population decoding

3 July 2025
Jacob J. Morra
Kaitlyn E. Fouke
Kexin Hang
Zichen He
Owen Traubert
Timothy W. Dunn
Eva A. Naumann
Main: 9 pages · 9 figures · 6 tables · Bibliography: 4 pages · Appendix: 4 pages
Abstract

Delineating how animal behavior arises from neural activity is a foundational goal of neuroscience. However, because the computations underlying behavior unfold in networks of thousands of individual neurons across the entire brain, investigating neural roles and computational mechanisms during behavior in large, densely wired mammalian brains is challenging. Transformers, the backbones of modern large language models (LLMs), have become powerful tools for neural decoding from smaller neural populations. These modern LLMs benefit from extensive pre-training, and their sequence-to-sequence learning has been shown to generalize to novel tasks and data modalities, which may also confer advantages for neural decoding from larger, brain-wide activity recordings. Here we present NLP4Neuro, a systematic evaluation of off-the-shelf LLMs for decoding behavior from brain-wide populations, which we used to test LLMs on simultaneous calcium imaging and behavior recordings in larval zebrafish exposed to visual motion stimuli. Through NLP4Neuro, we found that LLMs become better at neural decoding when they use pre-trained weights learned from textual natural language data. Moreover, we found that a recent mixture-of-experts LLM, DeepSeek Coder-7b, significantly improved behavioral decoding accuracy, predicted tail movements over long timescales, and provided anatomically consistent, highly interpretable readouts of neuron salience. NLP4Neuro demonstrates that LLMs are highly capable of informing brain-wide neural circuit dissection.
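The sequence-to-sequence framing the abstract describes can be sketched as follows: treat each imaging frame's population activity vector as a "token," project it into a transformer backbone's embedding space, and regress the behavioral readout (e.g., tail movement) at every timestep. This is a minimal, hypothetical illustration of the interface only — the paper uses pre-trained LLM backbones such as DeepSeek Coder-7b, whereas here a small randomly initialized PyTorch encoder stands in, and the layer sizes and names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralSeq2SeqDecoder(nn.Module):
    """Sketch of sequence-to-sequence neural population decoding.

    Activity from n_neurons at each frame is linearly projected into the
    backbone's embedding dimension, passed through a transformer, and read
    out as a per-timestep behavioral prediction. In the actual study the
    backbone would be a pre-trained LLM; this stand-in is untrained.
    """

    def __init__(self, n_neurons: int, d_model: int = 64, n_behavior: int = 1):
        super().__init__()
        self.input_proj = nn.Linear(n_neurons, d_model)   # neurons -> "token" embeddings
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.readout = nn.Linear(d_model, n_behavior)     # embeddings -> behavior (e.g. tail angle)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_neurons) -> (batch, time, n_behavior)
        h = self.backbone(self.input_proj(x))
        return self.readout(h)

model = NeuralSeq2SeqDecoder(n_neurons=500)
activity = torch.randn(2, 20, 500)   # 2 trials, 20 imaging frames, 500 neurons
pred = model(activity)               # per-frame behavioral prediction
```

Because the backbone preserves sequence length, the decoder emits one behavioral prediction per imaging frame, which matches the paper's goal of predicting tail movements over long timescales.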

@article{morra2025_2507.02264,
  title={NLP4Neuro: Sequence-to-sequence learning for neural population decoding},
  author={Jacob J. Morra and Kaitlyn E. Fouke and Kexin Hang and Zichen He and Owen Traubert and Timothy W. Dunn and Eva A. Naumann},
  journal={arXiv preprint arXiv:2507.02264},
  year={2025}
}