ResearchTrend.AI

Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs)

10 October 2024
Abhijit Mishra
Shreya Shukla
Jose Torres
Jacek Gwizdka
Shounak Roychowdhury
Abstract

Decoding and expressing brain activity in a comprehensible form is a challenging frontier in AI. This paper presents Thought2Text, which uses instruction-tuned Large Language Models (LLMs) fine-tuned with EEG data to achieve this goal. The approach involves three stages: (1) training an EEG encoder for visual feature extraction, (2) fine-tuning LLMs on image and text data to enable multimodal description generation, and (3) further fine-tuning on EEG embeddings so that text is generated directly from EEG during inference. Experiments on a public EEG dataset collected from six subjects with image stimuli and text captions demonstrate the efficacy of multimodal LLMs (LLaMA-v3, Mistral-v0.3, Qwen2.5), validated using traditional language generation evaluation metrics as well as fluency and adequacy measures. This approach marks a significant advancement towards portable, low-cost "thoughts-to-text" technology with potential applications in both neuroscience and natural language processing.
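The three-stage pipeline in the abstract can be sketched in code. The following is a minimal, hypothetical illustration of the idea, not the authors' implementation: a linear EEG encoder is aligned to precomputed image embeddings with an MSE objective (Stage 1), and the resulting EEG embedding is then handed to a multimodal LLM for text generation (Stages 2-3, shown only as a stub). All names, dimensions, and the loss are illustrative assumptions.

```python
# Hypothetical sketch of the Thought2Text three-stage idea (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

class EEGEncoder:
    """Stage 1: project raw EEG (channels x time) into a visual-feature embedding space."""
    def __init__(self, n_channels, n_samples, embed_dim):
        # A single linear projection stands in for the paper's EEG encoder.
        self.W = rng.normal(0.0, 0.01, (n_channels * n_samples, embed_dim))

    def encode(self, eeg):
        # Flatten the trial and project it into the shared embedding space.
        return eeg.reshape(-1) @ self.W

def train_stage1(encoder, eeg_trials, image_embeddings, lr=1e-2, epochs=10):
    """Align EEG embeddings with image embeddings via per-sample MSE gradient steps
    (a stand-in for whatever alignment loss the paper actually uses)."""
    for _ in range(epochs):
        for eeg, img in zip(eeg_trials, image_embeddings):
            pred = encoder.encode(eeg)
            # Gradient of mean squared error w.r.t. the projection matrix W.
            grad = 2.0 * np.outer(eeg.reshape(-1), pred - img) / len(img)
            encoder.W -= lr * grad
    return encoder

# Stages 2-3 (stubbed): an instruction-tuned LLM is first fine-tuned on
# (image embedding, caption) pairs, then on (EEG embedding, caption) pairs,
# so that at inference text is generated directly from EEG.
def generate_from_eeg(encoder, llm_generate, eeg):
    embedding = encoder.encode(eeg)   # EEG -> shared embedding space
    return llm_generate(embedding)    # multimodal LLM conditioned on the embedding
```

A quick sanity check of Stage 1: with a handful of synthetic (EEG trial, image embedding) pairs, the alignment loss should drop after training, confirming the encoder is moving EEG trials toward their paired image features.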

@article{mishra2025_2410.07507,
  title={Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs)},
  author={Abhijit Mishra and Shreya Shukla and Jose Torres and Jacek Gwizdka and Shounak Roychowdhury},
  journal={arXiv preprint arXiv:2410.07507},
  year={2025}
}