Perception Activator: An intuitive and portable framework for brain cognitive exploration

3 July 2025
Le Xu
Qi Zhang
Qixian Zhang
Hongyun Zhang
Duoqian Miao
Cairong Zhao
arXiv:2507.02311
Main: 12 pages, 9 figures, 3 tables; Bibliography: 2 pages; Appendix: 3 pages
Abstract

Recent advances in brain-vision decoding have driven significant progress, reconstructing perceived visual stimuli with high fidelity from neural activity in the human visual cortex, e.g., functional magnetic resonance imaging (fMRI). Most existing methods decode brain signals with a two-level strategy, i.e., pixel-level and semantic-level. However, these methods rely heavily on low-level pixel alignment and lack sufficient fine-grained semantic alignment, resulting in obvious reconstruction distortions when multiple semantic objects are present. To better understand the brain's visual perception patterns and how current decoding models process semantic objects, we have developed an experimental framework that uses fMRI representations as intervention conditions. By injecting these representations into multi-scale image features via cross-attention, we compare both downstream performance and intermediate feature changes on object detection and instance segmentation tasks, with and without fMRI information. Our results demonstrate that incorporating fMRI signals enhances the accuracy of downstream detection and segmentation, confirming that fMRI contains rich multi-object semantic cues and coarse spatial localization information, elements that current models have yet to fully exploit or integrate.
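
A minimal sketch of the cross-attention injection described in the abstract is given below. This is not the authors' released code: the module name (FMRICrossAttentionInjector), the feature dimensions, the number of fMRI tokens, and the residual fusion are all illustrative assumptions. It only shows the general idea of using image features as queries and projected fMRI tokens as keys/values at each feature scale.

# Minimal sketch (assumed shapes and names): inject an fMRI embedding into
# multi-scale image feature maps via cross-attention, so downstream
# detection/segmentation heads can be compared with and without fMRI input.
import torch
import torch.nn as nn


class FMRICrossAttentionInjector(nn.Module):
    """Queries come from image features; keys/values come from fMRI tokens."""

    def __init__(self, feat_dims, fmri_dim, embed_dim=256, num_heads=8, num_tokens=16):
        super().__init__()
        # Project the flat fMRI vector into a small set of tokens (assumed design).
        self.to_tokens = nn.Linear(fmri_dim, num_tokens * embed_dim)
        self.num_tokens, self.embed_dim = num_tokens, embed_dim
        # One cross-attention block per feature scale.
        self.in_proj = nn.ModuleList([nn.Conv2d(d, embed_dim, 1) for d in feat_dims])
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(embed_dim, num_heads, batch_first=True) for _ in feat_dims]
        )
        self.out_proj = nn.ModuleList([nn.Conv2d(embed_dim, d, 1) for d in feat_dims])

    def forward(self, feats, fmri):
        # feats: list of (B, C_i, H_i, W_i) maps; fmri: (B, fmri_dim)
        tokens = self.to_tokens(fmri).view(-1, self.num_tokens, self.embed_dim)
        fused = []
        for f, proj_in, attn, proj_out in zip(feats, self.in_proj, self.attn, self.out_proj):
            b, _, h, w = f.shape
            q = proj_in(f).flatten(2).transpose(1, 2)      # (B, H*W, embed_dim)
            ctx, _ = attn(q, tokens, tokens)               # cross-attention over fMRI tokens
            ctx = proj_out(ctx.transpose(1, 2).view(b, self.embed_dim, h, w))
            fused.append(f + ctx)                          # residual injection
        return fused


if __name__ == "__main__":
    # Toy feature pyramid and fMRI vector with made-up sizes.
    feats = [torch.randn(2, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
    injector = FMRICrossAttentionInjector([256, 512, 1024], fmri_dim=4096)
    print([f.shape for f in injector(feats, torch.randn(2, 4096))])

Swapping the fMRI tokens for a zero or learned null embedding gives the "without fMRI" control that the downstream comparison described above requires.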

@article{xu2025_2507.02311,
  title={Perception Activator: An intuitive and portable framework for brain cognitive exploration},
  author={Le Xu and Qi Zhang and Qixian Zhang and Hongyun Zhang and Duoqian Miao and Cairong Zhao},
  journal={arXiv preprint arXiv:2507.02311},
  year={2025}
}