NdLinear Is All You Need for Representation Learning

21 March 2025
Alex Reneau
Jerry Yao-Chieh Hu
Zhongfang Zhuang
Ting-Chun Liu
Abstract

Many high-impact machine learning tasks involve multi-dimensional data (e.g., images, volumetric medical scans, multivariate time-series). Yet most neural architectures flatten inputs, discarding critical cross-dimension information. We introduce NdLinear, a novel linear transformation that preserves these structures without extra overhead. By operating separately along each dimension, NdLinear captures dependencies that standard fully connected layers overlook. Extensive experiments across convolutional, recurrent, and transformer-based networks show significant improvements in representational power and parameter efficiency. Crucially, NdLinear serves as a foundational building block for large-scale foundation models by operating on any unimodal or multimodal data in its native form, removing the need for flattening or modality-specific preprocessing. NdLinear rethinks core architectural priorities beyond attention, enabling more expressive, context-aware models at scale. We propose NdLinear as a drop-in replacement for standard linear layers, marking an important step toward next-generation neural architectures.
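The core idea described in the abstract, applying an independent linear map along each axis of the input tensor rather than flattening it, can be sketched in a few lines of PyTorch. The following is an illustrative reconstruction, not the authors' reference implementation; the class name NdLinear and the input_sizes/output_sizes arguments are assumptions made for this sketch.

```python
# Illustrative sketch of a factorized N-dimensional linear layer,
# reconstructed from the abstract; not the authors' reference code.
import torch
import torch.nn as nn

class NdLinear(nn.Module):
    """Applies one nn.Linear per non-batch axis, preserving the
    tensor's N-dimensional structure instead of flattening it."""

    def __init__(self, input_sizes, output_sizes):
        super().__init__()
        assert len(input_sizes) == len(output_sizes)
        # Axis i is mapped from input_sizes[i] to output_sizes[i].
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out)
            for d_in, d_out in zip(input_sizes, output_sizes)
        )

    def forward(self, x):
        # x: (batch, d_1, ..., d_N)
        for i, layer in enumerate(self.layers):
            x = x.movedim(i + 1, -1)  # bring axis i+1 to the last position
            x = layer(x)              # linear map over that axis only
            x = x.movedim(-1, i + 1)  # restore the original axis order
        return x

# Example: a (batch, height, width, channels) feature map.
layer = NdLinear(input_sizes=(16, 16, 32), output_sizes=(16, 16, 64))
x = torch.randn(8, 16, 16, 32)
print(layer(x).shape)  # torch.Size([8, 16, 16, 64])
```

Under this construction, the parameter-efficiency claim follows directly from the factorization: the sketch above needs one weight matrix per axis (16·16 + 16·16 + 32·64 ≈ 2.6K weights, plus biases), whereas a flattened nn.Linear mapping the same 16·16·32 input features to 16·16·64 outputs would need roughly 134M.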

@article{reneau2025_2503.17353,
  title={NdLinear Is All You Need for Representation Learning},
  author={Alex Reneau and Jerry Yao-Chieh Hu and Zhongfang Zhuang and Ting-Chun Liu},
  journal={arXiv preprint arXiv:2503.17353},
  year={2025}
}