
Banyan: Improved Representation Learning with Explicit Structure

Abstract

We present Banyan, a model that efficiently learns semantic representations by leveraging explicit hierarchical structure. While transformers excel at scale, they struggle in low-resource settings. Conversely, recent structured models have shown promise as efficient learners, but lag in performance. Banyan bridges this gap with two key innovations: an entangled hierarchical tree structure and diagonalized message passing, enabling it to outperform larger transformer models with just 14 non-embedding parameters. It excels in low-resource settings, offering a viable alternative for under-represented languages and highlighting its potential for efficient, interpretable NLP in resource-constrained environments.

@article{opper2025_2407.17771,
  title={Banyan: Improved Representation Learning with Explicit Structure},
  author={Mattia Opper and N. Siddharth},
  journal={arXiv preprint arXiv:2407.17771},
  year={2025}
}