
When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks

4 February 2025
Felix Drinkall
Janet B. Pierrehumbert
Stefan Zohren
Abstract

Large language models (LLMs) have shown remarkable success in language modelling due to scaling laws found in model size and the hidden dimension of the model's text representation. Yet, we demonstrate that compressed representations of text can yield better performance in LLM-based regression tasks. In this paper, we compare the relative performance of embedding compression in three different signal-to-noise contexts: financial return prediction, writing quality assessment and review scoring. Our results show that compressing embeddings, in a minimally supervised manner using an autoencoder's hidden representation, can mitigate overfitting and improve performance on noisy tasks, such as financial return prediction; but that compression reduces performance on tasks that have high causal dependencies between the input and target data. Our results suggest that the success of interpretable compressed representations such as sentiment may be due to a regularising effect.
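The compression scheme the abstract describes — training an autoencoder on the embeddings alone and feeding its bottleneck codes to a regressor — can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' implementation: the embedding dimension, bottleneck size, linear autoencoder, and least-squares regressor are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for LLM sentence embeddings and a noisy regression target
# (shapes and noise level are illustrative, not from the paper).
n, d, k = 200, 64, 8                                  # samples, embedding dim, bottleneck dim
X = rng.normal(size=(n, d))
y = 0.5 * X[:, 0] + rng.normal(scale=2.0, size=n)     # low signal-to-noise target

# Minimally supervised compression: fit a linear autoencoder on X alone
# (no target labels), by gradient descent on the reconstruction error.
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
lr = 1e-3
for _ in range(500):
    Z = X @ W_enc                       # bottleneck codes, shape (n, k)
    err = Z @ W_dec - X                 # reconstruction residual, shape (n, d)
    # Gradients of mean squared reconstruction error w.r.t. each weight matrix.
    g_dec = Z.T @ err / n
    g_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# Use the autoencoder's hidden representation as compressed features
# for the downstream regression task.
Z = X @ W_enc
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_pred = Z @ beta
```

The regularising effect the paper points to comes from the bottleneck: the regressor sees only `k` features instead of `d`, so it has far fewer parameters to overfit to noise in the target.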

@article{drinkall2025_2502.02199,
  title={When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks},
  author={Felix Drinkall and Janet B. Pierrehumbert and Stefan Zohren},
  journal={arXiv preprint arXiv:2502.02199},
  year={2025}
}