HadamRNN: Binary and Sparse Ternary Orthogonal RNNs

28 January 2025
Armand Foucault
Franck Mamalet
François Malgouyres
Abstract

Binary and sparse ternary weights in neural networks enable faster computations and lighter representations, facilitating their use on edge devices with limited computational power. However, vanilla RNNs are highly sensitive to changes in their recurrent weights, making binarization and ternarization of these weights inherently challenging. To date, no method has successfully achieved binarization or ternarization of vanilla RNN weights. We present a new approach that leverages the properties of Hadamard matrices to parameterize a subset of binary and sparse ternary orthogonal matrices. This method enables the training of orthogonal RNNs (ORNNs) with binary and sparse ternary recurrent weights, effectively creating a specific class of binary and sparse ternary vanilla RNNs. The resulting ORNNs, called HadamRNN and Block-HadamRNN, are evaluated on benchmarks including the copy task, the permuted and sequential MNIST tasks, the IMDB dataset, two GLUE benchmarks, and two IoT benchmarks. Despite binarization or sparse ternarization, these RNNs maintain performance comparable to state-of-the-art full-precision models, highlighting the effectiveness of our approach. Notably, ours is the first approach with binary recurrent weights capable of tackling the copy task over 1000 timesteps.
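
The key fact behind this construction is that an n-by-n Hadamard matrix H has entries in {-1, +1} and satisfies H Hᵀ = nI, so H/√n is orthogonal while remaining binary up to a single global scale. The sketch below illustrates this with Sylvester's construction, plus a block-diagonal ternary variant; the helper name sylvester_hadamard, the input dimension, and the block layout are illustrative assumptions for this sketch, not the paper's exact parameterization.

    import numpy as np

    def sylvester_hadamard(n: int) -> np.ndarray:
        """n x n Hadamard matrix (entries +/-1) via Sylvester's construction; n a power of 2."""
        assert n > 0 and (n & (n - 1)) == 0, "n must be a power of 2"
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    n = 8
    H = sylvester_hadamard(n)        # binary entries in {-1, +1}
    W = H / np.sqrt(n)               # H @ H.T == n * I, so W is orthogonal
    assert np.allclose(W @ W.T, np.eye(n))

    # A block-diagonal tiling of smaller Hadamard blocks gives a sparse ternary
    # ({-1, 0, +1}, up to scale) orthogonal matrix; illustrative only, the
    # paper's exact Block-HadamRNN parameterization may differ.
    B = np.kron(np.eye(2), sylvester_hadamard(4)) / np.sqrt(4)
    assert np.allclose(B @ B.T, np.eye(8))

    # One vanilla-RNN step h_t = tanh(W h_{t-1} + U x_t). Since H is +/-1 up to
    # the global 1/sqrt(n) scale, W @ h needs only additions and subtractions.
    rng = np.random.default_rng(0)
    U = rng.normal(size=(n, 4))      # hypothetical full-precision input weights
    h, x = np.zeros(n), rng.normal(size=4)
    h = np.tanh(W @ h + U @ x)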

@article{foucault2025_2502.00047,
  title={HadamRNN: Binary and Sparse Ternary Orthogonal RNNs},
  author={Armand Foucault and Franck Mamalet and François Malgouyres},
  journal={arXiv preprint arXiv:2502.00047},
  year={2025}
}