Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach

14 November 2023
Giovanni Luca Marchetti
Gabriele Cesa
Kumar Pratik
Arash Behboodi
Abstract

Lattice reduction is a combinatorial optimization problem aimed at finding the most orthogonal basis in a given lattice. The Lenstra-Lenstra-Lovász (LLL) algorithm is the best algorithm in the literature for solving this problem. In light of recent research on algorithm discovery, in this work we ask: is it possible to parametrize the algorithm space for the lattice reduction problem with neural networks and discover an algorithm without supervised data? Our strategy is to use equivariant and invariant parametrizations and to train in a self-supervised way. We design a deep neural model that outputs factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases. We incorporate the symmetries of lattice reduction into the model by making it invariant to isometries and scaling of the ambient space, and equivariant with respect to the hyperoctahedral group permuting and flipping the lattice basis elements. We show that this approach yields an algorithm with complexity and performance comparable to the LLL algorithm on a set of benchmarks. Additionally, motivated by applications in wireless communication, we extend our method to a convolutional architecture that performs joint reduction of spatially correlated lattices arranged in a grid, thereby amortizing its cost over multiple lattices.
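
The sketch below is a minimal, hypothetical NumPy illustration of the kind of self-supervised objective the abstract describes: the orthogonality defect of a basis equals 1 exactly when the basis is orthogonal, so its logarithm can act as a penalty on the basis produced by a candidate unimodular transform. The function names, the exact penalty, and the toy transform U are illustrative assumptions, not the paper's model or loss.

import numpy as np

def orthogonality_defect(B):
    # Basis vectors are the columns of B; the defect is the product of their
    # norms divided by the lattice volume |det B|, and equals 1 exactly when
    # the columns are orthogonal (Hadamard's inequality).
    col_norms = np.linalg.norm(B, axis=0)
    volume = np.abs(np.linalg.det(B))
    return float(np.prod(col_norms) / volume)

def self_supervised_penalty(B, U):
    # U is a (hypothetical) integer matrix with determinant +/-1, standing in
    # for the unimodular output of a learned model; B @ U spans the same lattice
    # as B, and the penalty is zero exactly when the transformed basis is orthogonal.
    return float(np.log(orthogonality_defect(B @ U)))

# Toy usage on a skewed 2-D basis: the unimodular step U makes the basis orthogonal.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
U = np.array([[1, -1],
              [0, 1]])
print(self_supervised_penalty(B, np.eye(2, dtype=int)))  # ~0.35 for the skewed basis
print(self_supervised_penalty(B, U))                      # 0.0 for the reduced basis

Because this penalty depends only on the lattice spanned by B @ U, and not on labeled reduced bases, it can be minimized without supervised data, which is the sense in which such training is self-supervised.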

@article{marchetti2025_2311.08170,
  title={Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach},
  author={Giovanni Luca Marchetti and Gabriele Cesa and Kumar Pratik and Arash Behboodi},
  journal={arXiv preprint arXiv:2311.08170},
  year={2025}
}