arXiv:2507.12142

LoRA meets Riemannion: Muon Optimizer for Parametrization-independent Low-Rank Adapters

16 July 2025
Vladimir Bogachev, Vladimir Aletov, Alexander Molozhavenko, Denis Bobkov, Vera Soboleva, Aibek Alanov, Maxim Rakhuba
Main text: 8 pages, 4 figures, 7 tables; bibliography: 6 pages; appendix: 11 pages.
Abstract

This work presents a novel, fully Riemannian framework for Low-Rank Adaptation (LoRA) that treats low-rank adapters geometrically by optimizing them directly on the fixed-rank matrix manifold. This formulation eliminates the parametrization ambiguity present in standard Euclidean optimizers: any adapter W = BA can be refactored as W = (BQ)(Q^{-1}A) for an invertible Q, so Euclidean updates to the factors depend on the arbitrary factorization chosen, whereas updates on the manifold do not. The framework integrates three key components: (1) Riemannion, a new Riemannian optimizer on the fixed-rank matrix manifold that generalizes the recently proposed Muon optimizer; (2) a Riemannian gradient-informed LoRA initialization; and (3) an efficient, low-overhead implementation that uses automatic differentiation to compute the required geometric operations while adhering to best practices in numerical linear algebra. Comprehensive experiments on both LLM and diffusion-model architectures demonstrate consistent, noticeable improvements in convergence speed and final task performance over both standard LoRA and its state-of-the-art modifications.
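
As background for component (1): optimizing directly on the manifold of rank-r matrices requires projecting the Euclidean gradient onto the tangent space of that manifold before taking a step. The abstract does not state the formula, but the standard tangent-space projection from fixed-rank Riemannian geometry, at a point X = U Σ V^T given by its thin SVD, is

    P_X(G) = U U^T G + G V V^T - U U^T G V V^T.

Any optimizer on this manifold, presumably including Riemannion, builds its update from such tangent-space quantities and then retracts the result back onto the manifold.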

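To make the projection-and-retraction recipe concrete, the following is a minimal Python/PyTorch sketch of one plain Riemannian gradient-descent step on the fixed-rank manifold: project the Euclidean gradient onto the tangent space, step, and retract via truncated SVD. This is illustrative background only, not the paper's Riemannion optimizer (which additionally generalizes Muon's orthogonalized update); the function name, rank, and learning rate here are hypothetical.

import torch

def riemannian_step(W: torch.Tensor, euclid_grad: torch.Tensor,
                    r: int, lr: float) -> torch.Tensor:
    """One illustrative Riemannian GD step on the manifold of rank-r matrices."""
    # Thin SVD of the current point gives its column/row spaces U, V.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U, Vh = U[:, :r], Vh[:r, :]

    # Project the Euclidean gradient onto the tangent space at W:
    # P_W(G) = U U^T G + G V V^T - U U^T G V V^T
    UtG = U.T @ euclid_grad
    GV = euclid_grad @ Vh.T
    tangent = U @ UtG + GV @ Vh - U @ (UtG @ Vh.T) @ Vh

    # Retract back onto the manifold: truncated SVD of the updated matrix
    # (the metric-projection retraction, one of several standard choices).
    U2, S2, Vh2 = torch.linalg.svd(W - lr * tangent, full_matrices=False)
    return U2[:, :r] @ torch.diag(S2[:r]) @ Vh2[:r, :]

# Toy usage: recover a random rank-r target T by minimizing ||W - T||_F^2.
torch.manual_seed(0)
r, m, n = 4, 32, 16
T = torch.randn(m, r) @ torch.randn(r, n)   # rank-r target
W = torch.randn(m, r) @ torch.randn(r, n)   # rank-r starting point
for _ in range(200):
    W = riemannian_step(W, 2.0 * (W - T), r, lr=5e-2)   # grad of squared loss
print(f"final loss: {torch.norm(W - T).item():.4e}")

In the actual method, the Euclidean gradient would come from autograd on the adapted model's loss, and Riemannion presumably applies a Muon-style orthogonalization to the tangent-space update; the exact update rule and retraction are specified in the paper, not here.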