ResearchTrend.AI
Cooper: A Library for Constrained Optimization in Deep Learning

1 April 2025
Jose Gallego-Posada
Juan Ramirez
Meraj Hashemizadeh
Simon Lacoste-Julien
Abstract

Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch, such as automatic differentiation and specialized deep learning architectures and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated from mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at this https URL.
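To illustrate the family of Lagrangian-based first-order schemes the abstract refers to, here is a minimal hand-rolled sketch on a one-dimensional toy problem: gradient descent on the primal variable alternated with projected gradient ascent on the multiplier. This is an illustration of the general technique only, not Cooper's actual API; the function and parameter names are ours.

```python
# Sketch of a Lagrangian-based first-order update scheme:
# alternating primal descent / projected dual ascent on
#   minimize f(x) = (x - 2)^2   subject to   g(x) = x - 1 <= 0.
# The KKT point is x* = 1 with multiplier lam* = 2.
# NOT Cooper's API -- a generic illustration of the algorithm family.

def solve(lr=0.05, steps=2000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        # Gradient of the Lagrangian L(x, lam) = f(x) + lam * g(x) in x.
        grad_x = 2.0 * (x - 2.0) + lam
        x -= lr * grad_x                      # primal descent step
        lam = max(0.0, lam + lr * (x - 1.0))  # dual ascent, projected onto lam >= 0
    return x, lam

x, lam = solve()
print(x, lam)  # converges near the KKT point (1, 2)
```

In a deep learning setting, the same primal step would be taken by a stochastic optimizer on mini-batch gradient estimates, which is the regime Cooper targets.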

@article{gallego-posada2025_2504.01212,
  title={Cooper: A Library for Constrained Optimization in Deep Learning},
  author={Jose Gallego-Posada and Juan Ramirez and Meraj Hashemizadeh and Simon Lacoste-Julien},
  journal={arXiv preprint arXiv:2504.01212},
  year={2025}
}