A Universal Growth Rate for Learning with Smooth Surrogate Losses. Neural Information Processing Systems (NeurIPS), 2024.
On the Inconsistency of Separable Losses for Structured Prediction. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023.
Sparse Continuous Distributions and Fenchel-Young Losses. André F. T. Martins, Marcos Vinícius Treviso, António Farinhas, P. Aguiar, Mário A. T. Figueiredo, Mathieu Blondel, Vlad Niculae.
On the Consistency of Max-Margin Losses. International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.