Going Beyond Classification Accuracy Metrics in Model Compression

3 December 2020
Vinu Joseph
Shoaib Ahmed Siddiqui
Aditya Bhaskara
Ganesh Gopalakrishnan
Saurav Muralidharan
M. Garland
Sheraz Ahmed
Andreas Dengel
Abstract

With the rise of edge-computing devices, there is increasing demand for energy- and resource-efficient models. A large body of research has been devoted to methods that considerably reduce model size without affecting standard metrics such as top-1 accuracy. However, these pruning approaches tend to produce a significant mismatch in other metrics, such as fairness across classes and explainability. To combat this misalignment, we propose a novel multi-part loss function inspired by the knowledge-distillation literature. Through extensive experiments, we demonstrate the effectiveness of our approach across different compression algorithms, architectures, tasks, and datasets. In particular, we obtain up to a 4.1× reduction in the number of prediction mismatches between the compressed and reference models, and up to a 5.7× reduction in cases where the reference model makes the correct prediction, all while making no changes to the compression algorithm and only minor modifications to the loss function. Furthermore, we demonstrate how inducing simple alignment between the models' predictions naturally improves alignment on other metrics, including fairness and attributions. Our framework can thus serve as a simple plug-and-play component for future compression algorithms.
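The abstract does not specify the exact form of the multi-part loss, but a common knowledge-distillation-style construction combines a standard cross-entropy term on the labels with a temperature-scaled KL term that pulls the compressed (student) model's predictions toward the reference (teacher) model's. The sketch below illustrates that generic pattern in NumPy; the weighting `alpha`, temperature `T`, and the specific decomposition are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax, computed stably by subtracting the row max."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def alignment_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Illustrative multi-part loss (hypothetical weighting, not the paper's):
    (1 - alpha) * CE(student, labels) + alpha * T^2 * KL(teacher || student).

    The KL term penalizes prediction mismatches between the compressed
    (student) and reference (teacher) models; the CE term preserves accuracy.
    """
    labels = np.asarray(labels)
    # Cross-entropy of the student against the ground-truth labels.
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(len(labels)), labels] + 1e-12))
    # KL divergence from teacher to student at temperature T.
    p_t = softmax(teacher_logits, T)
    p_s_T = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s_T + 1e-12)),
                        axis=-1))
    # T^2 rescaling keeps the KL gradient magnitude comparable across T.
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

Because the KL term vanishes when the two models agree exactly, minimizing this loss directly targets the prediction-mismatch metric the abstract reports, without any change to the compression algorithm itself.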
