Aggregating empirical evidence from data strategy studies: a case on model quantization

Abstract

Background: As empirical software engineering evolves, more studies adopt data strategies, i.e., approaches that investigate digital artifacts such as models, source code, or system logs rather than relying on human subjects. Synthesizing results from such studies introduces new methodological challenges.
Aims: This study assesses the effects of model quantization on correctness and resource efficiency in deep learning (DL) systems. Additionally, it explores the methodological implications of aggregating evidence from empirical studies that adopt data strategies.
Method: We conducted a research synthesis of six primary studies that empirically evaluate model quantization. We applied the Structured Synthesis Method (SSM), which combines qualitative and quantitative evidence through diagrammatic modeling, to aggregate the findings. A total of 19 evidence models were extracted and aggregated.
Results: The aggregated evidence indicates that model quantization has a weak negative effect on correctness metrics while consistently improving resource-efficiency metrics, including storage size, inference latency, and GPU energy consumption, a manageable trade-off for many DL deployment contexts. Evidence across quantization techniques remains fragmented, underscoring the need for more focused empirical studies per technique.
Conclusions: Model quantization offers substantial efficiency benefits with minor trade-offs in correctness, making it a suitable optimization strategy for resource-constrained environments. This study also demonstrates the feasibility of using SSM to synthesize findings from data strategy-based research.
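
For readers unfamiliar with the technique under study, the sketch below illustrates post-training dynamic quantization, one representative quantization approach, using PyTorch. The model architecture, layer sizes, and the storage-size comparison are illustrative assumptions chosen for this sketch, not artifacts or results from the paper's primary studies.

# Illustrative sketch (not from the paper): post-training dynamic quantization
# of a small PyTorch model, comparing the storage footprint of the fp32 and
# int8 variants, one of the resource-efficiency metrics the synthesis covers.
import os
import torch
import torch.nn as nn

# Hypothetical stand-in for the DL models evaluated in the primary studies.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Dynamic quantization: Linear weights are stored as int8; activations are
# quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the state dict and report its size in megabytes."""
    torch.save(m.state_dict(), "_tmp.pt")
    size = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return size

print(f"fp32 model: {size_mb(model):.2f} MB")
print(f"int8 model: {size_mb(quantized):.2f} MB")

Measuring the other resource-efficiency metrics discussed in the abstract, such as inference latency and GPU energy consumption, would require a measurement harness beyond this sketch.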

@article{rey2025_2505.00816,
  title={Aggregating empirical evidence from data strategy studies: a case on model quantization},
  author={Santiago del Rey and Paulo Sérgio Medeiros dos Santos and Guilherme Horta Travassos and Xavier Franch and Silverio Martínez-Fernández},
  journal={arXiv preprint arXiv:2505.00816},
  year={2025}
}