Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion

Cycle-level simulators such as gem5 are widely used in microarchitecture design, but they are prohibitively slow for large-scale design-space exploration. We present Concorde, a new methodology for learning fast and accurate performance models of microarchitectures. Unlike existing simulators and learning approaches that emulate each instruction, Concorde predicts the behavior of a program based on compact performance distributions that capture the impact of different microarchitectural components. It derives these performance distributions using simple analytical models that estimate bounds on the performance induced by each microarchitectural component, providing a simple yet rich representation of a program's performance characteristics across a large space of microarchitectural parameters. Experiments show that Concorde is more than five orders of magnitude faster than a reference cycle-level simulator, with about 2% average Cycles-Per-Instruction (CPI) prediction error across a range of SPEC, open-source, and proprietary benchmarks. This enables rapid design-space exploration and performance sensitivity analyses that are currently infeasible. For example, in about an hour, we conducted a first-of-its-kind fine-grained performance attribution to different microarchitectural components across a diverse set of programs, requiring nearly 150 million CPI evaluations.
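To make the compositional idea concrete, the following is a minimal, hypothetical sketch (not Concorde's actual models or features): toy analytical bounds estimate per-instruction cycle costs imposed by two components (reorder buffer and cache misses), each bound stream is compressed into a compact histogram "performance distribution", and a placeholder linear model stands in for the learned predictor that maps the fused distributions to a CPI estimate. All function names, bin edges, and constants here are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-component analytical bound models (illustrative only):
# each returns, for every instruction in a trace, an estimated per-instruction
# cycle cost that this component alone would impose.
def rob_bound(latencies, rob_size=128):
    # Toy model: amortize each instruction's latency over the ROB window,
    # approximating the throughput limit of a full reorder buffer.
    return latencies / rob_size

def cache_bound(miss_flags, miss_penalty=100.0):
    # Toy model: each cache miss adds a fixed cycle penalty.
    return miss_flags * miss_penalty

def summarize(bounds, edges):
    # Compress per-instruction bounds into a compact distribution (histogram),
    # the kind of fixed-size representation the abstract describes.
    hist, _ = np.histogram(bounds, bins=edges)
    return hist / max(len(bounds), 1)

# Synthetic instruction trace: latencies in cycles, plus cache-miss flags.
rng = np.random.default_rng(0)
latencies = rng.integers(1, 5, size=1000).astype(float)
miss_flags = (rng.random(1000) < 0.02).astype(float)

# Fuse the per-component distributions into one feature vector.
features = np.concatenate([
    summarize(rob_bound(latencies), edges=np.linspace(0.0, 0.1, 6)),
    summarize(cache_bound(miss_flags), edges=np.array([0.0, 1.0, 50.0, 150.0])),
])

# Placeholder for the learned component: in Concorde this mapping is trained
# on cycle-level simulator data; here it is an untrained linear layer.
weights = np.ones_like(features) / len(features)
cpi_pred = float(features @ weights + 1.0)  # 1.0 = assumed ideal base CPI
```

Because every program, regardless of length, is reduced to fixed-size distributions, a single forward pass of the learned model replaces per-instruction emulation, which is where the orders-of-magnitude speedup over cycle-level simulation comes from.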
View on arXiv

@article{nasr-esfahany2025_2503.23076,
  title={Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion},
  author={Arash Nasr-Esfahany and Mohammad Alizadeh and Victor Lee and Hanna Alam and Brett W. Coon and David Culler and Vidushi Dadu and Martin Dixon and Henry M. Levy and Santosh Pandey and Parthasarathy Ranganathan and Amir Yazdanbakhsh},
  journal={arXiv preprint arXiv:2503.23076},
  year={2025}
}