
arXiv:2507.23197

Solution-aware vs global ReLU selection: partial MILP strikes back for DNN verification

31 July 2025
Yuke Liao
Blaise Genest
Kuldeep Meel
Shaan Aryaman
Main: 19 pages · Appendix: 9 pages · Bibliography: 4 pages · 9 figures · 14 tables
Abstract

To handle complex instances, we revisit a divide-and-conquer approach to break down the complexity: instead of a few complex BaB calls, we rely on many small partial MILP calls. The crucial step is to select very few but very important ReLUs to treat with (costly) binary variables; previous attempts were suboptimal in that respect. To select these important ReLU variables, we propose a novel solution-aware ReLU scoring (SAS), and we adapt the BaB-SR and BaB-FSB branching functions into global ReLU scoring (GS) functions. We compare them theoretically as well as experimentally, and SAS is more efficient at selecting the set of variables to open with binary variables. Compared with previous attempts, SAS reduces the number of binary variables by around 6 times while maintaining the same level of accuracy. Implemented in Hybrid MILP, which first calls α,β-CROWN with a short timeout to solve the easier instances and then falls back to partial MILP, this yields a very accurate yet efficient verifier, reducing the number of undecided instances by up to 40%, down to low levels (8-15%), while keeping a reasonable runtime (46s-417s on average per instance), even for fairly large CNNs with 2 million parameters.
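The two-stage pipeline described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `crown`, `partial_milp`, and `score_fn` are hypothetical placeholders standing in for the actual α,β-CROWN call, the partial MILP solver, and the SAS/GS scoring function.

```python
def select_important_relus(scores, k):
    """Rank ReLUs by score and keep only the top k as (costly) binary
    variables; all others retain a cheap linear relaxation."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

def hybrid_milp_verify(instance, crown, partial_milp, score_fn,
                       k=10, crown_timeout=10.0):
    """Illustrative Hybrid MILP control flow: a short incomplete-verifier
    attempt first, then one partial MILP call on the hard leftovers."""
    # Stage 1: fast incomplete verifier with a short timeout.
    status = crown(instance, crown_timeout)
    if status in ("verified", "falsified"):
        return status
    # Stage 2: score every ReLU (e.g. solution-aware scoring, SAS), open
    # only the few highest-scoring ones as binary variables, and solve.
    binary = select_important_relus(score_fn(instance), k)
    return partial_milp(instance, binary)

# Usage with stub solvers: the CROWN stage returns "unknown", so the
# instance falls through to the partial MILP stage.
result = hybrid_milp_verify(
    instance=None,
    crown=lambda inst, t: "unknown",
    partial_milp=lambda inst, binary: "verified",
    score_fn=lambda inst: [0.1, 0.9, 0.5, 0.7],
    k=2,
)
```

The point of the top-k selection is the cost asymmetry the abstract relies on: each binary variable roughly doubles the MILP search space, so opening only the few highest-scoring ReLUs keeps each partial MILP call small.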
