Modular addition is, on its face, a simple operation: given N elements in Z_q, compute their sum modulo q. Yet scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum N elements mod q only for small N and q. Promising applications of ML models in cryptanalysis, which often involves modular arithmetic with large N and q, motivate reconsidering this problem. This work proposes three changes to the modular addition model training pipeline: more diverse training data, an angular embedding, and a custom loss function. With these changes, we demonstrate success on a case that is interesting for cryptographic applications, with a significant increase in N and q over prior work. These techniques also generalize to other modular arithmetic problems, motivating future work.
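The abstract does not define the angular embedding precisely, but a standard construction for modular data maps each residue to a point on the unit circle, so that 0 and q-1 end up adjacent, as the modular structure demands. A minimal sketch under that assumption (the function name and circle parameterization here are illustrative, not taken from the paper):

```python
import math

def angular_embedding(x: int, q: int) -> tuple[float, float]:
    """Hypothetical angular embedding: represent a residue x mod q
    as the angle 2*pi*x/q on the unit circle. Because the circle
    wraps around, x = 0 and x = q - 1 are embedded close together,
    matching the cyclic structure of arithmetic mod q."""
    theta = 2 * math.pi * (x % q) / q
    return (math.cos(theta), math.sin(theta))

# Nearby residues map to nearby points, including across the wrap:
q = 17
wrap_dist = math.dist(angular_embedding(0, q), angular_embedding(q - 1, q))
far_dist = math.dist(angular_embedding(0, q), angular_embedding(q // 2, q))
# wrap_dist is small (0 and q-1 are adjacent mod q); far_dist is large
```

An integer-label representation would instead place 0 and q-1 maximally far apart, which is one plausible reason such an embedding helps a model learn modular sums.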