DeepDFA: Automata Learning through Neural Probabilistic Relaxations

European Conference on Artificial Intelligence (ECAI), 2024
Main: 7 pages · Appendix: 9 pages · Bibliography: 1 page · 4 figures · 15 tables
Abstract

In this work, we introduce DeepDFA, a novel approach to identifying Deterministic Finite Automata (DFAs) from traces, harnessing a differentiable yet discrete model. Inspired by both the probabilistic relaxation of DFAs and Recurrent Neural Networks (RNNs), our model offers interpretability post-training, alongside reduced complexity and enhanced training efficiency compared to traditional RNNs. Moreover, by leveraging gradient-based optimization, our method surpasses combinatorial approaches in both scalability and noise resilience. Validation experiments conducted on target regular languages of varying size and complexity demonstrate that our approach is accurate, fast, and robust to noise in both the input symbols and the output labels of training data, integrating the strengths of both logical grammar induction and deep learning.
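The core idea of a probabilistic relaxation of a DFA can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the authors' implementation): each symbol's transition function becomes a row-stochastic matrix obtained by a softmax over learnable logits, so the forward pass over a trace is differentiable, while a low softmax temperature drives the model toward a hard, interpretable automaton. The 2-state parity automaton below (even number of symbol `a` accepted) is a hypothetical example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical 2-state relaxed DFA over alphabet {'a': 0, 'b': 1} that
# accepts strings with an even number of 'a'. In DeepDFA-style training
# the logits would be learned by gradient descent; here they are set by
# hand to show how a sharp softmax recovers a deterministic automaton.
tau = 10.0                              # low temperature -> near-hard transitions
logits = np.zeros((2, 2, 2))            # [symbol, state, next_state]
logits[0] = tau * np.array([[0., 1.],   # 'a': flip between states 0 and 1
                            [1., 0.]])
logits[1] = tau * np.array([[1., 0.],   # 'b': stay in the current state
                            [0., 1.]])
T = softmax(logits, axis=-1)            # row-stochastic transition matrices
accept = np.array([1.0, 0.0])           # state 0 is accepting

def accept_prob(word):
    """Probability that the relaxed DFA accepts `word` (list of symbol ids)."""
    dist = np.array([1.0, 0.0])         # start deterministically in state 0
    for sym in word:
        dist = dist @ T[sym]            # propagate the state distribution
    return float(dist @ accept)
```

Because acceptance is a product of stochastic matrices applied to a state distribution, it is differentiable in the logits, which is what allows gradient-based optimization to replace combinatorial search over automata.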
