Random DFAs are Efficiently PAC Learnable

Abstract

We address the problem of learning a deterministic finite-state automaton (DFA) from labeled examples in the PAC model. This is well-known to be a computationally intractable problem, even if improper learning is allowed. Despite the pessimistic hardness results, a growing body of theoretical and empirical evidence suggests that a random automaton is not nearly as hard to learn as a worst-case one. In particular, {\em typical} automata (whose graph topology may be adversarial but each state is marked as accepting or rejecting by independent coin flips) were shown to be learnable in 1997 by Freund et al., under a distribution on $\Sigma^*$ which is uniform conditioned on string length. We extend the work of Freund et al. in several directions. Our main result is a randomized algorithm with expected running time $\tilde O(mn^4)$ for learning typical (in the sense of Freund et al.) $n$-state DFAs from $m$ labeled strings. The algorithm uses AdaBoost as the main workhorse, and is simpler to implement and analyze than that of Freund et al. We prove a PAC-type generalization error bound of $\tilde O(n^3/m)$, which holds for arbitrary distributions on $\Sigma^*$ (the Freund et al. result only holds for ``random walk'' distributions). Thus, we give the first efficient algorithm for PAC-learning random DFAs. Our approach is quite general and holds potential for other concept classes as well as noisy label settings.
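The abstract names AdaBoost as the algorithmic workhorse. The paper's use of it over DFA-specific weak learners is not reproduced here; purely as a hedged illustration of the boosting subroutine itself, the sketch below is a minimal from-scratch AdaBoost with exhaustive decision stumps on a toy separable dataset (all names and data are illustrative, not from the paper).

```python
import math

def stump_predict(stump, x):
    """A decision stump: threshold test on one feature, with a sign polarity."""
    feature, threshold, polarity = stump
    return polarity if x[feature] >= threshold else -polarity

def best_stump(X, y, w):
    """Exhaustively pick the stump with the least weighted training error."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            for s in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict((f, t, s), xi) != yi)
                if err < best_err:
                    best, best_err = (f, t, s), err
    return best, best_err

def adaboost(X, y, rounds=20):
    """Standard AdaBoost: reweight examples, collect (alpha, weak learner) pairs."""
    n = len(y)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump, err = best_stump(X, y, w)
        err = max(err, 1e-12)          # avoid log/divide-by-zero on perfect stumps
        if err >= 0.5:                 # weak-learning assumption violated: stop
            break
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, stump))
        # Upweight misclassified points, downweight correct ones, renormalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the boosted stumps."""
    score = sum(a * stump_predict(s, x) for a, s in ensemble)
    return 1 if score >= 0 else -1
```

In the paper's setting the weak learners would be DFA-related hypotheses rather than stumps; the boosting loop (reweighting and weighted voting) is the part this sketch shows.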
