Recursive Tree Grammar Autoencoders
Machine learning on tree data has mostly focused on trees as input. Much less research has covered trees as output, which arises in applications such as chemical molecule optimization or hint generation for intelligent tutoring systems. In this work, we propose recursive tree grammar autoencoders (RTG-AEs), which encode trees via a bottom-up parser and decode trees via a tree grammar, both controlled by a neural network that minimizes the variational autoencoder loss. The resulting encoding and decoding functions can then be employed in subsequent tasks, such as optimization and time series prediction. Our key message is that combining grammar knowledge with recursive processing improves performance over models that use either grammar knowledge or recursive processing, but not both. In particular, we show experimentally that our model improves autoencoding error, training time, and optimization score on four benchmark datasets compared to baselines from the literature.
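The abstract compresses the architecture into one sentence; the hypothetical PyTorch sketch below unpacks it under simplifying assumptions: a three-rule toy grammar, one encoding layer per rule that merges child codes bottom-up, a reparameterized latent code, and a decoder that repeatedly scores grammar rules and recurses into the chosen rule's child slots. All identifiers (RTGAE, RULES, ARITY) are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy regular tree grammar (illustrative, not from the paper):
# rule index -> number of children that rule generates.
RULES = ["plus(Expr, Expr)", "neg(Expr)", "x"]
ARITY = [2, 1, 0]
LEAF = ARITY.index(0)  # a leaf rule, used to cap recursion depth

class RTGAE(nn.Module):
    """Minimal sketch of a recursive tree grammar autoencoder."""
    def __init__(self, dim=32):
        super().__init__()
        self.dim = dim
        # Encoder: one layer per grammar rule, merging child codes bottom-up.
        self.enc = nn.ModuleList(nn.Linear(dim * max(a, 1), dim) for a in ARITY)
        # VAE heads: mean and log-variance of the latent code.
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)
        # Decoder: score grammar rules, then derive one code per child slot.
        self.rule = nn.Linear(dim, len(RULES))
        self.child = nn.ModuleList(nn.Linear(dim, dim) for _ in range(max(ARITY)))

    def encode(self, tree):
        """Bottom-up pass over a tree given as (rule_index, [subtrees])."""
        r, children = tree
        kids = [self.encode(c) for c in children]
        h = torch.cat(kids, dim=-1) if kids else torch.zeros(1, self.dim)
        return torch.tanh(self.enc[r](h))

    def decode(self, z, max_depth=8):
        """Top-down pass: pick a grammar rule, recurse into its child slots."""
        r = int(self.rule(z).argmax(dim=-1)) if max_depth > 0 else LEAF
        return (r, [self.decode(torch.tanh(self.child[i](z)), max_depth - 1)
                    for i in range(ARITY[r])])

    def forward(self, tree):
        h = self.encode(tree)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decode(z), mu, logvar

model = RTGAE()
tree = (0, [(2, []), (1, [(2, [])])])  # plus(x, neg(x))
decoded, mu, logvar = model(tree)
print(decoded)
```

Training would minimize the usual VAE objective, i.e. a reconstruction loss over the predicted rule choices plus the KL divergence computed from mu and logvar; both terms are omitted here for brevity.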