Auto-encoding Molecules: Graph-Matching Capabilities Matter
Autoencoders are effective deep learning models that can function as generative models and learn latent representations for downstream tasks. The use of graph autoencoders - with both encoder and decoder implemented as message passing networks - is intriguing due to their ability to generate permutation-invariant graph representations. However, this approach faces difficulties because decoding a graph structure from a single vector is challenging, and comparing input and output graphs requires an effective permutation-invariant similarity measure. As a result, many studies rely on approximate methods.

In this work, we explore the effect of graph matching precision on the training behavior and generation capabilities of a Variational Autoencoder (VAE). Our contribution is two-fold: (1) we propose a transformer-based message passing graph decoder as an alternative to a graph neural network decoder, which is more robust and expressive because it leverages global attention mechanisms; (2) we show that the precision of graph matching has a significant impact on training behavior and is essential for effective de novo (molecular) graph generation.

Code is available at this https URL
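To illustrate why a permutation-invariant similarity measure is needed when comparing input and output graphs, here is a minimal sketch of a matching-based reconstruction loss. It aligns reconstructed nodes to input nodes with the Hungarian algorithm before comparing node features and adjacency matrices. All names are illustrative; this is a generic exact-assignment baseline, not the authors' implementation, and the assignment cost here uses node features only, which is itself an approximation.

```python
# Sketch: permutation-invariant reconstruction loss via optimal node matching.
# Assumption: the matching cost is based on node features alone; edge structure
# only enters the loss after alignment. This is not the paper's exact method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_reconstruction_loss(x, adj, x_hat, adj_hat):
    """Align reconstructed nodes to input nodes, then compare graphs.

    x, x_hat: (n, d) node feature matrices (input / reconstruction)
    adj, adj_hat: (n, n) adjacency matrices (input / reconstruction)
    """
    # Pairwise node-feature distances form the assignment cost matrix.
    cost = np.linalg.norm(x[:, None, :] - x_hat[None, :, :], axis=-1)
    _, col = linear_sum_assignment(cost)  # optimal node permutation
    # Permute the reconstruction into the input's node order.
    x_perm = x_hat[col]
    adj_perm = adj_hat[np.ix_(col, col)]
    node_loss = np.mean((x - x_perm) ** 2)
    edge_loss = np.mean((adj - adj_perm) ** 2)
    return node_loss + edge_loss
```

With an exact matcher like this, a reconstruction that is merely a node relabeling of the input incurs zero loss; approximate matchers trade this guarantee for speed, which is the precision/behavior trade-off the abstract refers to.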
@article{cunow2025_2503.00426,
  title={Auto-encoding Molecules: Graph-Matching Capabilities Matter},
  author={Magnus Cunow and Gerrit Großmann},
  journal={arXiv preprint arXiv:2503.00426},
  year={2025}
}