The field of deep generative modeling has grown rapidly in recent years. Fueled by massive amounts of training data and advances in scalable unsupervised learning paradigms, recent large-scale generative models show tremendous promise in synthesizing high-resolution images and text, as well as structured data such as videos and molecules. However, we argue that current large-scale generative AI models exhibit several fundamental shortcomings that hinder their widespread adoption across domains. In this work, we identify these issues and highlight key unresolved challenges in modern generative AI paradigms that should be addressed to further enhance their capabilities, versatility, and reliability. By identifying these challenges, we aim to provide researchers with insights for exploring fruitful research directions, thus fostering the development of more robust and accessible generative AI solutions.