Constructing high-quality features is critical to any quantitative data analysis. While feature engineering was historically addressed by carefully hand-crafting data representations based on domain expertise, deep neural networks (DNNs) now offer a radically different approach. DNNs implicitly engineer features by transforming their input data into hidden feature vectors called embeddings. For embedding vectors produced by foundation models -- which are trained to be useful across many contexts -- we demonstrate that simple and well-studied dimensionality-reduction techniques such as Principal Component Analysis uncover inherent heterogeneity in input data concordant with human-understandable explanations. Of the many applications for this framework, we find empirical evidence that there is intrinsic separability between real samples and those generated by artificial intelligence (AI).
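The core idea, applying a classical dimensionality-reduction method such as PCA to embedding vectors to expose latent group structure, can be sketched with synthetic data. The two Gaussian clusters below are hypothetical stand-ins for embeddings of real versus AI-generated samples; they are not the paper's data or models.

```python
import numpy as np

# Hypothetical "embeddings": two clusters standing in for real vs. generated
# samples. Means differ, mimicking an intrinsic separation in embedding space.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(100, 64))
gen = rng.normal(2.0, 1.0, size=(100, 64))
X = np.vstack([real, gen])

# PCA via SVD on the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]  # projection onto the first principal component

# The two groups separate along PC1 (the sign of the axis is arbitrary),
# so a simple threshold on this one coordinate distinguishes them.
gap = abs(pc1[:100].mean() - pc1[100:].mean())
```

With cluster means this far apart, the first principal component aligns with the between-group direction, so the projected means are widely separated; in practice one would inspect the leading components of real foundation-model embeddings for analogous structure.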
@article{vargas2025_2408.10437,
  title   = {Understanding Generative AI Content with Embedding Models},
  author  = {Max Vargas and Reilly Cannon and Andrew Engel and Anand D. Sarwate and Tony Chiang},
  journal = {arXiv preprint arXiv:2408.10437},
  year    = {2025}
}