Multimodal learning is an emerging research topic across multiple disciplines but has rarely been applied to planetary science. In this contribution, we identify that reflectance parameter estimation and image-based 3D reconstruction of the lunar surface can be formulated as a multimodal learning problem. We propose a single, unified transformer architecture trained to learn shared representations across multiple sources such as grayscale images, digital elevation models (DEMs), surface normals, and albedo maps. The architecture supports flexible translation from any input modality to any target modality. Predicting DEMs and albedo maps from grayscale images simultaneously solves the task of 3D reconstruction of planetary surfaces and disentangles photometric parameters from height information. Our results demonstrate that our foundation model learns physically plausible relations across these four modalities. Adding more input modalities in the future will enable tasks such as photometric normalization and co-registration.
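To make the any-to-any translation idea concrete, the following is a minimal sketch (not the authors' implementation) of a unified transformer with modality-specific patch embeddings and output heads around a single shared backbone. All class names, hyperparameters, and the target-token mechanism are illustrative assumptions; the paper's actual architecture and training details may differ.

import torch
import torch.nn as nn

# Hypothetical modality set matching the abstract; channel counts are assumptions.
CHANNELS = {"image": 1, "dem": 1, "normals": 3, "albedo": 1}


class AnyToAnyTransformer(nn.Module):
    """Sketch of a single shared transformer translating between lunar modalities."""

    def __init__(self, dim=256, patch=16, img_size=128, depth=6, heads=8):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Modality-specific patch embeddings map each input into a shared token space.
        self.embed = nn.ModuleDict({
            m: nn.Conv2d(c, dim, kernel_size=patch, stride=patch)
            for m, c in CHANNELS.items()
        })
        # Learned target tokens tell the shared backbone which modality to produce.
        self.target_token = nn.ParameterDict({
            m: nn.Parameter(torch.zeros(1, 1, dim)) for m in CHANNELS
        })
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        # One shared transformer backbone serves every input/output pairing.
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        # Modality-specific decoders project tokens back to pixel space.
        self.head = nn.ModuleDict({
            m: nn.ConvTranspose2d(dim, c, kernel_size=patch, stride=patch)
            for m, c in CHANNELS.items()
        })
        self.patch, self.img_size = patch, img_size

    def forward(self, x, src: str, tgt: str):
        # Tokenize the source modality and add positional embeddings.
        tokens = self.embed[src](x).flatten(2).transpose(1, 2) + self.pos
        b = tokens.shape[0]
        # Prepend the target-modality token, run the shared backbone, drop the token.
        tokens = torch.cat([self.target_token[tgt].expand(b, -1, -1), tokens], dim=1)
        out = self.backbone(tokens)[:, 1:]
        h = self.img_size // self.patch
        out = out.transpose(1, 2).reshape(b, -1, h, h)
        return self.head[tgt](out)


# Usage example: predict a DEM and an albedo map from the same grayscale image,
# mirroring the disentanglement of height and photometric information described above.
model = AnyToAnyTransformer()
gray = torch.randn(2, 1, 128, 128)
dem = model(gray, src="image", tgt="dem")        # shape (2, 1, 128, 128)
albedo = model(gray, src="image", tgt="albedo")  # shape (2, 1, 128, 128)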
@article{sander2025_2505.05644,
  title   = {The Moon's Many Faces: A Single Unified Transformer for Multimodal Lunar Reconstruction},
  author  = {Tom Sander and Moritz Tenthoff and Kay Wohlfarth and Christian Wöhler},
  journal = {arXiv preprint arXiv:2505.05644},
  year    = {2025}
}