Robotic manipulation requires explicit or implicit knowledge of the robot's joint positions. Precise proprioception is standard in high-quality industrial robots but is often unavailable in inexpensive robots operating in unstructured environments. In this paper, we ask: to what extent can a fast, single-pass regression architecture perform visual proprioception from a single external camera image, available even in the simplest manipulation settings? We explore several latent representations, including CNNs, VAEs, ViTs, and bags of uncalibrated fiducial markers, using fine-tuning techniques adapted to the limited data available. We evaluate the achievable accuracy through experiments on an inexpensive 6-DoF robot.
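To make the single-pass setup concrete, below is a minimal sketch of one such regressor: a pretrained CNN encoder produces a latent vector from one external camera image, and a small head regresses the 6 joint positions in a single forward pass. The ResNet-18 backbone, head sizes, and class name are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class VisualProprioceptionNet(nn.Module):
    """Single-pass regressor: one external camera image -> 6 joint positions.

    A hypothetical sketch; the backbone choice and head dimensions are
    assumptions for illustration, not the architecture from the paper.
    """
    def __init__(self, num_joints: int = 6):
        super().__init__()
        # Pretrained CNN supplies the latent representation; in the
        # paper's setting it would be fine-tuned on limited robot data.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Drop the classification layer, keep convolutional trunk + avgpool.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        # Small regression head from the 512-d latent to joint positions.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# Usage: one forward pass per frame, no iterative pose optimization.
model = VisualProprioceptionNet()
frame = torch.randn(1, 3, 224, 224)  # normalized RGB camera image
joint_positions = model(frame)       # shape (1, 6)
```

The same head could sit on top of any of the latent representations the abstract lists (a VAE encoder, a ViT, or features derived from fiducial markers); only the encoder module changes.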
@article{sheikholeslami2025_2504.14634,
  title   = {Latent Representations for Visual Proprioception in Inexpensive Robots},
  author  = {Sahara Sheikholeslami and Ladislau Bölöni},
  journal = {arXiv preprint arXiv:2504.14634},
  year    = {2025}
}