Project Aria: A New Tool for Egocentric Multi-Modal AI Research
Jakob Engel
Kiran Somasundaram
Michael Goesele
Albert Sun
Alexander Gamino
Andrew Turner
Arjang Talattof
Arnie Yuan
Bilal Souti
Brighid Meredith
Cheng Peng
Chris Sweeney
Cole Wilson
Dan Barnes
Daniel DeTone
David Caruso
Derek Valleroy
Dinesh Ginjupalli
Duncan Frost
Edward Miller
Elias Mueggler
Evgeniy Oleinik
Fan Zhang
Guruprasad Somasundaram
Gustavo Solaira
Harry Lanaras
Henry Howard-Jenkins
Huixuan Tang
Hyo Jin Kim
Jaime Rivera
Ji Luo
Jing Dong
Julian Straub
Kevin Bailey
Kevin Eckenhoff
Lingni Ma
Luis Pesqueira
Mark Schwesinger
Maurizio Monge
Nan Yang
Nick Charron
Nikhil Raina
Omkar M. Parkhi
Peter Borschowa
Pierre Moulon
Prince Gupta
Raul Mur-Artal
Robbie Pennington
Sachin Kulkarni
Sagar Miglani
Santosh Gondi
Saransh Solanki
Sean Diener
Shangyi Cheng
Simon Green
Steve Saarinen
Suvam Patra
Tassos Mourikis
Thomas Whelan
Tripti Singh
Vasileios Balntas
Vijay Baiyya
Wilson Dreewes
Xiaqing Pan
Yang Lou
Yipu Zhao
Yusuf Mansour
Yuyang Zou
Zhaoyang Lv
Zijian Wang
Mingfei Yan
Carl Ren
R. D. Nardi
Richard A. Newcombe

Abstract
Egocentric, multi-modal data, as will be available on future augmented reality (AR) devices, presents unique challenges and opportunities for machine perception. These future devices will need to be all-day wearable in a socially acceptable form factor to support always-available, context-aware, and personalized AI applications. Our team at Meta Reality Labs Research built the Aria device, an egocentric, multi-modal data recording and streaming device, with the goal of fostering and accelerating research in this area. In this paper, we describe the Aria device hardware, including its sensor configuration, and the corresponding software tools that enable recording and processing of such data.