
Learning Human Activities and Object Affordances from RGB-D Videos

Abstract

Understanding human activities and object affordances are two important skills, especially for personal robots operating in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human and, more importantly, of the human's interactions with the objects in the form of associated affordances. Given an RGB-D video, we jointly model the human activities and object affordances as a Markov Random Field, where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural SVM approach, where labelings over various alternative temporal segmentations are treated as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from four subjects, and obtained an end-to-end precision of 75.8% and recall of 74.2% for labeling the activities. We then demonstrate the use of such descriptive labeling in performing assistive tasks with a PR2 robot.
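The abstract describes the graph structure concretely enough to sketch it: each temporal segment contributes one sub-activity node plus one affordance node per object, with edges for object-object and object-activity relations within a segment and temporal edges linking consecutive segments. The following Python sketch is our illustration under those assumptions, not the authors' code; the feature functions, weights, and function names are hypothetical placeholders, and the scoring function shows only the generic linear form w . psi(x, y) used in structural SVMs.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the MRF
# structure described in the abstract: sub-activity and object-affordance
# nodes per temporal segment, connected by intra-segment and temporal edges.
import itertools
import numpy as np

def build_mrf(num_segments, num_objects):
    """Enumerate the nodes and edges of the Markov Random Field."""
    nodes, edges = [], []
    for t in range(num_segments):
        act = ("activity", t)  # sub-activity label node for segment t
        nodes.append(act)
        objs = [("affordance", t, o) for o in range(num_objects)]
        nodes.extend(objs)
        # object-activity edges within the segment
        for obj in objs:
            edges.append((act, obj))
        # object-object edges within the segment
        for o1, o2 in itertools.combinations(objs, 2):
            edges.append((o1, o2))
        # temporal edges linking segment t-1 to segment t
        if t > 0:
            edges.append((("activity", t - 1), act))
            for o in range(num_objects):
                edges.append((("affordance", t - 1, o), ("affordance", t, o)))
    return nodes, edges

def score(labeling, nodes, edges, w_node, w_edge, node_feat, edge_feat):
    """Generic structural-SVM linear score: sum of weighted node and edge
    potentials. node_feat/edge_feat are placeholder feature functions
    returning numpy vectors."""
    s = sum(w_node @ node_feat(n, labeling[n]) for n in nodes)
    s += sum(w_edge @ edge_feat(a, b, labeling[a], labeling[b])
             for a, b in edges)
    return s

if __name__ == "__main__":
    nodes, edges = build_mrf(num_segments=3, num_objects=2)
    print(len(nodes), "nodes,", len(edges), "edges")
```

In the latent-variable formulation the abstract mentions, the segmentation itself (here, the value of `num_segments` and the segment boundaries) would be searched over during learning and inference, with the best-scoring segmentation's labeling kept.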
