Reducing Human-Robot Goal State Divergence with Environment Design

One of the most difficult challenges in creating successful human-AI collaborations is aligning a robot's behavior with a human user's expectations. When this fails to occur, a robot may misinterpret the user's specified goals, prompting it to perform actions with unanticipated, potentially dangerous side effects. To avoid this, we propose a new metric we call Goal State Divergence (GSD), which represents the difference between a robot's final goal state and the one a human user expected. In cases where GSD cannot be directly calculated, we show how it can be approximated using maximal and minimal bounds. We then input the GSD value into our novel human-robot goal alignment (HRGA) design problem, which identifies a minimal set of environment modifications that can prevent such mismatches. To show the effectiveness of GSD for reducing differences between human-robot goal states, we empirically evaluate our approach on several standard benchmarks.
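The abstract names the pieces without giving formal definitions, so the following Python sketch is only one plausible reading: goal states are treated as sets of propositional facts, GSD is read as the size of their symmetric difference, bounds are taken over the set of final states the robot might reach, and a brute-force subset search stands in for the HRGA design problem. All function names, the toy dynamics, and the zero-divergence stopping criterion are assumptions for illustration, not the paper's formulation.

```python
from itertools import combinations

def gsd(final_state: frozenset, expected_state: frozenset) -> int:
    """Goal State Divergence, read here as the number of facts on which the
    robot's final state and the human's expected state disagree (the size of
    their symmetric difference). Illustrative, not the paper's definition."""
    return len(final_state ^ expected_state)

def gsd_bounds(possible_final_states, expected_state):
    """When the robot's exact final state cannot be determined, bound GSD by
    the best and worst case over the final states it might reach."""
    values = [gsd(s, expected_state) for s in possible_final_states]
    return min(values), max(values)

def hrga_design(candidate_mods, reachable_fn, expected_state):
    """Brute-force stand-in for the HRGA design problem: return the smallest
    set of environment modifications that drives the worst-case GSD bound to
    zero, or None if no subset suffices."""
    for k in range(len(candidate_mods) + 1):
        for mods in combinations(candidate_mods, k):
            _, worst = gsd_bounds(reachable_fn(set(mods)), expected_state)
            if worst == 0:
                return set(mods)
    return None

# Toy domain: goal states are sets of propositional facts.
expected = frozenset({"box_at_shelf", "door_closed"})

def reachable_final_states(mods):
    """Hypothetical dynamics: each modification removes one undesired
    outcome the robot could otherwise end in."""
    states = {frozenset({"box_at_shelf", "door_closed"})}
    if "block_table" not in mods:
        states.add(frozenset({"box_at_table", "door_closed"}))
    if "auto_close_door" not in mods:
        states.add(frozenset({"box_at_shelf", "door_open"}))
    return states

print(gsd_bounds(reachable_final_states(set()), expected))   # (0, 2)
print(hrga_design(["block_table", "auto_close_door"],
                  reachable_final_states, expected))
# -> {'block_table', 'auto_close_door'}
```

In this toy run, the unmodified environment admits final states up to two facts away from the human's expectation, and the search reports that both candidate modifications are needed to force zero divergence; the paper's actual formulation of GSD, its bounds, and the HRGA problem should be taken from the paper itself.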