
Pluralistic Alignment Over Time

Main: 3 pages
2 figures
Bibliography: 2 pages
Appendix: 1 page
Abstract

If an AI system makes decisions over time, how should we evaluate how aligned it is with a group of stakeholders who may have conflicting values and preferences? In this position paper, we advocate for considering temporal aspects, including stakeholders' changing levels of satisfaction and their possibly temporally extended preferences. We suggest how a recent approach to evaluating fairness over time could be applied to a new form of pluralistic alignment: temporal pluralism, in which the AI system reflects different stakeholders' values at different times.
