
Provocations from the Humanities for Generative AI Research

14 pages (main text), 9 pages (bibliography), 1 page (appendix), 1 table
Abstract

The effects of generative AI are experienced by a broad range of constituencies, but the disciplinary inputs to its development have been surprisingly narrow. Here we present a set of provocations from humanities researchers -- currently underrepresented in AI development -- intended to inform its future applications and enrich ongoing conversations about its uses, impact, and harms. Drawing from relevant humanities scholarship, along with foundational work in critical data studies, we elaborate eight claims with broad applicability to generative AI research: 1) Models make words, but people make meaning; 2) Generative AI requires an expanded definition of culture; 3) Generative AI can never be representative; 4) Bigger models are not always better models; 5) Not all training data is equivalent; 6) Openness is not an easy fix; 7) Limited access to compute enables corporate capture; and 8) AI universalism creates narrow human subjects. We also provide a working definition of humanities research, summarize some of its most salient theories and methods, and apply these theories and methods to the current landscape of AI. We conclude with a discussion of the importance of resisting the extraction of humanities research by computer science and related fields.
