
Can Consumer Chatbots Reason? A Student-Led Field Experiment Embedded in an "AI-for-All" Undergraduate Course

Amarda Shehu
Adonyas Ababu
Asma Akbary
Griffin Allen
Aroush Baig
Tereana Battle
Elias Beall
Christopher Byrom
Matt Dean
Kate Demarco
Ethan Douglass
Luis Granados
Layla Hantush
Andy Hay
Eleanor Hay
Caleb Jackson
Jaewon Jang
Carter Jones
Quanyang Li
Adrian Lopez
Logan Massimo
Garrett McMullin
Ariana Mendoza Maldonado
Eman Mirza
Hadiya Muddasar
Sara Nuwayhid
Brandon Pak
Ashley Petty
Dryden Rancourt
Lily Rodriguez
Corbin Rogers
Jacob Schiek
Taeseo Seok
Aarav Sethi
Giovanni Vitela
Winston Williams
Jagan Yetukuri
22 pages main text, 2-page bibliography, 4-page appendix; 2 figures, 3 tables
Abstract

Claims about whether large language model (LLM) chatbots "reason" are typically debated using curated benchmarks and laboratory-style evaluation protocols. This paper offers a complementary perspective: a student-led field experiment embedded as a midterm project in UNIV 182 (AI4All) at George Mason University, a Mason Core course designed for undergraduates across disciplines with no expected prior STEM exposure. Student teams designed their own reasoning tasks, ran them on widely used consumer chatbots representative of current capabilities, and evaluated both (i) answer correctness and (ii) the validity of the chatbot's stated reasoning (for example, cases where an answer is correct but the explanation is not, or vice versa). Across eight teams that reported standardized scores, students contributed 80 original reasoning prompts spanning six categories: pattern completion, transformation rules, spatial/visual reasoning, quantitative reasoning, relational/logic reasoning, and analogical reasoning. These prompts yielded 320 model responses plus follow-up explanations. When team-level results are aggregated, OpenAI GPT-5 and Claude 4.5 achieved the highest mean answer accuracy (86.2% and 83.8%, respectively), followed by Grok 4 (82.5%) and Perplexity (73.1%); explanation validity showed a similar ordering (81.2%, 80.0%, 77.5%, and 66.2%). Qualitatively, teams converged on a consistent error signature: strong performance on short, structured math and pattern items but reduced reliability on spatial/visual reasoning and multi-step transformations, with frequent "sound right but reason wrong" explanations. The assignment's primary contribution is pedagogical: it operationalizes AI literacy as experimental practice (prompt design, measurement, rater disagreement, and interpretability/grounding) while producing a reusable, student-generated corpus of reasoning probes grounded in authentic end-user interaction.
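To make the aggregation concrete, the sketch below shows one way team-level scores of this kind could be rolled up into per-model means for answer accuracy and explanation validity. It is an illustration only, not the paper's released analysis code: the team names, the scores, and the aggregate() helper are placeholders introduced here for exposition.

```python
# Illustrative sketch (not from the paper's materials): rolling team-level
# scores up into per-model means. All names and numbers below are placeholders.
from statistics import mean

# Each team reports, for each chatbot, the fraction of its prompts judged
# correct (answer accuracy) and the fraction with a valid explanation.
team_scores = {
    "team_placeholder_1": {
        "GPT-5":      {"answer_acc": 0.9, "explanation_valid": 0.8},
        "Claude 4.5": {"answer_acc": 0.8, "explanation_valid": 0.8},
        "Grok 4":     {"answer_acc": 0.8, "explanation_valid": 0.7},
        "Perplexity": {"answer_acc": 0.7, "explanation_valid": 0.6},
    },
    "team_placeholder_2": {
        "GPT-5":      {"answer_acc": 0.8, "explanation_valid": 0.8},
        "Claude 4.5": {"answer_acc": 0.9, "explanation_valid": 0.8},
        "Grok 4":     {"answer_acc": 0.9, "explanation_valid": 0.9},
        "Perplexity": {"answer_acc": 0.8, "explanation_valid": 0.7},
    },
}

def aggregate(scores, metric):
    """Mean of a per-team metric for every model, expressed as a percentage."""
    models = next(iter(scores.values())).keys()
    return {
        model: 100 * mean(team[model][metric] for team in scores.values())
        for model in models
    }

if __name__ == "__main__":
    print("Mean answer accuracy (%):", aggregate(team_scores, "answer_acc"))
    print("Mean explanation validity (%):", aggregate(team_scores, "explanation_valid"))
```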
