ResearchTrend.AI

arXiv:2411.15626
Aligning Generalisation Between Humans and Machines

23 November 2024
Filip Ilievski
Barbara Hammer
Frank van Harmelen
Benjamin Paassen
S. Saralajew
Ute Schmid
Michael Biehl
Marianna Bolognesi
Xin Luna Dong
Kiril Gashteovski
Pascal Hitzler
Giuseppe Marra
Pasquale Minervini
Martin Mundt
Axel-Cyrille Ngonga Ngomo
A. Oltramari
Gabriella Pasi
Zeynep G. Saribatur
Luciano Serafini
John Shawe-Taylor
Vered Shwartz
Gabriella Skitalinskaya
Clemens Stachl
Gido M. van de Ven
T. Villmann
Abstract

Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals. The responsible use of AI increasingly shows the need for human-AI teaming, necessitating effective interaction between humans and machines. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neuro-symbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of generalisation, methods for generalisation, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios.

View on arXiv