Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users

28 March 2025
Antonia Karamolegkou
Malvina Nikandrou
Georgios Pantazopoulos
Danae Sanchez Villegas
Phillip Rust
Ruchira Dhar
Daniel Hershcovich
Anders Søgaard
ArXiv | PDF | HTML
Abstract

This paper explores the effectiveness of Multimodal Large Language Models (MLLMs) as assistive technologies for visually impaired individuals. We conduct a user survey to identify adoption patterns and key challenges users face with such technologies. Despite a high adoption rate of these models, our findings highlight concerns related to contextual understanding, cultural sensitivity, and complex scene understanding, particularly for individuals who may rely solely on them for visual interpretation. Informed by these results, we collate five user-centred tasks with image and video inputs, including a novel task on Optical Braille Recognition. Our systematic evaluation of twelve MLLMs reveals that further advancements are necessary to overcome limitations related to cultural context, multilingual support, Braille reading comprehension, assistive object recognition, and hallucinations. This work provides critical insights into the future direction of multimodal AI for accessibility, underscoring the need for more inclusive, robust, and trustworthy visual assistance technologies.
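
As a rough illustration of what a systematic evaluation over image-question tasks might look like in practice, the sketch below runs a set of (image, question, reference) triples through a placeholder model call and scores answers by exact match. The query_mllm function, the Example layout, and the exact-match metric are assumptions made for illustration only; they are not the authors' actual pipeline or data format.

    """Minimal sketch of an MLLM evaluation loop (hypothetical, not the paper's code)."""

    from dataclasses import dataclass


    @dataclass
    class Example:
        image_path: str   # path to the input image (e.g. a photo of a Braille label)
        question: str     # user-centred question posed to the model
        reference: str    # expected answer used for scoring


    def query_mllm(image_path: str, question: str) -> str:
        """Placeholder for a real multimodal model call (hosted API or local inference)."""
        raise NotImplementedError("Plug in the MLLM of your choice here.")


    def exact_match_accuracy(examples: list[Example]) -> float:
        """Score model predictions by normalised exact match against the reference answer."""
        if not examples:
            return 0.0
        correct = 0
        for ex in examples:
            prediction = query_mllm(ex.image_path, ex.question)
            if prediction.strip().lower() == ex.reference.strip().lower():
                correct += 1
        return correct / len(examples)

In a real setup, exact match would be replaced or supplemented by task-appropriate metrics (and human judgement for open-ended answers); the loop structure is what the sketch is meant to show.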

@article{karamolegkou2025_2503.22610,
  title={Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users},
  author={Antonia Karamolegkou and Malvina Nikandrou and Georgios Pantazopoulos and Danae Sanchez Villegas and Phillip Rust and Ruchira Dhar and Daniel Hershcovich and Anders Søgaard},
  journal={arXiv preprint arXiv:2503.22610},
  year={2025}
}