DesignMinds: Enhancing Video-Based Design Ideation with Vision-Language Model and Context-Injected Large Language Model

International Conference on Conversational User Interfaces (CUI), 2024
Main: 17 pages · Bibliography: 5 pages · Appendix: 1 page · 7 figures · 2 tables
Abstract

Ideation is a critical component of video-based design (VBD), where videos serve as the primary medium for design exploration and inspiration. The emergence of generative AI offers considerable potential to enhance this process by streamlining video analysis and facilitating idea generation. In this paper, we present DesignMinds, a prototype that integrates a state-of-the-art Vision-Language Model (VLM) with a context-enhanced Large Language Model (LLM) to support ideation in VBD. To evaluate DesignMinds, we conducted a between-subjects study with 35 design practitioners, comparing its performance to a baseline condition. Our results demonstrate that DesignMinds significantly enhances the flexibility and originality of ideation, while also increasing task engagement. Importantly, the introduction of this technology did not negatively impact user experience, technology acceptance, or usability.
