Eliciting Topic Hierarchies from Large Language Models

Current research has explored how Generative AI can support brainstorming for content creators, but a gap remains in exploring support tools for the pre-writing process. Specifically, our research focuses on helping users find topics at the right level of specificity for their audience, a process called topic scoping. Topic scoping is a cognitively demanding task, requiring users to actively recall subtopics in a given domain, and this manual approach also limits the diversity of subtopics a user can explore. We propose using Large Language Models (LLMs) to support topic scoping by iteratively generating subtopics at increasing levels of specificity, dynamically creating topic hierarchies. We tested three different prompting strategies and found that increasing the amount of context included in the prompt improves subtopic generation by 20 percentage points. Finally, we discuss applications of this research in education, content creation, and product management.
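A rough sketch of the iterative generation loop described above might look like the following. Here `llm_complete` is a placeholder for any text-completion call, and the prompt wording, tree depth, and branching factor are illustrative assumptions rather than the paper's actual prompting strategies; the ancestor path passed into each prompt stands in for the additional context the abstract refers to.

```python
def generate_subtopics(topic, ancestors, llm_complete, n=5):
    """Ask the LLM for n subtopics of `topic`, passing ancestor topics as context.

    `llm_complete` is assumed to take a prompt string and return a string with
    one subtopic per line; this interface is hypothetical.
    """
    path = " > ".join(ancestors + [topic])
    prompt = (
        f"The current topic path is: {path}.\n"
        f"List {n} more specific subtopics of '{topic}', one per line."
    )
    lines = llm_complete(prompt).splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]


def build_topic_hierarchy(root, llm_complete, depth=3, n=5):
    """Recursively expand a root topic into increasingly specific subtopics."""
    def expand(topic, ancestors, level):
        if level >= depth:
            return {"topic": topic, "children": []}
        children = generate_subtopics(topic, ancestors, llm_complete, n)
        return {
            "topic": topic,
            "children": [expand(c, ancestors + [topic], level + 1) for c in children],
        }
    return expand(root, [], 0)
```

In this sketch, each level of the hierarchy is produced by a fresh prompt that includes the full path from the root, so deeper levels receive progressively more context, which is one plausible way to realize the context-rich prompting the abstract reports as most effective.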