ResearchTrend.AI

On the Potential of Large Language Models to Solve Semantics-Aware Process Mining Tasks

29 April 2025
Adrian Rebmann
Fabian David Schmidt
Goran Glavaš
Han van der Aa
Abstract

Large language models (LLMs) have been shown to be valuable tools for tackling process mining tasks. Existing studies report on their ability to support various data-driven process analyses and even, to some extent, to reason about how processes work. This reasoning ability suggests that LLMs have the potential to tackle semantics-aware process mining tasks, i.e., tasks that rely on an understanding of the meaning of activities and their relationships. Examples include process discovery, where the meaning of activities can indicate their dependencies, and anomaly detection, where the meaning can be used to recognize abnormal process behavior. In this paper, we systematically explore the capabilities of LLMs for such tasks. Unlike prior work, which largely evaluates LLMs in their default state, we investigate their utility through both in-context learning and supervised fine-tuning. Concretely, we define five process mining tasks requiring semantic understanding and provide extensive benchmarking datasets for evaluation. Our experiments reveal that while LLMs struggle with challenging process mining tasks when used out of the box or with minimal in-context examples, they achieve strong performance when fine-tuned for these tasks across a broad range of process types and industries.
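As an illustration of the in-context learning setup the abstract contrasts with fine-tuning, the sketch below assembles a few-shot prompt for one of the semantics-aware tasks mentioned (anomaly detection), where an LLM is asked to judge whether an ordering of activities is semantically plausible. The task framing, example traces, and labels are hypothetical illustrations, not taken from the paper's benchmark.

```python
# Hypothetical sketch: constructing a few-shot prompt for semantics-aware
# anomaly detection. The labeled example traces serve as in-context
# demonstrations; the final trace is the one the model should classify.

def build_prompt(examples, query_trace):
    """Assemble an in-context learning prompt from labeled example traces."""
    lines = [
        "Decide whether each process trace is a plausible execution based on "
        "the meaning of its activities. Answer 'normal' or 'anomalous'."
    ]
    for trace, label in examples:
        lines.append(f"Trace: {' -> '.join(trace)}")
        lines.append(f"Answer: {label}")
    # Leave the answer for the query trace open for the model to complete.
    lines.append(f"Trace: {' -> '.join(query_trace)}")
    lines.append("Answer:")
    return "\n".join(lines)

examples = [
    (["receive order", "check stock", "ship goods", "send invoice"], "normal"),
    # Anomalous: goods are shipped before any order has been received.
    (["ship goods", "receive order", "check stock"], "anomalous"),
]
prompt = build_prompt(examples, ["receive order", "send invoice", "check stock"])
print(prompt)
```

The resulting prompt string would then be passed to an LLM; with zero or few such demonstrations this corresponds to the "minimal in-context examples" condition the abstract reports on, whereas the fine-tuning condition instead trains the model on many such labeled traces.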

@article{rebmann2025_2504.21074,
  title={On the Potential of Large Language Models to Solve Semantics-Aware Process Mining Tasks},
  author={Adrian Rebmann and Fabian David Schmidt and Goran Glavaš and Han van der Aa},
  journal={arXiv preprint arXiv:2504.21074},
  year={2025}
}