ResearchTrend.AI
Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging

28 February 2024
Wei Zhang
Jian Yang
Anjie Le
Z. Li
Shuangyong Song
Xianfu Cheng
Tieqiao Zheng
Shi Xu
ArXiv · PDF · HTML
Abstract

Logs produced by extensive software systems are integral to monitoring system behaviors. Advanced log analysis facilitates the detection, alerting, and diagnosis of system faults. Log parsing, which transforms raw log messages into structured templates, is a critical phase in automating log analytics. Existing log parsers often fail to identify the correct templates because they rely on hand-crafted rules; moreover, they focus on statistical features while ignoring the semantic information in log messages. To address these challenges, we introduce a cutting-edge Log parsing framework with Entropy sampling and chain-of-thought Merging (Lemur). Specifically, to discard tedious manual rules, we propose a novel sampling method inspired by information entropy, which efficiently clusters typical logs. Furthermore, to enhance the merging of log templates, we design a chain-of-thought method for large language models (LLMs). LLMs exhibit exceptional semantic comprehension and deftly distinguish between parameters and invariant tokens. We have conducted experiments on large-scale public datasets. Extensive evaluation demonstrates that Lemur achieves state-of-the-art performance and impressive efficiency. The code is available at this https URL.
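The entropy-guided sampling idea from the abstract can be illustrated with a minimal sketch. Note that the whitespace tokenization, the corpus-level token-probability estimate, and the entropy-bucketing heuristic below are illustrative assumptions, not the paper's actual algorithm:

```python
import math
from collections import Counter

def log_entropy(tokens, token_probs):
    """Shannon information content of a log, summed over its tokens."""
    return -sum(math.log2(token_probs[t]) for t in tokens)

def entropy_sample(logs, ndigits=2):
    """Group logs into coarse clusters by quantized entropy and keep one
    representative per cluster -- an illustrative sketch only."""
    tokenized = [line.split() for line in logs]
    # Estimate token probabilities over the whole corpus.
    counts = Counter(tok for toks in tokenized for tok in toks)
    total = sum(counts.values())
    probs = {tok: c / total for tok, c in counts.items()}
    # Logs sharing a template tend to have near-identical entropy,
    # since they differ only in rare parameter tokens of equal frequency.
    buckets = {}
    for line, toks in zip(logs, tokenized):
        key = round(log_entropy(toks, probs), ndigits)
        buckets.setdefault(key, line)  # keep first log per bucket
    return list(buckets.values())

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.2 closed",
    "Disk usage at 91 percent on /var",
]
samples = entropy_sample(logs)  # one representative per entropy bucket
```

Here the two "Connection" logs collapse into one bucket because their entropies coincide, leaving two representative logs for downstream template extraction. A real implementation would have to handle variable-length messages and near-equal entropies more carefully.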

View on arXiv
@article{zhang2025_2402.18205,
  title={Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging},
  author={Wei Zhang and Xiangyuan Guan and Lu Yunhong and Jie Zhang and Shuangyong Song and Xianfu Cheng and Zhenhe Wu and Zhoujun Li},
  journal={arXiv preprint arXiv:2402.18205},
  year={2025}
}