arXiv: 2305.11541

Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
19 May 2023
Zezhong Wang
Lu Wang
Pu Zhao
Jue Zhang
Mohit Garg
Qingwei Lin
Saravan Rajmohan
arXiv (abs) · PDF · HTML · HuggingFace
Abstract

Large Language Models (LLMs) have gained popularity and achieved remarkable results on open-domain tasks, but their performance in real industrial, domain-specific scenarios is mediocre because they lack the requisite domain knowledge. This issue has attracted widespread attention, yet few relevant benchmarks are available. In this paper, we provide a benchmark Question Answering (QA) dataset named MSQA, which concerns Microsoft products and IT technical problems encountered by customers. The dataset contains industry cloud-specific QA knowledge that is unavailable to general LLMs, making it well suited for evaluating methods that aim to improve the domain-specific capabilities of LLMs. In addition, we propose a new model interaction paradigm that empowers an LLM to perform better on domain-specific tasks where it is not proficient. Extensive experiments demonstrate that the approach following our model fusion framework outperforms the commonly used LLM-with-retrieval methods.
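For context, the retrieval baseline the abstract contrasts against can be sketched as: retrieve the domain documents most relevant to a question, then prepend them to the LLM prompt. The sketch below is a minimal, self-contained illustration with a toy token-overlap scorer; the corpus, questions, and function names are hypothetical and do not come from the paper.

```python
# Hypothetical sketch of "LLM with retrieval" for domain QA:
# rank a small domain corpus against the question, then build a
# context-augmented prompt for a general-purpose LLM.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by token overlap with the question; keep top k."""
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a prompt that prepends retrieved context to the question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Toy domain corpus (invented examples in the spirit of MSQA).
corpus = [
    "Azure VM quota errors are resolved by requesting a quota increase.",
    "Office activation issues are usually fixed by re-signing into the account.",
]
prompt = build_prompt("How do I fix an Azure VM quota error?", corpus)
```

A production system would replace the token-overlap scorer with dense embeddings or BM25, but the prompt-assembly step is the same: the LLM only sees domain knowledge through the retrieved context.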
