Using multi-agent architecture to mitigate the risk of LLM hallucinations
Abd Elrahman Amer
Magdi Amer
Main: 14 pages, 8 figures
Abstract
Improving customer service quality and response time is critical for maintaining customer loyalty and increasing a company's market share. While adopting emerging technologies such as Large Language Models (LLMs) is becoming a necessity to achieve these goals, the risk of hallucination remains a major challenge. In this paper, we present a multi-agent system for handling customer requests sent via SMS. The system integrates LLM-based agents with fuzzy logic to mitigate the risk of hallucination.
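The abstract's idea of combining LLM agents with fuzzy logic can be illustrated with a minimal sketch. This is not the paper's implementation: the agent functions, the two verification signals (`agreement`, `grounding`), and the `min`-based fuzzy AND are all illustrative assumptions standing in for whatever agents and membership functions the authors actually use.

```python
# Illustrative sketch only (assumed design, not the paper's code): a responder
# agent drafts an answer, a verifier agent scores it, and a fuzzy confidence
# value gates whether the reply is sent or escalated to a human.

def fuzzy_confidence(agreement: float, grounding: float) -> float:
    """Combine two normalized signals in [0, 1] with a simple fuzzy AND (minimum)."""
    return min(agreement, grounding)

def handle_request(sms: str, responder, verifier, threshold: float = 0.7) -> str:
    draft = responder(sms)                       # candidate answer from an LLM agent
    agreement, grounding = verifier(sms, draft)  # verifier agent's scores in [0, 1]
    if fuzzy_confidence(agreement, grounding) >= threshold:
        return draft
    return "ESCALATE_TO_HUMAN"                   # low confidence: avoid sending a possible hallucination

# Stub agents standing in for real LLM calls.
responder = lambda sms: "Your order ships tomorrow."
strict_verifier = lambda sms, draft: (0.9, 0.8)  # well grounded -> answer passes
lax_verifier = lambda sms, draft: (0.9, 0.3)     # poorly grounded -> escalate

print(handle_request("Where is my order?", responder, strict_verifier))
print(handle_request("Where is my order?", responder, lax_verifier))
```

The key design point this sketch captures is that the fuzzy combination lets a single weak signal (here, low grounding) veto an otherwise fluent answer, which is the general mechanism by which a verification layer mitigates hallucination risk.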