SLOT: Structuring the Output of Large Language Models

6 May 2025
Darren Yow-Bang Wang
Zhengyuan Shen
Soumya Smruti Mishra
Zhichao Xu
Yifei Teng
Haibo Ding
Abstract

Structured outputs are essential for large language models (LLMs) in critical applications like agents and information extraction. Despite their capabilities, LLMs often generate outputs that deviate from predefined schemas, significantly hampering reliable application development. We present SLOT (Structured LLM Output Transformer), a model-agnostic approach that transforms unstructured LLM outputs into precise structured formats. While existing solutions predominantly rely on constrained decoding techniques or are tightly coupled with specific models, SLOT employs a fine-tuned lightweight language model as a post-processing layer, achieving flexibility across various LLMs and schema specifications. We introduce a systematic pipeline for data curation and synthesis alongside a formal evaluation methodology that quantifies both schema accuracy and content fidelity. Our results demonstrate that a fine-tuned Mistral-7B model with constrained decoding achieves near-perfect schema accuracy (99.5%) and content similarity (94.0%), outperforming Claude-3.5-Sonnet by substantial margins (+25 and +20 percentage points, respectively). Notably, even compact models like Llama-3.2-1B can match or exceed the structured output capabilities of much larger proprietary models when equipped with SLOT, enabling reliable structured generation in resource-constrained environments.
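The abstract describes SLOT as a lightweight fine-tuned model used purely as a post-processing layer: the upstream LLM answers freely, and the structurer rewrites that answer into JSON conforming to a target schema, with schema accuracy checked against the schema and content fidelity measured against the original answer. Below is a minimal sketch of that flow; the checkpoint name, prompt template, and decoding settings are illustrative assumptions, not the authors' released code or data.

# Sketch of a SLOT-style post-processing layer (assumed setup, not the paper's artifact):
# a small causal LM rewrites a free-form answer into JSON that satisfies a JSON Schema.
import json
from jsonschema import validate
from transformers import AutoModelForCausalLM, AutoTokenizer

STRUCTURER = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder for a SLOT-style fine-tune
tokenizer = AutoTokenizer.from_pretrained(STRUCTURER)
model = AutoModelForCausalLM.from_pretrained(STRUCTURER)

def structure_output(raw_answer: str, schema: dict) -> dict:
    """Rewrite an unstructured LLM answer into JSON matching `schema`."""
    prompt = (
        "Convert the answer below into JSON that satisfies the schema.\n"
        f"Schema:\n{json.dumps(schema)}\n\nAnswer:\n{raw_answer}\n\nJSON:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    text = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    candidate = json.loads(text)   # parse the structurer's output
    validate(candidate, schema)    # schema accuracy: hard check against the target schema
    return candidate               # content fidelity would be scored against raw_answer

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "year": {"type": "integer"}},
    "required": ["name", "year"],
}
# Example use: structure_output("The paper SLOT was posted in 2025.", schema)

Because the structurer sits behind any upstream model, the same sketch applies whether the free-form answer comes from a proprietary API or a local model, which is the model-agnostic property the abstract emphasizes; the paper's best-reported configuration additionally applies constrained decoding at this post-processing step.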

@article{wang2025_2505.04016,
  title={SLOT: Structuring the Output of Large Language Models},
  author={Darren Yow-Bang Wang and Zhengyuan Shen and Soumya Smruti Mishra and Zhichao Xu and Yifei Teng and Haibo Ding},
  journal={arXiv preprint arXiv:2505.04016},
  year={2025}
}