Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes
Nikita Neveditsin
Pawan Lingras
Vijay Mago

Main: 4 pages · Appendix: 5 pages · Bibliography: 2 pages · 5 figures · 11 tables
Abstract
We present a comparative analysis of the parseability of structured outputs generated by small language models for open attribute-value extraction from clinical notes. We evaluate three widely used serialization formats: JSON, YAML, and XML, and find that JSON consistently yields the highest parseability. Structural robustness improves with targeted prompting and larger models, but declines for longer documents and certain note types. Our error analysis identifies recurring format-specific failure patterns. These findings offer practical guidance for selecting serialization formats and designing prompts when deploying language models in privacy-sensitive clinical settings.
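As a minimal illustration of what a parseability check for model outputs can look like (a sketch, not the paper's evaluation code; the function name and examples are hypothetical), the JSON and XML cases can be validated with Python's standard library, and YAML analogously with PyYAML's `yaml.safe_load`:

```python
import json
import xml.etree.ElementTree as ET

def is_parseable(text: str, fmt: str) -> bool:
    """Return True if `text` parses under the given serialization format.

    YAML would be checked the same way with PyYAML's yaml.safe_load,
    which is omitted here to keep the sketch standard-library only.
    """
    try:
        if fmt == "json":
            json.loads(text)
        elif fmt == "xml":
            ET.fromstring(text)
        else:
            raise ValueError(f"unsupported format: {fmt}")
        return True
    except (json.JSONDecodeError, ET.ParseError):
        return False

# A complete output passes the check; a truncated one fails.
print(is_parseable('{"diagnosis": "hypertension"}', "json"))  # True
print(is_parseable('{"diagnosis": "hypertens', "json"))       # False
```

Such a binary pass/fail check is the simplest notion of structural robustness; error analysis of the failing cases then surfaces format-specific failure patterns.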
@article{neveditsin2025_2507.01810,
  title={Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes},
  author={Nikita Neveditsin and Pawan Lingras and Vijay Mago},
  journal={arXiv preprint arXiv:2507.01810},
  year={2025}
}