In this work, we study the effect of annotation guidelines -- textual descriptions of event types and arguments -- when instruction-tuning large language models for event extraction. We conducted a series of experiments with both human-provided and machine-generated guidelines in both full- and low-data settings. Our results demonstrate the promise of annotation guidelines when sufficient training data is available and highlight their effectiveness in improving cross-schema generalization and low-frequency event-type performance.
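To make the setup concrete, below is a minimal sketch of how an instruction-tuning example for event extraction might incorporate an annotation guideline. This is an illustrative assumption, not the authors' actual prompt format: the function name, field names, and the example 'Attack' schema are all hypothetical.

# Minimal sketch (hypothetical format, not the paper's exact prompt) of a
# guideline-augmented instruction-tuning example for event extraction.

def build_training_example(guideline: str, sentence: str, target: dict) -> dict:
    """Pack a guideline-augmented instruction with its gold extraction."""
    instruction = (
        "Extract events from the sentence using the schema below.\n"
        f"Guideline: {guideline}\n"
        f"Sentence: {sentence}"
    )
    return {"instruction": instruction, "output": str(target)}

# Hypothetical guideline for an 'Attack' event type with two argument roles.
example = build_training_example(
    guideline=("Attack: a violent physical act. "
               "Arguments -- Attacker: the agent of the attack; "
               "Target: the entity being attacked."),
    sentence="Rebels shelled the convoy near the border.",
    target={"event_type": "Attack",
            "trigger": "shelled",
            "arguments": {"Attacker": "Rebels", "Target": "the convoy"}},
)
print(example["instruction"])

The key idea the sketch captures is that the guideline text is placed directly in the instruction, so the model learns to condition its extractions on the schema description rather than memorizing a fixed label set.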
@article{srivastava2025_2502.16377,
  title={Instruction-Tuning LLMs for Event Extraction with Annotation Guidelines},
  author={Saurabh Srivastava and Sweta Pati and Ziyu Yao},
  journal={arXiv preprint arXiv:2502.16377},
  year={2025}
}