Aligning Language Models for Icelandic Legal Text Summarization

The integration of language models in the legal domain holds considerable promise for streamlining processes and improving efficiency in managing extensive workloads. However, the specialized terminology, nuanced language, and formal style of legal texts can present substantial challenges. This study examines whether preference-based training techniques, specifically Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO), can enhance models' performance in generating Icelandic legal summaries that align with domain-specific language standards and user preferences. We compare models fine-tuned with preference training to those using conventional supervised learning. Results indicate that preference training improves the legal accuracy of generated summaries over standard fine-tuning but does not significantly enhance the overall quality of Icelandic language usage. Discrepancies between automated metrics and human evaluations further underscore the importance of qualitative assessment in developing language models for the legal domain.
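As background on the preference-based techniques named above, the following is a minimal sketch of the DPO objective, in which the policy model is trained to assign a higher implicit reward to a preferred summary than to a dispreferred one, relative to a frozen reference model. The function name, hyperparameter value, and dummy inputs are illustrative and are not taken from the paper.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Illustrative DPO loss for a batch of preference pairs.

    Each argument is a tensor of per-sequence log-probabilities
    (summed over tokens) for the preferred ("chosen") and
    dispreferred ("rejected") summaries, under the policy being
    trained and under a frozen reference model.
    """
    # Implicit reward: scaled log-ratio of policy to reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Encourage a positive margin between chosen and rejected rewards
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Dummy log-probabilities for two preference pairs
loss = dpo_loss(torch.tensor([-12.0, -15.0]), torch.tensor([-14.0, -16.5]),
                torch.tensor([-12.5, -15.2]), torch.tensor([-13.5, -16.0]))
print(loss)

In contrast to RLHF, which fits an explicit reward model and then optimizes the policy with reinforcement learning, DPO optimizes this contrastive loss directly on preference pairs; the study compares such preference-trained models against standard supervised fine-tuning.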
@article{harðarson2025_2504.18180,
  title={Aligning Language Models for Icelandic Legal Text Summarization},
  author={Þórir Hrafn Harðarson and Hrafn Loftsson and Stefán Ólafsson},
  journal={arXiv preprint arXiv:2504.18180},
  year={2025}
}