  3. 2008.08896
29
35

Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR

20 August 2020
Juri Opitz
Anette Frank
Abstract

Systems that generate natural language text from abstract meaning representations such as AMR are typically evaluated using automatic surface matching metrics that compare the generated texts to reference texts from which the input meaning representations were constructed. We show that besides well-known issues from which such metrics suffer, an additional problem arises when applying these metrics for AMR-to-text evaluation, since an abstract meaning representation allows for numerous surface realizations. In this work we aim to alleviate these issues by proposing $\mathcal{MF}_\beta$, a decomposable metric that builds on two pillars. The first is the principle of meaning preservation $\mathcal{M}$: it measures to what extent a given AMR can be reconstructed from the generated sentence using SOTA AMR parsers and applying (fine-grained) AMR evaluation metrics to measure the distance between the original and the reconstructed AMR. The second pillar builds on a principle of (grammatical) form $\mathcal{F}$ that measures the linguistic quality of the generated text, which we implement using SOTA language models. In two extensive pilot studies we show that fulfillment of both principles offers benefits for AMR-to-text evaluation, including explainability of scores. Since $\mathcal{MF}_\beta$ does not necessarily rely on gold AMRs, it may extend to other text generation tasks.
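The $\beta$ subscript suggests the two pillar scores are combined in the style of the classic $F_\beta$ measure. The following is a minimal sketch of such a combination, assuming a standard weighted harmonic mean of a meaning-preservation score and a form score (the paper's exact weighting convention, and which score $\beta$ favors, are assumptions here, not taken from the abstract):

```python
def mf_beta(meaning: float, form: float, beta: float = 1.0) -> float:
    """Combine a meaning-preservation score and a form (fluency) score,
    each in [0, 1], via an F_beta-style weighted harmonic mean.

    This is an illustrative assumption about the combination scheme:
    with beta = 1 the two scores are weighted equally; the harmonic
    mean penalizes outputs that are fluent but meaning-divergent (or
    vice versa) more than an arithmetic mean would.
    """
    if meaning == 0.0 and form == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * meaning * form / (b2 * meaning + form)


# A fluent but unfaithful output scores low overall despite high form:
score = mf_beta(meaning=0.2, form=0.9)
```

Because the metric is decomposable, the two inputs can be inspected separately to explain a low overall score: a low `meaning` value points to semantic divergence between the original and reconstructed AMR, a low `form` value to poor linguistic quality.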
