Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation
Extraction
Recently, prompt-tuning has achieved promising results on certain few-class classification tasks. The core idea of prompt-tuning is to insert text pieces, i.e., a template, into the input, transforming a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to verify the effectiveness of a single handcrafted or auto-searched label word in non-few-shot scenarios; such approaches also fail to leverage the abundant semantic knowledge in entities and relation labels. To this end, we focus on incorporating knowledge into prompt-tuning for relation extraction and propose a knowledge-aware prompt-tuning approach with synergistic optimization (KNIGHT). Specifically, we inject entity and relation knowledge into prompt construction with learnable virtual template words and answer words, and we jointly optimize their representations with knowledge constraints. Extensive experiments on five datasets under standard and low-resource settings demonstrate the effectiveness of our approach.
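To make the prompt-tuning idea above concrete, the following is a minimal sketch of how a relation-extraction instance can be wrapped in a cloze-style template with a [MASK] slot for a masked language model to fill. It is not the paper's implementation; the template wording, the `build_prompt` helper, and the label-to-answer-word mapping are illustrative assumptions.

```python
# Minimal sketch of prompt construction for relation extraction.
# The template text and answer-word mapping below are illustrative
# assumptions, not the paper's exact design.

# Map each relation label to an answer word the masked language model
# should predict at the [MASK] position (the paper learns virtual
# answer words instead of fixed ones).
ANSWER_WORDS = {
    "per:employee_of": "employee",
    "org:founded_by": "founder",
    "no_relation": "none",
}

def build_prompt(sentence: str, subj: str, obj: str) -> str:
    """Wrap an input sentence in a cloze-style template so that
    relation classification becomes masked word prediction."""
    return f"{sentence} {subj} is the [MASK] of {obj}."

if __name__ == "__main__":
    prompt = build_prompt(
        "Steve Jobs co-founded Apple in 1976.",
        subj="Steve Jobs",
        obj="Apple",
    )
    print(prompt)
    # A masked language model scores each answer word at [MASK];
    # the highest-scoring word (here, ideally "founder") selects
    # the predicted relation label.
```

In this sketch the template and answer words are fixed strings; the approach described above instead makes both learnable (virtual template and answer words) and optimizes their representations jointly under knowledge constraints.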