ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2108.00356

Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning

1 August 2021
Chiyu Zhang
Muhammad Abdul-Mageed
Abstract

Masked language models (MLMs) are pre-trained with a denoising objective that is mismatched with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementary strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve a 2.34% F1 gain over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only 5% of training data (severely few-shot), our methods enable an impressive 68.54% average F1. The methods are also language agnostic, as we demonstrate in a zero-shot setting involving six datasets from three different languages.
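To make the idea of pragmatic masking concrete, here is a minimal illustrative sketch. It assumes that "social cues" means tokens such as hashtags and emojis, and that the masking budget is spent on those cue tokens before ordinary words; the paper's actual masking policy, cue inventory, and tokenization may differ.

```python
import random

MASK = "[MASK]"

def pragmatic_mask(tokens, mask_prob=0.15, rng=None):
    """Mask a fraction of tokens, preferring pragmatic cues
    (hashtags, emoji-like tokens) over ordinary words.
    Illustrative sketch only, not the paper's exact procedure."""
    rng = rng or random.Random(0)
    # Hypothetical cue detector: hashtags plus a tiny emoji set.
    emoji = {"😂", "❤️", "🔥"}
    def is_cue(tok):
        return tok.startswith("#") or tok in emoji

    cues = [i for i, t in enumerate(tokens) if is_cue(t)]
    others = [i for i, t in enumerate(tokens) if not is_cue(t)]
    n_mask = max(1, round(mask_prob * len(tokens)))
    # Spend the masking budget on cue tokens first, then fall
    # back to randomly chosen ordinary tokens.
    chosen = cues[:n_mask]
    if len(chosen) < n_mask:
        chosen += rng.sample(others, n_mask - len(chosen))
    chosen = set(chosen)
    return [MASK if i in chosen else t for i, t in enumerate(tokens)]

toks = "lol this game is 🔥 #excited #winning".split()
print(pragmatic_mask(toks, mask_prob=0.3))
```

With a 30% budget over seven tokens, two positions are masked, and both land on cue tokens (the 🔥 emoji and #excited), leaving the ordinary words to supply denoising context.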
