Plausibility Vaccine: Injecting LLM Knowledge for Event Plausibility

16 March 2025
Jacob Chmura
Jonah Dauvet
Sebastian Sabry
Abstract

Despite advances in language modelling, distributional methods that build semantic representations from co-occurrences fail to discriminate between plausible and implausible events. In this work, we investigate how plausibility prediction can be improved by injecting latent knowledge prompted from large language models using parameter-efficient fine-tuning. We train 12 task adapters to learn various physical properties and association measures and perform adapter fusion to compose latent semantic knowledge from each task on top of pre-trained ALBERT embeddings. We automate auxiliary task data generation, which enables us to scale our approach and fine-tune our learned representations across two plausibility datasets. Our code is available at this https URL.
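The adapter-fusion setup the abstract describes — several task adapters trained independently, then composed by an attention mechanism over their outputs — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the bottleneck size, the query/key attention form, and the use of raw sentence embeddings in place of per-layer ALBERT hidden states are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class AdapterFusion(nn.Module):
    """Attention over the outputs of several task adapters (sketch)."""
    def __init__(self, hidden_dim: int, n_adapters: int):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(hidden_dim) for _ in range(n_adapters))
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Run every adapter on the shared embedding: (batch, n_adapters, hidden)
        outs = torch.stack([a(h) for a in self.adapters], dim=1)
        q = self.query(h).unsqueeze(1)                 # (batch, 1, hidden)
        k = self.key(outs)                             # (batch, n_adapters, hidden)
        attn = torch.softmax((q * k).sum(-1), dim=-1)  # (batch, n_adapters)
        # Attention-weighted combination of adapter outputs
        return (attn.unsqueeze(-1) * outs).sum(dim=1)  # (batch, hidden)

# Toy usage: 12 adapters over 768-dim embeddings (ALBERT-base hidden size)
fusion = AdapterFusion(hidden_dim=768, n_adapters=12)
h = torch.randn(4, 768)  # stand-in for pre-trained sentence embeddings
fused = fusion(h)
print(fused.shape)  # torch.Size([4, 768])
```

In the paper's pipeline, each adapter would first be trained on one auxiliary task (a physical property or association measure) with the backbone frozen, and only the fusion parameters would then be trained on the plausibility datasets.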

@article{chmura2025_2503.12667,
  title={Plausibility Vaccine: Injecting LLM Knowledge for Event Plausibility},
  author={Jacob Chmura and Jonah Dauvet and Sebastian Sabry},
  journal={arXiv preprint arXiv:2503.12667},
  year={2025}
}