Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence

8 May 2022
Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz

Papers citing "Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence"

6 papers shown
Paraphrasing in Affirmative Terms Improves Negation Understanding
MohammadHossein Rezaei, Eduardo Blanco
11 Jun 2024

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM
01 Feb 2021

BERT & Family Eat Word Salad: Experiments with Text Understanding
Ashim Gupta, Giorgi Kvernadze, Vivek Srikumar
10 Jan 2021

UnNatural Language Inference
Koustuv Sinha, Prasanna Parthasarathi, Joelle Pineau, Adina Williams
30 Dec 2020

Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks?
Thang M. Pham, Trung Bui, Long Mai, Anh Totti Nguyen
30 Dec 2020

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH
03 Sep 2019