Investigating Multi-source Active Learning for Natural Language Inference

14 February 2023
Ard Snijders, Douwe Kiela, Katerina Margatina
arXiv:2302.06976

Papers citing "Investigating Multi-source Active Learning for Natural Language Inference"

13 / 13 papers shown

On the Pros and Cons of Active Learning for Moral Preference Elicitation
Vijay Keswani, Vincent Conitzer, Hoda Heidari, Jana Schaich Borg, Walter Sinnott-Armstrong (26 Jul 2024)

STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models
Linhai Zhang, Jialong Wu, Deyu Zhou, Guoqiang Xu (02 Mar 2024)

Perturbation-Based Two-Stage Multi-Domain Active Learning
Rui He, Zeyu Dai, Shan He, Ke Tang (19 Jun 2023)

Active Learning Principles for In-Context Learning with Large Language Models
Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu (23 May 2023)

On the Limitations of Simulating Active Learning
Katerina Margatina, Nikolaos Aletras (21 May 2023)

Multi-Domain Learning From Insufficient Annotations
Tahira Shehzadi, Shengcai Liu, D. Stricker, Marcus Liwicki, Muhammad Zeshan Afzal (04 May 2023)

State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin (06 Oct 2022)

Is More Data Better? Re-thinking the Importance of Efficiency in Abusive Language Detection with Transformers-Based Active Learning
Hannah Rose Kirk, Bertie Vidgen, Scott A. Hale (21 Sep 2022)

Active Learning for Argument Strength Estimation
Nataliia Kees, Michael Fromm, Evgeniy Faerman, T. Seidl (23 Sep 2021)

Cold-start Active Learning through Self-supervised Language Modeling
Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber (19 Oct 2020)

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant (21 Aug 2019)

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme (02 May 2018)

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani (06 Jun 2015)