ResearchTrend.AI

AnchorAL: Computationally Efficient Active Learning for Large and Imbalanced Datasets
arXiv:2404.05623

8 April 2024
Pietro Lesci
Andreas Vlachos
Papers citing "AnchorAL: Computationally Efficient Active Learning for Large and Imbalanced Datasets"

7 papers listed.

Annotation Efficiency: Identifying Hard Samples via Blocked Sparse Linear Bandits
Adit Jain, Soumyabrata Pal, Sunav Choudhary, Ramasuri Narayanam, Vikram Krishnamurthy
26 Oct 2024

On the Limitations of Simulating Active Learning
Katerina Margatina, Nikolaos Aletras
21 May 2023

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan, Dallas Card, Sarah K. Drier, E. K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith
25 Jan 2022

Influence-Balanced Loss for Imbalanced Visual Classification
Seulki Park, Jongin Lim, Younghan Jeon, J. Choi
06 Oct 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
14 Jul 2021

Cold-start Active Learning through Self-supervised Language Modeling
Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber
19 Oct 2020

SMOTE: Synthetic Minority Over-sampling Technique
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer
09 Jun 2011