Towards Simple and Efficient Task-Adaptive Pre-training for Text Classification

arXiv: 2209.12943 · 26 September 2022
Arnav Ladkat, Aamir Miyajiwala, Samiksha Jagadale, Rekha Kulkarni, Raviraj Joshi
VLM

Papers citing "Towards Simple and Efficient Task-Adaptive Pre-training for Text Classification"

3 / 3 papers shown
L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources
Raviraj Joshi
02 Feb 2022

Task-adaptive Pre-training of Language Models with Word Embedding Regularization
Kosuke Nishida, Kyosuke Nishida, Sen Yoshida
VLM
17 Sep 2021

Evaluating Deep Learning Approaches for Covid19 Fake News Detection
Apurva Wani, Isha Joshi, Snehal Khandve, Vedangi Wagh, Raviraj Joshi
GNN
11 Jan 2021