ResearchTrend.AI
To Tune or Not To Tune? How About the Best of Both Worlds?
arXiv:1907.05338, 9 July 2019
Ran A. Wang, Haibo Su, Chunye Wang, Kailin Ji, J. Ding
VLM

Papers citing "To Tune or Not To Tune? How About the Best of Both Worlds?"

6 papers shown
Natural Language Understanding for Argumentative Dialogue Systems in the Opinion Building Domain
W. A. Abro, Annalena Aicher, Niklas Rach, Stefan Ultes, Wolfgang Minker, Guilin Qi
03 Mar 2021
Self-Tuning for Data-Efficient Deep Learning
Ximei Wang, Jing Gao, Mingsheng Long, Jianmin Wang
BDL
25 Feb 2021
Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data and Methodology
Yangjun Zhang, Pengjie Ren, Maarten de Rijke
21 Aug 2020
Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer
Siddhant Garg, Rohit Kumar Sharma, Yingyu Liang
10 Apr 2020
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
S. Rothe, Shashi Narayan, Aliaksei Severyn
SILM
29 Jul 2019
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018