GPTA: Generative Prompt Tuning Assistant for Synergistic Downstream Neural Network Enhancement with LLMs

29 March 2024
Xiao Liu, Jiawei Zhang

Papers citing "GPTA: Generative Prompt Tuning Assistant for Synergistic Downstream Neural Network Enhancement with LLMs"

4 / 4 papers shown

Human Still Wins over LLM: An Empirical Study of Active Learning on Domain-Specific Annotation Tasks
Yuxuan Lu, Bingsheng Yao, Shao Zhang, Yun Wang, Peng Zhang, T. Lu, Toby Jia-Jun Li, Dakuo Wang
16 Nov 2023

A Comparative Analysis of Task-Agnostic Distillation Methods for Compressing Transformer Language Models
Takuma Udagawa, Aashka Trivedi, Michele Merler, Bishwaranjan Bhattacharjee
13 Oct 2023

Knowledge Distillation Transfer Sets and their Impact on Downstream NLU Tasks
Charith Peris, Lizhen Tan, Thomas Gueudré, Turan Gojayev, Vivi Wei, Gokmen Oz
10 Oct 2022

Teaching Machines to Read and Comprehend
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom
10 Jun 2015