ResearchTrend.AI

InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding (arXiv:2306.04933)

8 June 2023
Junda Wu, Tong Yu, Rui Wang, Zhao-quan Song, Ruiyi Zhang, Handong Zhao, Chaochao Lu, Shuai Li, Ricardo Henao
VLM

Papers citing "InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding"

12 papers shown
Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024
Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
Yunseon Choi, Sangmin Bae, Seonghyun Ban, Minchan Jeong, Chuheng Zhang, Lei Song, Li Zhao, Jiang Bian, Kee-Eung Kim
VLM, AAML
20 Jul 2024
How to Protect Copyright Data in Optimization of Large Language Models?
T. Chu, Zhao-quan Song, Chiwun Yang
23 Aug 2023
Efficient SGD Neural Network Training via Sublinear Activated Neuron Identification
Lianke Qin, Zhao-quan Song, Yuanyuan Yang
13 Jul 2023
An Iterative Algorithm for Rescaled Hyperbolic Functions Regression
Yeqi Gao, Zhao-quan Song, Junze Yin
01 May 2023
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022
The Power of Prompt Tuning for Low-Resource Semantic Parsing
Nathan Schucher, Siva Reddy, H. D. Vries
VLM
16 Oct 2021
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
VLM, LRM
15 Oct 2021
Leveraging Information Bottleneck for Scientific Document Summarization
Jiaxin Ju, Ming Liu, Huan Yee Koh, Yuan Jin, Lan Du, Shirui Pan
04 Oct 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML
01 Jan 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018