ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL (arXiv:2407.14733)

20 July 2024
Yunseon Choi, Sangmin Bae, Seonghyun Ban, Minchan Jeong, Chuheng Zhang, Lei Song, Li Zhao, Jiang Bian, Kee-Eung Kim
Tags: VLM, AAML

Papers citing "Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL"

5 papers shown
From General to Specific: Tailoring Large Language Models for Personalized Healthcare
Ruize Shi, Hong Huang, Wei Zhou, Kehan Yin, Kai Zhao, Yun Zhao
Tags: LM&MA, AI4MH
20 Dec 2024
Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models
Zhijie Tan, Xu Chu, Weiping Li, Tong Mo
22 Oct 2024
Eliciting Textual Descriptions from Representations of Continuous Prompts
Dana Ramati, Daniela Gottesman, Mor Geva
15 Oct 2024
Prompt-aligned Gradient for Prompt Tuning
Beier Zhu, Yulei Niu, Yucheng Han, Yuehua Wu, Hanwang Zhang
Tags: VLM
30 May 2022
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021