Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition

23 February 2024
Haodong Zhao, Ruifang He, Mengnan Xiao, Jing Xu

Papers citing "Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition"

4 of 4 papers shown:

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
14 Oct 2021

Fairness-aware Class Imbalanced Learning
Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann
21 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
01 Jan 2021