Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning

arXiv: 2210.07565 · 14 October 2022
Tianxiang Sun, Zhengfu He, Qinen Zhu, Xipeng Qiu, Xuanjing Huang
Tags: VLM, VPVLM

Papers citing "Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning" (11 of 11 shown)

1. Efficient Knowledge Transfer in Multi-Task Learning through Task-Adaptive Low-Rank Representation
   Xiao Zhang, Kangsheng Wang, Tianyu Hu, Huimin Ma
   20 Apr 2025

2. Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model
   Zijian Dong, Yilei Wu, Zijiao Chen, Yichi Zhang, Yueming Jin, Juan Helen Zhou
   20 Aug 2024

3. BBTv2: Towards a Gradient-Free Future with Large Language Models
   Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
   23 May 2022

4. SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
   Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
   Tags: VLM, LRM
   15 Oct 2021

5. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
   Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
   Tags: VLM
   14 Oct 2021

6. Paradigm Shift in Natural Language Processing
   Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang
   26 Sep 2021

7. CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
   Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Hang Yan, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
   Tags: MedIm
   13 Sep 2021

8. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Tags: VPVLM
   18 Apr 2021

9. Making Pre-trained Language Models Better Few-shot Learners
   Tianyu Gao, Adam Fisch, Danqi Chen
   31 Dec 2020

10. Pre-trained Models for Natural Language Processing: A Survey
    Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
    Tags: LM&MA, VLM
    18 Mar 2020

11. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
    Timo Schick, Hinrich Schütze
    21 Jan 2020