Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding

arXiv: 2307.07880
15 July 2023
Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze
Topics: AAML, VLM, LRM

Papers citing "Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding"

10 / 10 papers shown

Dialogue Ontology Relation Extraction via Constrained Chain-of-Thought Decoding
Renato Vukovic, David Arps, Carel van Niekerk, Benjamin Matthias Ruppik, Hsien-chin Lin, Michael Heck, Milica Gašić
05 Aug 2024

AdaMergeX: Cross-Lingual Transfer with Large Language Models via Adaptive Adapter Merging
Yiran Zhao, Wenxuan Zhang, Huiming Wang, Kenji Kawaguchi, Lidong Bing
Topics: MoMe
29 Feb 2024

Decomposed Prompting: Unveiling Multilingual Linguistic Structure Knowledge in English-Centric Large Language Models
Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Farber, Frauke Kreuter, Hinrich Schütze
Topics: ReLM
28 Feb 2024

GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network
Shuzhou Yuan, Ercong Nie, Michael Farber, Helmut Schmid, Hinrich Schütze
18 Feb 2024

Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers
Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Farber
18 Feb 2024

ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks
Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Farber, Frauke Kreuter, Hinrich Schütze
Topics: VLM
29 Jan 2024

Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
Zhengxiang Shi, Aldo Lipani
Topics: VLM, CLL
02 May 2023

Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models
Lifu Tu, Caiming Xiong, Yingbo Zhou
Topics: VLM, AAML, LRM
22 Oct 2022

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020