ResearchTrend.AI

ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction

13 October 2023
Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, Weinan Zhang

Papers citing "ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction"

8 papers shown
Scaled Supervision is an Implicit Lipschitz Regularizer
Z. Ouyang, Chunhui Zhang, Yaning Jia, Soroush Vosoughi
Tags: BDL, OffRL
19 Mar 2025

Balancing Efficiency and Effectiveness: An LLM-Infused Approach for Optimized CTR Prediction
Guoxiao Zhang, Yi Wei, Yadong Zhang, Huajian Feng, Qiang Liu
09 Dec 2024

Efficiency Unleashed: Inference Acceleration for LLM-based Recommender Systems with Speculative Decoding
Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, W. Liu, Ruiming Tang, Zhewei Wei, W. Zhang, Yong Yu
Tags: OffRL
11 Aug 2024

PTab: Using the Pre-trained Language Model for Modeling Tabular Data
Guangyi Liu, Jie-jin Yang, Ledell Yu Wu
Tags: LMTD
15 Sep 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM
14 Oct 2021

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung
Tags: VLM
10 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML
01 Jan 2021