ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction
arXiv:2310.09234 · 13 October 2023
Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, Weinan Zhang

Papers citing "ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction" (8 papers)

Scaled Supervision is an Implicit Lipschitz Regularizer
Z. Ouyang, Chunhui Zhang, Yaning Jia, Soroush Vosoughi
BDL, OffRL · 19 Mar 2025

Balancing Efficiency and Effectiveness: An LLM-Infused Approach for Optimized CTR Prediction
Guoxiao Zhang, Yi Wei, Yadong Zhang, Huajian Feng, Qiang Liu
09 Dec 2024

Efficiency Unleashed: Inference Acceleration for LLM-based Recommender Systems with Speculative Decoding
Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, W. Liu, Ruiming Tang, Zhewei Wei, W. Zhang, Yong Yu
OffRL · 11 Aug 2024

PTab: Using the Pre-trained Language Model for Modeling Tabular Data
Guangyi Liu, Jie-jin Yang, Ledell Yu Wu
LMTD · 15 Sep 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM · 14 Oct 2021

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung
VLM · 10 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML · 01 Jan 2021