Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
arXiv:2306.10790 · 19 June 2023
Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Hu Feng, Xichen Shang, Haibin Chen
Communities: AAML, KELM, CML, CLL

Papers citing "Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference" (6 / 6 papers shown)

Distilling Causal Effect from Miscellaneous Other-Class for Continual Named Entity Recognition
Junhao Zheng, Zhanxian Liang, Haibin Chen, Qianli Ma · CML · 08 Oct 2022

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang · LRM · 13 Sep 2021

Uncovering Main Causalities for Long-tailed Information Extraction
Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, Wei Lu · CML · 11 Sep 2021

Distilling Causal Effect of Data in Class-Incremental Learning
Xinting Hu, Kaihua Tang, C. Miao, Xiansheng Hua, Hanwang Zhang · CML · 02 Mar 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang · MoE · 25 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · KELM, AI4MH · 03 Sep 2019