Beyond Self-learned Attention: Mitigating Attention Bias in Transformer-based Models Using Attention Guidance
Jiri Gesi, Iftekhar Ahmed
arXiv:2402.16790, 26 February 2024
Papers citing "Beyond Self-learned Attention: Mitigating Attention Bias in Transformer-based Models Using Attention Guidance"
4 / 4 papers shown
A Systematic Evaluation of Large Language Models of Code
Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn
26 Feb 2022

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
02 Sep 2021

CURE: Code-Aware Neural Machine Translation for Automatic Program Repair
Nan Jiang, Thibaud Lutellier, Lin Tan
26 Feb 2021

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
09 Feb 2021