arXiv:2402.16790
Beyond Self-learned Attention: Mitigating Attention Bias in Transformer-based Models Using Attention Guidance

26 February 2024
Jiri Gesi, Iftekhar Ahmed

Papers citing "Beyond Self-learned Attention: Mitigating Attention Bias in Transformer-based Models Using Attention Guidance"

4 / 4 papers shown
A Systematic Evaluation of Large Language Models of Code
Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn
ELM, ALM · 26 Feb 2022

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
02 Sep 2021

CURE: Code-Aware Neural Machine Translation for Automatic Program Repair
Nan Jiang, Thibaud Lutellier, Lin Tan
NAI · 26 Feb 2021

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
ELM · 09 Feb 2021