CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure

7 October 2022
Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao

Papers citing "CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure"

11 / 11 papers shown
• Toward Neurosymbolic Program Comprehension
  Alejandro Velasco, Aya Garryyeva, David Nader-Palacio, Antonio Mastropaolo, Denys Poshyvanyk
  03 Feb 2025
• MPCODER: Multi-user Personalized Code Generator with Explicit and Implicit Style Representation Learning
  Zhenlong Dai, Chang Yao, WenKang Han, Ying Yuan, Zhipeng Gao, Jingyuan Chen
  25 Jun 2024
• A Critical Study of What Code-LLMs (Do Not) Learn
  Abhinav Anand, Shweta Verma, Krishna Narasimhan, Mira Mezini
  17 Jun 2024
• AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
  Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, David Lo
  25 Apr 2024
• Structure-aware Fine-tuning for Code Pre-trained Models
  Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
  11 Apr 2024
• INSPECT: Intrinsic and Systematic Probing Evaluation for Code Transformers
  Anjan Karmakar, Romain Robbes
  08 Dec 2023
• Representational Strengths and Limitations of Transformers
  Clayton Sanford, Daniel J. Hsu, Matus Telgarsky
  05 Jun 2023
• Deep Learning Meets Software Engineering: A Survey on Pre-Trained Models of Source Code
  Changan Niu, Chuanyi Li, Bin Luo, Vincent Ng
  24 May 2022
• CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
  Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
  02 Sep 2021
• CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
  Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
  09 Feb 2021
• Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
  Haoyi Zhou, Shanghang Zhang, J. Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wan Zhang
  14 Dec 2020