Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities
arXiv:2212.10017 · 20 December 2022
Wei Ma, Shangqing Liu, Mengjie Zhao, Xiaofei Xie, Wenhan Wang, Q. Hu, Jiexin Zhang, Yang Liu

Papers citing "Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities" (5 of 5 papers shown):

Can Large Language Models Understand Intermediate Representations?
Hailong Jiang, Jianfeng Zhu, Yao Wan, B. Fang, Hongyu Zhang, Ruoming Jin, Qiang Guan (07 Feb 2025)

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi (02 Sep 2021)

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu (09 Feb 2021)

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni (03 May 2018)