CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context

20 December 2022
Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, M. K. Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang

Papers citing "CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context"

7 / 7 papers shown
Knowledge Augmented Complex Problem Solving with Large Language Models: A Survey
Da Zheng, Lun Du, Junwei Su, Yuchen Tian, Yuqi Zhu, Jintian Zhang, Lanning Wei, Ningyu Zhang, H. Chen
LRM · 49 · 0 · 0 · 06 May 2025

Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui
33 · 20 · 0 · 03 Sep 2023

RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation
Fengji Zhang, B. Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen
25 · 218 · 0 · 22 Mar 2023

Grounded Copilot: How Programmers Interact with Code-Generating Models
Shraddha Barke, M. James, Nadia Polikarpova
136 · 212 · 0 · 30 Jun 2022

A Systematic Evaluation of Large Language Models of Code
Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn
ELM · ALM · 196 · 624 · 0 · 26 Feb 2022

Can Machines Read Coding Manuals Yet? -- A Benchmark for Building Better Language Models for Code Understanding
Ibrahim Abdelaziz, Julian T Dolby, Jamie McCusker, Kavitha Srinivas
ELM · 31 · 5 · 0 · 15 Sep 2021

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
204 · 1,451 · 0 · 02 Sep 2021