ResearchTrend.AI
COAST: Enhancing the Code Debugging Ability of LLMs through Communicative Agent Based Data Synthesis
arXiv:2408.05006 · 9 August 2024
Weiqing Yang, Hanbin Wang, Zhenghao Liu, Xinze Li, Yukun Yan, Shuo Wang, Yu Gu, Minghe Yu, Zhiyuan Liu, Ge Yu

Papers citing "COAST: Enhancing the Code Debugging Ability of LLMs through Communicative Agent Based Data Synthesis"

6 / 6 papers shown
Integrating Expert Knowledge into Logical Programs via LLMs
Franciszek Górski, Oskar Wysocki, Marco Valentino, André Freitas
0 citations · 17 Feb 2025
MdEval: Massively Multilingual Code Debugging
Shukai Liu, Linzheng Chai, Jian Yang, Jiajun Shi, He Zhu, ..., Yu Hao, Liqun Yang, Guanglin Niu, Ge Zhang, Z. Li
LRM, ELM · 6 citations · 04 Nov 2024
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
DeepSeek-AI: Xiao Bi, Deli Chen, Guanting Chen, ..., Yao Zhao, Shangyan Zhou, Shunfeng Zhou, Qihao Zhu, Yuheng Zou
LRM, ALM · 298 citations · 05 Jan 2024
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM · 8,261 citations · 28 Jan 2022
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM · 780 citations · 14 Oct 2021
CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
1,451 citations · 02 Sep 2021