ResearchTrend.AI

A Large-Scale Survey on the Usability of AI Programming Assistants: Successes and Challenges

arXiv:2303.17125 · 30 March 2023
Jenny T. Liang, Chenyang Yang, Brad A. Myers

Papers citing "A Large-Scale Survey on the Usability of AI Programming Assistants: Successes and Challenges"

10 of 10 citing papers shown.
Large Language Models are Qualified Benchmark Builders: Rebuilding Pre-Training Datasets for Advancing Code Intelligence Tasks
Kang Yang, Xinjun Mao, Shangwen Wang, Y. Wang, Tanghaoran Zhang, Bo Lin, Yihao Qin, Zhang Zhang, Yao Lu, Kamal Al-Sabahi
ALM · 57 · 1 · 0 · 28 Apr 2025
Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software
Anjali Khurana, Xiaotian Su, April Yi Wang, Parmit K. Chilana
33 · 0 · 0 · 22 Apr 2025
CursorCore: Assist Programming through Aligning Anything
Hao Jiang, Qi Liu, Rui Li, Shengyu Ye, Shijin Wang
43 · 1 · 0 · 09 Oct 2024
Can Developers Prompt? A Controlled Experiment for Code Documentation Generation
L. Herrmann, Tim Puhlfürß, Felix Dietrich
24 · 3 · 0 · 01 Aug 2024
RLSF: Reinforcement Learning via Symbolic Feedback
Piyush Jha, Prithwish Jana, Arnav Arora, Vijay Ganesh
LRM · 36 · 3 · 0 · 26 May 2024
Rocks Coding, Not Development--A Human-Centric, Experimental Evaluation of LLM-Supported SE Tasks
Wei Wang, Huilong Ning, Gaowei Zhang, Libo Liu, Yi Wang
19 · 11 · 0 · 08 Feb 2024
Grounded Copilot: How Programmers Interact with Code-Generating Models
Shraddha Barke, M. James, Nadia Polikarpova
136 · 212 · 0 · 30 Jun 2022
Productivity Assessment of Neural Code Completion
Albert Ziegler, Eirini Kalliamvakou, Shawn Simister, Ganesh Sittampalam, Alice Li, Andrew Rice, Devon Rifkin, E. Aftandilian
102 · 176 · 0 · 13 May 2022
A Systematic Evaluation of Large Language Models of Code
Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn
ELM · ALM · 196 · 624 · 0 · 26 Feb 2022
Leaving My Fingerprints: Motivations and Challenges of Contributing to OSS for Social Good
Yu Huang, Denae Ford, Thomas Zimmermann
37 · 31 · 0 · 26 Apr 2021