Why and When LLM-Based Assistants Can Go Wrong: Investigating the Effectiveness of Prompt-Based Interactions for Software Help-Seeking

12 February 2024
Anjali Khurana
Hariharan Subramonyam
Parmit K. Chilana
arXiv: 2402.08030 (PDF, HTML)

Papers citing "Why and When LLM-Based Assistants Can Go Wrong: Investigating the Effectiveness of Prompt-Based Interactions for Software Help-Seeking"

1 / 1 papers shown
Grounded Copilot: How Programmers Interact with Code-Generating Models
Shraddha Barke
M. James
Nadia Polikarpova
30 Jun 2022