
Empirical evaluation of LLMs in predicting fixes of Configuration bugs in Smart Home System

Abstract

This empirical study evaluates the effectiveness of Large Language Models (LLMs) in predicting fixes for configuration bugs in smart home systems. The research analyzes three prominent LLMs, GPT-4, GPT-4o (GPT-4 Turbo), and Claude 3.5 Sonnet, using four distinct prompt designs to assess their ability to identify appropriate fix strategies and generate correct solutions. The study utilized a dataset of 129 debugging issues from the Home Assistant Community, focusing on 21 randomly selected cases for in-depth analysis. Results demonstrate that GPT-4 and Claude 3.5 Sonnet achieved 80% accuracy in strategy prediction when provided with both bug descriptions and original scripts. GPT-4 exhibited consistent performance across different prompt types, while GPT-4o showed advantages in speed and cost-effectiveness despite slightly lower accuracy. The findings reveal that prompt design significantly impacts model performance, with comprehensive prompts containing both description and original script yielding the best results. This research provides valuable insights for improving automated bug fixing in smart home system configurations and demonstrates the potential of LLMs in addressing configuration-related challenges.
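To make the best-performing setup concrete, below is a minimal sketch of the prompt design the abstract identifies as strongest: a single prompt that pairs the bug description with the original configuration script and asks the model to name a fix strategy before emitting the corrected script. This is not the authors' code; the model name, the Home Assistant YAML snippet, and the prompt wording are illustrative assumptions, using only the standard OpenAI Python client.

```python
# Sketch of a "description + original script" prompt, assuming the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment. The YAML automation
# and its bug are hypothetical examples, not cases from the paper's dataset.
from openai import OpenAI

client = OpenAI()

bug_description = (
    "Automation never fires: the light should turn on at sunset, "
    "but nothing happens."
)

# Hypothetical Home Assistant automation with a configuration bug
# (the trigger platform name is misspelled).
original_script = """\
automation:
  - alias: "Sunset lights"
    trigger:
      - platform: sunn   # typo: should be 'sun'
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.living_room
"""

# Combine both pieces of context, mirroring the comprehensive prompt design
# that the study found most effective.
prompt = (
    "You are debugging a Home Assistant configuration.\n\n"
    f"Bug description:\n{bug_description}\n\n"
    f"Original script:\n{original_script}\n"
    "First state the fix strategy in one sentence, then output the "
    "corrected YAML."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Under this framing, the model's one-sentence strategy can be compared against a ground-truth strategy label, and the emitted YAML against the known fix, which is how accuracy figures like the 80% reported above could be scored.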

@article{monisha2025_2502.10953,
  title={Empirical evaluation of LLMs in predicting fixes of Configuration bugs in Smart Home System},
  author={Sheikh Moonwara Anjum Monisha and Atul Bharadwaj},
  journal={arXiv preprint arXiv:2502.10953},
  year={2025}
}