Retrieval-Augmented Fine-Tuning With Preference Optimization For Visual Program Generation

23 February 2025
Deokhyung Kang, Jeonghun Cho, Yejin Jeon, Sunbin Jang, Minsub Lee, Jawoon Cho, Gary Geunbae Lee
Abstract

Visual programming languages (VPLs) allow users to create programs through graphical interfaces, which makes them more accessible and has led to their widespread use across various domains. To further enhance this accessibility, recent research has focused on generating VPL code from user instructions using large language models (LLMs). By employing prompting-based methods, these studies have shown promising results. Nevertheless, such approaches can be less effective for industrial VPLs such as Ladder Diagram (LD). LD is a pivotal language in industrial automation processes and involves extensive domain-specific configurations, which are difficult to capture in a single prompt. In this work, we demonstrate that training-based methods outperform prompting-based methods in LD generation accuracy, even with smaller backbone models. Building on these findings, we propose a two-stage training strategy to further enhance VPL generation. First, we employ retrieval-augmented fine-tuning to leverage the repetitive use of subroutines commonly seen in industrial VPLs. Second, we apply direct preference optimization (DPO) to further guide the model toward accurate outputs, using preference pairs generated systematically through graph editing operations. Extensive experiments on real-world LD data demonstrate that our approach improves program-level accuracy by over 10% compared to supervised fine-tuning, highlighting its potential to advance industrial automation.
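To make the two training stages more concrete, the Python sketch below shows one plausible way to (1) assemble a retrieval-augmented fine-tuning prompt from retrieved subroutines and (2) construct a DPO preference pair by corrupting a gold program graph with simple edit operations. The function names, prompt template, and edit operations are illustrative assumptions for this summary, not the authors' actual implementation.

# Minimal sketch, under assumptions: (1) retrieval-augmented fine-tuning is
# approximated by prepending retrieved similar subroutines to the instruction;
# (2) DPO preference pairs are built by corrupting a gold program graph with
# edit operations so the gold program is "chosen" and the corrupted variant
# is "rejected". All names here are hypothetical.

import math
import random


def build_raft_prompt(instruction, retrieved_subroutines):
    """Concatenate retrieved subroutine code with the user instruction."""
    context = "\n\n".join(retrieved_subroutines)
    return (
        f"### Retrieved subroutines:\n{context}\n\n"
        f"### Instruction:\n{instruction}\n\n### Program:\n"
    )


def corrupt_graph(gold_graph, num_edits=2):
    """Apply simple graph edit operations (drop an edge or a node) to a gold
    program graph to produce a plausible-but-incorrect 'rejected' variant."""
    g = {"nodes": list(gold_graph["nodes"]), "edges": list(gold_graph["edges"])}
    for _ in range(num_edits):
        if g["edges"] and random.random() < 0.5:
            g["edges"].pop(random.randrange(len(g["edges"])))   # drop an edge
        elif len(g["nodes"]) > 1:
            g["nodes"].pop(random.randrange(len(g["nodes"])))   # drop a node
    return g


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective on sequence log-probabilities: -log sigmoid of the
    policy-vs-reference log-ratio margin between chosen and rejected outputs."""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

In this reading, the "chosen" response is the gold LD program and the "rejected" one is a serialization of its corrupted graph; the DPO objective then increases the policy's log-probability margin between the two relative to a frozen reference model.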

@article{kang2025_2502.16529,
  title={Retrieval-Augmented Fine-Tuning With Preference Optimization For Visual Program Generation},
  author={Deokhyung Kang and Jeonghun Cho and Yejin Jeon and Sunbin Jang and Minsub Lee and Jawoon Cho and Gary Geunbae Lee},
  journal={arXiv preprint arXiv:2502.16529},
  year={2025}
}