ResearchTrend.AI
  • Papers
  • Communities
  • Events
  • Blog
  • Pricing
Papers
Communities
Social Events
Terms and Conditions
Pricing
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2503.05445
59
0

ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via Backdoor Attack

7 March 2025
Meiyu Lin
Haichuan Zhang
Jiale Lao
Renyuan Li
Yuanchun Zhou
Carl Yang
Yang Cao
Mingjie Tang
    SILM
ArXivPDFHTML
Abstract

Large language models (LLMs) have shown state-of-the-art results in translating natural language questions into SQL queries (Text-to-SQL), a long-standing challenge within the database community. However, security concerns remain largely unexplored, particularly the threat of backdoor attacks, which can introduce malicious behaviors into models through fine-tuning with poisoned datasets. In this work, we systematically investigate the vulnerabilities of LLM-based Text-to-SQL models and present ToxicSQL, a novel backdoor attack framework. Our approach leverages stealthy {semantic and character-level triggers} to make backdoors difficult to detect and remove, ensuring that malicious behaviors remain covert while maintaining high model accuracy on benign inputs. Furthermore, we propose leveraging SQL injection payloads as backdoor targets, enabling the generation of malicious yet executable SQL queries, which pose severe security and privacy risks in language model-based SQL development. We demonstrate that injecting only 0.44% of poisoned data can result in an attack success rate of 79.41%, posing a significant risk to database security. Additionally, we propose detection and mitigation strategies to enhance model reliability. Our findings highlight the urgent need for security-aware Text-to-SQL development, emphasizing the importance of robust defenses against backdoor threats.

View on arXiv
@article{lin2025_2503.05445,
  title={ ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via Backdoor Attack },
  author={ Meiyu Lin and Haichuan Zhang and Jiale Lao and Renyuan Li and Yuanchun Zhou and Carl Yang and Yang Cao and Mingjie Tang },
  journal={arXiv preprint arXiv:2503.05445},
  year={ 2025 }
}
Comments on this paper