332
v1v2v3 (latest)

VisPoison: An Effective Backdoor Attack Framework for Tabular Data Visualization Models

Main:12 Pages
7 Figures
Bibliography:2 Pages
20 Tables
Abstract

Text-to-visualization (text-to-vis) models for tabular data have become essential tools in the era of big data, enabling users to generate visualizations and make data-driven decisions through natural language queries (NLQs). Despite their growing adoption, the security vulnerabilities of these models remain largely unexplored. To address this gap, we propose VisPoison, a backdoor attack framework that realistically simulates three types of attacks on text-to-vis models via data poisoning: data exposure, misleading visualizations, and denial-of-service (DoS). Specifically, VisPoison introduces two types of stealthy triggers to enable both proactive and passive backdoor activations. Proactive triggers are deliberately inserted by attackers using rare-word patterns to extract sensitive information, whereas passive triggers are unintentionally activated by users through first-word prompts, resulting in visualization errors or DoS failures. To support these triggers, we craft specialized payloads for visualization queries that allow compromised models to function normally on benign inputs while producing malicious outputs in the presence of triggers. Extensive evaluations on both trainable and in-context learning (ICL)-based text-to-vis models show that VisPoison achieves attack success rates exceeding 90\%, exposing serious vulnerabilities. Additionally, existing defense strategies reveal limited effectiveness against VisPoison, underscoring the urgent need for more robust and security-aware text-to-vis systems to safeguard human-data interaction.

View on arXiv
Comments on this paper