16
1

Emerging Cyber Attack Risks of Medical AI Agents

Abstract

Large language models (LLMs)-powered AI agents exhibit a high level of autonomy in addressing medical and healthcare challenges. With the ability to access various tools, they can operate within an open-ended action space. However, with the increase in autonomy and ability, unforeseen risks also arise. In this work, we investigated one particular risk, i.e., cyber attack vulnerability of medical AI agents, as agents have access to the Internet through web browsing tools. We revealed that through adversarial prompts embedded on webpages, cyberattackers can: i) inject false information into the agent's response; ii) they can force the agent to manipulate recommendation (e.g., healthcare products and services); iii) the attacker can also steal historical conversations between the user and agent, resulting in the leak of sensitive/private medical information; iv) furthermore, the targeted agent can also cause a computer system hijack by returning a malicious URL in its response. Different backbone LLMs were examined, and we found such cyber attacks can succeed in agents powered by most mainstream LLMs, with the reasoning models such as DeepSeek-R1 being the most vulnerable.

View on arXiv
@article{qiu2025_2504.03759,
  title={ Emerging Cyber Attack Risks of Medical AI Agents },
  author={ Jianing Qiu and Lin Li and Jiankai Sun and Hao Wei and Zhe Xu and Kyle Lam and Wu Yuan },
  journal={arXiv preprint arXiv:2504.03759},
  year={ 2025 }
}
Comments on this paper