Goal-guided Generative Prompt Injection Attack on Large Language Models

arXiv:2404.07234 · 6 April 2024
Chong Zhang, Mingyu Jin, Qinkai Yu, Chengzhi Liu, Haochen Xue, Xiaobo Jin
Communities: AAML, SILM

Papers citing "Goal-guided Generative Prompt Injection Attack on Large Language Models" (7 / 7 papers shown)

1. Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs
   Chetan Pathade · AAML, SILM · 07 May 2025

2. Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities
   Mingyu Jin, Hua Tang, Chong Zhang, Qinkai Yu, Chengzhi Liu, Suiyuan Zhu, Yongfeng Zhang, Mengnan Du · AI4TS · 31 Dec 2024

3. Palisade -- Prompt Injection Detection Framework
   Sahasra Kokkula, Somanathan R, Nandavardhan R, Aashishkumar, G Divya · AAML · 28 Oct 2024

4. Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation
   Yuxi Li, Yi Liu, Yuekang Li, Ling Shi, Gelei Deng, Shengquan Chen, Kailong Wang · 20 May 2024

5. AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on Large Language Models
   Dong Shu, Mingyu Jin, Suiyuan Zhu, Beichen Wang, Zihao Zhou, Chong Zhang, Yongfeng Zhang · ELM · 17 Jan 2024

6. Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
   Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh · AAML · 16 Oct 2023

7. Gradient-based Adversarial Attacks against Text Transformers
   Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela · SILM · 15 Apr 2021