Poisoned LangChain: Jailbreak LLMs by LangChain
arXiv: 2406.18122
26 June 2024
Ziqiu Wang, Jun Liu, Shengkai Zhang, Yang Yang
Papers citing "Poisoned LangChain: Jailbreak LLMs by LangChain" (3 of 3 shown)
Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs
Chetan Pathade (AAML, SILM)
07 May 2025
InsightLens: Discovering and Exploring Insights from Conversational Contexts in Large-Language-Model-Powered Data Analysis
Luoxuan Weng, Xingbo Wang, Junyu Lu, Yingchaojie Feng, Yihan Liu, Wei Chen
02 Apr 2024
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang (BDL, LRM)
05 Oct 2022