Poisoned LangChain: Jailbreak LLMs by LangChain

26 June 2024
Ziqiu Wang
Jun Liu
Shengkai Zhang
Yang Yang
arXiv: 2406.18122

Papers citing "Poisoned LangChain: Jailbreak LLMs by LangChain"

3 papers shown.

1. "Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs"
   Chetan Pathade
   Topics: AAML, SILM
   07 May 2025

2. "InsightLens: Discovering and Exploring Insights from Conversational Contexts in Large-Language-Model-Powered Data Analysis"
   Luoxuan Weng, Xingbo Wang, Junyu Lu, Yingchaojie Feng, Yihan Liu, Wei Chen
   02 Apr 2024

3. "GLM-130B: An Open Bilingual Pre-trained Model"
   Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang
   Topics: BDL, LRM
   05 Oct 2022