Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks

10 June 2024 · arXiv:2406.06302
Zonghao Ying, Aishan Liu, Xianglong Liu, Dacheng Tao

Papers citing "Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks"

8 / 8 papers shown
CUE-M: Contextual Understanding and Enhanced Search with Multimodal Large Language Model
Dongyoung Go, Taesun Whang, Chanhee Lee, Hwayeon Kim, Sunghoon Park, Seunghwan Ji, Dongchan Kim, Young-Bum Kim
LRM · 72 · 1 · 0 · 19 Nov 2024
JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao
AAML · 58 · 17 · 0 · 03 Apr 2024
Does Few-shot Learning Suffer from Backdoor Attacks?
Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
58 · 18 · 0 · 31 Dec 2023
Pre-trained Trojan Attacks for Visual Recognition
Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, Dacheng Tao
AAML · 61 · 25 · 0 · 23 Dec 2023
FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
MLLM · 127 · 116 · 0 · 09 Nov 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 301 · 11,730 · 0 · 04 Mar 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro · LRM · AI4CE · ReLM · 315 · 8,261 · 0 · 28 Jan 2022
Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu
AAML · 133 · 191 · 0 · 01 Mar 2021