ResearchTrend.AI

arXiv:1802.04822
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models

13 February 2018
Mengying Sun, Fengyi Tang, Jinfeng Yi, Fei Wang, Jiayu Zhou
AAML, OOD, MedIm

Papers citing "Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models"

9 / 9 papers shown
Surfacing Biases in Large Language Models using Contrastive Input Decoding
G. Yona, Or Honovich, Itay Laish, Roee Aharoni
12 May 2023

Rethinking Textual Adversarial Defense for Pre-trained Language Models
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
AAML, SILM
21 Jul 2022

MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare
Muchao Ye, Junyu Luo, Guanjie Zheng, Cao Xiao, Ting Wang, Fenglong Ma
AAML
11 Dec 2021

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
OOD
05 Dec 2021

Evaluating the Robustness of Neural Language Models to Input Perturbations
M. Moradi, Matthias Samwald
AAML
27 Aug 2021

Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks
Byunggill Joe, Akshay Mehra, I. Shin, Jihun Hamm
15 Jun 2021

Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey
W. Zhang, Quan Z. Sheng, A. Alhazmi, Chenliang Li
AAML
21 Jan 2019

Query-Efficient Black-Box Attack by Active Learning
Pengcheng Li, Jinfeng Yi, Lijun Zhang
AAML, MLAU
13 Sep 2018

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D. Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao
VLM
05 Aug 2018