Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training
Shunsuke Kitada, Hitoshi Iyatomi
18 April 2021 · arXiv:2104.08763 · AAML
Papers citing "Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training" (4 papers shown)
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML · 25 Jan 2024
Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives
Shunsuke Kitada
FaML · HAI · AI4CE · 24 Mar 2023
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training
Pervaiz Iqbal Khan, Shoaib Ahmed Siddiqui, Imran Razzak, Andreas Dengel, Sheraz Ahmed
03 Mar 2022
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016