Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training

18 April 2021
Shunsuke Kitada, Hitoshi Iyatomi
Topic: AAML

Papers citing "Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training"

4 / 4 papers shown
1. Black-Box Access is Insufficient for Rigorous AI Audits
   Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
   Topic: AAML
   25 Jan 2024

2. Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives
   Shunsuke Kitada
   Topics: FaML, HAI, AI4CE
   24 Mar 2023

3. Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training
   Pervaiz Iqbal Khan, Shoaib Ahmed Siddiqui, Imran Razzak, Andreas Dengel, Sheraz Ahmed
   03 Mar 2022

4. A Decomposable Attention Model for Natural Language Inference
   Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
   06 Jun 2016