
The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability
arXiv:2404.02949

3 April 2024
Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, A. Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell

Papers citing "The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability"

4 papers shown:

  • Zoom-shot: Fast and Efficient Unsupervised Zero-Shot Transfer of CLIP to Vision Encoders with Multimodal Loss
    Jordan Shipard, Arnold Wiliem, Kien Nguyen Thanh, Wei Xiang, Clinton Fookes
    Topics: VLM, CLIP · 38 · 2 · 0 · 22 Jan 2024
  • Natural Language Descriptions of Deep Visual Features
    Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
    Topics: MILM · 201 · 117 · 0 · 26 Jan 2022
  • Robust Feature-Level Adversaries are Interpretability Tools
    Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
    Topics: AAML · 45 · 27 · 0 · 07 Oct 2021
  • Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim
    Topics: XAI, FaML · 254 · 3,684 · 0 · 28 Feb 2017