ResearchTrend.AI
ASTERYX: A model-Agnostic SaT-basEd appRoach for sYmbolic and score-based eXplanations

23 June 2022
Ryma Boumazouza, Fahima Cheikh, Bertrand Mazure, Karim Tabia

Papers citing "ASTERYX: A model-Agnostic SaT-basEd appRoach for sYmbolic and score-based eXplanations"

24 papers shown.
  1. Axiomatic Characterisations of Sample-based Explainers
     Leila Amgoud, Martin Cooper, Salim Debbaoui | FAtt | 09 Aug 2024
  2. Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation
     Guy Amir, Shahaf Bassan, Guy Katz | 07 Aug 2024
  3. Automated Explanation Selection for Scientific Discovery
     Markus Iser | LRM | 24 Jul 2024
  4. Local vs. Global Interpretability: A Computational Complexity Perspective
     Shahaf Bassan, Guy Amir, Guy Katz | 05 Jun 2024
  5. Logic-Based Explainability: Past, Present & Future
     Joao Marques-Silva | 04 Jun 2024
  6. On Formal Feature Attribution and Its Approximation
     Jinqiang Yu, Alexey Ignatiev, Peter Stuckey | 07 Jul 2023
  7. On Logic-Based Explainability with Partially Specified Inputs
     Ramón Béjar, António Morgado, Jordi Planes, Sasha Rubin | 27 Jun 2023
  8. Delivering Inflated Explanations
     Yacine Izza, Alexey Ignatiev, Peter Stuckey, Sasha Rubin | XAI | 27 Jun 2023
  9. Disproving XAI Myths with Formal Methods -- Initial Results
     Sasha Rubin | 13 May 2023
  10. A New Class of Explanations for Classifiers with Non-Binary Features
      Chunxi Ji, Adnan Darwiche | FAtt | 28 Apr 2023
  11. Finding Minimum-Cost Explanations for Predictions made by Tree Ensembles
      John Törnblom, Emil Karlsson, Simin Nadjm-Tehrani | FAtt | 16 Mar 2023
  12. The Inadequacy of Shapley Values for Explainability
      Xuanxiang Huang, Sasha Rubin | FAtt | 16 Feb 2023
  13. HardSATGEN: Understanding the Difficulty of Hard SAT Formula Generation and A Strong Structure-Hardness-Aware Baseline
      Yongqian Li, Xinyan Chen, Wenxuan Guo, Xijun Li, Wanqian Luo, Jun Huang, Hui-Ling Zhen, Mingxuan Yuan, Junchi Yan | 04 Feb 2023
  14. On Computing Probabilistic Abductive Explanations
      Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin | FAtt, XAI | 12 Dec 2022
  15. VeriX: Towards Verified Explainability of Deep Neural Networks
      Min Wu, Haoze Wu, Clark W. Barrett | AAML | 02 Dec 2022
  16. Feature Necessity & Relevancy in ML Classifier Explanations
      Xuanxiang Huang, Martin C. Cooper, António Morgado, Jordi Planes, Sasha Rubin | FAtt | 27 Oct 2022
  17. Logic-Based Explainability in Machine Learning
      Sasha Rubin | LRM, XAI | 24 Oct 2022
  18. On Computing Relevant Features for Explaining NBCs
      Yacine Izza, Sasha Rubin | 11 Jul 2022
  19. Eliminating The Impossible, Whatever Remains Must Be True
      Jinqiang Yu, Alexey Ignatiev, Peter Stuckey, Nina Narodytska, Sasha Rubin | 20 Jun 2022
  20. On Tackling Explanation Redundancy in Decision Trees
      Yacine Izza, Alexey Ignatiev, Sasha Rubin | FAtt | 20 May 2022
  21. Provably Precise, Succinct and Efficient Explanations for Decision Trees
      Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin | FAtt | 19 May 2022
  22. Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
      Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre | AAML | 15 Feb 2022
  23. On Deciding Feature Membership in Explanations of SDD & Related Classifiers
      Xuanxiang Huang, Sasha Rubin | FAtt, LRM | 15 Feb 2022
  24. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
      Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 16 Feb 2016