Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models

16 June 2016
Viktoriya Krakovna, Finale Doshi-Velez
AI4CE
arXiv (abs) · PDF · HTML

Papers citing "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models"

31 / 31 papers shown
Title
A Review of Multimodal Explainable Artificial Intelligence: Past,
  Present and Future
A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future
Shilin Sun
Wenbin An
Feng Tian
Fang Nan
Qidong Liu
Jing Liu
N. Shah
Ping Chen
208
9
0
18 Dec 2024
Generative learning for nonlinear dynamics
Generative learning for nonlinear dynamics
William Gilpin
AI4CEPINN
157
31
0
07 Nov 2023
AI for Investment: A Platform Disruption
AI for Investment: A Platform Disruption
Mohammad Rasouli
Ravi Chiruvolu
Ali Risheh
64
4
0
06 Sep 2023
Hybrid hidden Markov LSTM for short-term traffic flow prediction
Hybrid hidden Markov LSTM for short-term traffic flow prediction
Agnimitra Sengupta
A. Das
S. I. Guler
BDLAI4TS
69
3
0
11 Jul 2023
Weighted Automata Extraction and Explanation of Recurrent Neural
  Networks for Natural Language Tasks
Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks
Zeming Wei
Xiyue Zhang
Yihao Zhang
Meng Sun
99
12
0
24 Jun 2023
BTPK-based interpretable method for NER tasks based on Talmudic Public
  Announcement Logic
BTPK-based interpretable method for NER tasks based on Talmudic Public Announcement Logic
Yulin Chen
Beishui Liao
Bruno Bentzen
Bo Yuan
Zelai Yao
Haixiao Chi
D. Gabbay
83
1
0
24 Jan 2022
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment
  Analysis
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis
Xingbo Wang
Jianben He
Zhihua Jin
Muqiao Yang
Yong Wang
Huamin Qu
122
83
0
17 Jul 2021
Absolute Value Constraint: The Reason for Invalid Performance Evaluation
  Results of Neural Network Models for Stock Price Prediction
Absolute Value Constraint: The Reason for Invalid Performance Evaluation Results of Neural Network Models for Stock Price Prediction
Yi Wei
126
1
0
10 Jan 2021
MEME: Generating RNN Model Explanations via Model Extraction
MEME: Generating RNN Model Explanations via Model Extraction
Dmitry Kazhdan
B. Dimanov
M. Jamnik
Pietro Lio
LRM
89
13
0
13 Dec 2020
Uncertainty Estimation and Calibration with Finite-State Probabilistic
  RNNs
Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs
Cheng Wang
Carolin (Haas) Lawrence
Mathias Niepert
UQCV
74
10
0
24 Nov 2020
Scaling Hidden Markov Language Models
Scaling Hidden Markov Language Models
Justin T. Chiu
Alexander M. Rush
BDL
147
25
0
09 Nov 2020
Towards Ground Truth Explainability on Tabular Data
Towards Ground Truth Explainability on Tabular Data
Brian Barr
Ke Xu
Claudio Silva
E. Bertini
Robert Reilly
C. Bayan Bruss
J. Wittenbach
130
7
0
20 Jul 2020
AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph
  modularity
AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity
S. Udrescu
A. Tan
Jiahai Feng
Orisvaldo Neto
Tailin Wu
Max Tegmark
180
209
0
18 Jun 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAMLXAI
180
396
0
30 Apr 2020
Sequential Interpretability: Methods, Applications, and Future Direction
  for Understanding Deep Learning Models in the Context of Sequential Data
Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel
Parisa Rashidi
AI4TS
99
18
0
27 Apr 2020
Intelligence, physics and information -- the tradeoff between accuracy
  and simplicity in machine learning
Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning
Tailin Wu
148
1
0
11 Jan 2020
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies,
  Opportunities and Challenges toward Responsible AI
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
543
6,721
0
22 Oct 2019
Powering Hidden Markov Model by Neural Network based Generative Models
Powering Hidden Markov Model by Neural Network based Generative Models
Dong Liu
Antoine Honoré
Saikat Chatterjee
L. Rasmussen
BDL
172
15
0
13 Oct 2019
Scalable Explanation of Inferences on Large Graphs
Scalable Explanation of Inferences on Large Graphs
Chao Chen
Yuhang Liu
Xi Zhang
Sihong Xie
75
6
0
13 Aug 2019
Self-Attentive Hawkes Processes
Self-Attentive Hawkes Processes
Qiang Zhang
Aldo Lipani
Ömer Kirnap
Emine Yilmaz
AI4TS
143
46
0
17 Jul 2019
Improving the Performance of the LSTM and HMM Model via Hybridization
Improving the Performance of the LSTM and HMM Model via Hybridization
Larkin Liu
Yu-Chung Lin
Joshua Reid
154
9
0
09 Jul 2019
Representing Formal Languages: A Comparison Between Finite Automata and
  Recurrent Neural Networks
Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks
Joshua J. Michalenko
Ameesh Shah
Abhinav Verma
Richard G. Baraniuk
Swarat Chaudhuri
Ankit B. Patel
AI4CE
136
22
0
27 Feb 2019
An Evaluation of the Human-Interpretability of Explanation
An Evaluation of the Human-Interpretability of Explanation
Isaac Lage
Emily Chen
Jeffrey He
Menaka Narayanan
Been Kim
Sam Gershman
Finale Doshi-Velez
FAttXAI
195
164
0
31 Jan 2019
Evaluating the Ability of LSTMs to Learn Context-Free Grammars
Evaluating the Ability of LSTMs to Learn Context-Free Grammars
Luzi Sennhauser
Robert C. Berwick
111
57
0
06 Nov 2018
Using Machine Learning Safely in Automotive Software: An Assessment and
  Adaption of Software Process Requirements in ISO 26262
Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
Rick Salay
Krzysztof Czarnecki
114
71
0
05 Aug 2018
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic
  Corrections
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang
Armando Solar-Lezama
Rishabh Singh
FAtt
134
63
0
21 Feb 2018
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato
Brendan O'Donoghue
Aaron van den Oord
Pushmeet Kohli
AAML
348
618
0
15 Feb 2018
Understanding Recurrent Neural State Using Memory Signatures
Understanding Recurrent Neural State Using Memory Signatures
Skanda Koppula
K. Sim
K. K. Chin
127
2
0
11 Feb 2018
Fibres of Failure: Classifying errors in predictive processes
Fibres of Failure: Classifying errors in predictive processes
L. Carlsson
Gunnar Carlsson
Mikael Vejdemo-Johansson
AI4CE
135
4
0
09 Feb 2018
How do Humans Understand Explanations from Machine Learning Systems? An
  Evaluation of the Human-Interpretability of Explanation
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan
Emily Chen
Jeffrey He
Been Kim
S. Gershman
Finale Doshi-Velez
FAttXAI
128
246
0
02 Feb 2018
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery
Scott Wisdom
Thomas Powers
J. Pitton
L. Atlas
113
36
0
22 Nov 2016
1