Active Exploration via Experiment Design in Markov Chains
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
arXiv:2206.14332, 29 June 2022
Mojmír Mutný, Tadeusz Janik, Andreas Krause

Papers citing "Active Exploration via Experiment Design in Markov Chains" (12 papers)
Provable Maximum Entropy Manifold Exploration via Diffusion Models
Riccardo De Santi, Marin Vlastelica, Ya-Ping Hsieh, Zebang Shen, Niao He, Andreas Krause
18 Jun 2025
The Catechol Benchmark: Time-series Solvent Selection Data for Few-shot Machine Learning
Toby Boyne, Juan S. Campos, Becky D Langdon, Jixiang Qing, Yilin Xie, ..., Kim E. Jelfs, Sarah Boyall, Thomas M. Dixon, Linden Schrecker, Jose Pablo Folch
09 Jun 2025
Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction
Ric De Santi, Federico Arangath Joseph, Noah Liniger, Mirco Mutti, Andreas Krause
18 Jul 2024
Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods
Ric De Santi, Manish Prajapat, Andreas Krause
13 Jul 2024
Transition Constrained Bayesian Optimization via Markov Decision Processes
Jose Pablo Folch, Calvin Tsay, Robert M. Lee, B. Shafei, Weronika Ormaniec, Andreas Krause, Mark van der Wilk, Ruth Misener, Mojmír Mutný
13 Feb 2024
Practical Path-based Bayesian Optimization
Jose Pablo Folch, J. Odgers, Shiqiang Zhang, Robert M. Lee, B. Shafei, David Walz, Calvin Tsay, Mark van der Wilk, Ruth Misener
01 Dec 2023
Submodular Reinforcement Learning
International Conference on Learning Representations (ICLR), 2023
Manish Prajapat, Mojmír Mutný, Melanie Zeilinger, Andreas Krause
25 Jul 2023
Optimistic Active Exploration of Dynamical Systems
Neural Information Processing Systems (NeurIPS), 2023
Bhavya Sukhija, Lenart Treven, Cansu Sancaktar, Sebastian Blaes, Stelian Coros, Andreas Krause
21 Jun 2023
Cancellation-Free Regret Bounds for Lagrangian Approaches in Constrained Markov Decision Processes
A. Müller, Pragnya Alatur, Giorgia Ramponi, Niao He
12 Jun 2023
Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
International Conference on Machine Learning (ICML), 2023
Anas Barakat, Ilyas Fatkhullin, Niao He
02 Jun 2023
Instance-Dependent Near-Optimal Policy Identification in Linear MDPs via Online Experiment Design
Neural Information Processing Systems (NeurIPS), 2022
Andrew Wagenmaker, Kevin Jamieson
06 Jul 2022
SnAKe: Bayesian Optimization with Pathwise Exploration
Jose Pablo Folch, Shiqiang Zhang, Robert M. Lee, B. Shafei, David Walz, Calvin Tsay, Mark van der Wilk, Ruth Misener
31 Jan 2022