Bayesian Exploration Networks
Authors: Mattie Fellows, Brandon Gary Kaplowitz, Christian Schroeder De Witt, Shimon Whiteson
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate that BEN can learn true Bayes-optimal policies in tasks where existing model-free approaches fail. |
| Researcher Affiliation | Academia | 1Department of Engineering Science, University of Oxford, Oxford, United Kingdom 2Department of Economics, New York University, New York, United States of America 3Department of Computer Science, University of Oxford, Oxford, United Kingdom. |
| Pseudocode | Yes | Algorithm 1 APPROXBRL(PΦ, M(ϕ)) |
| Open Source Code | No | The paper does not contain an explicit statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper introduces a novel search and rescue gridworld MDP and evaluates on this custom environment and the Tiger Problem. It does not provide access information (link, DOI, repository, or citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper describes experimental settings like 'episodic' and 'zero-shot' but does not specify explicit training, validation, and test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper states 'The experiments were made possible by a generous equipment grant from NVIDIA' but does not provide specific hardware models (e.g., GPU/CPU models, memory details) used for running experiments. |
| Software Dependencies | No | The paper mentions using ADAM for stochastic gradient descent and refers to neural network components (e.g., ReLU activations, gated recurrent unit) but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We vary the number of steps for the MSBBE minimisation with a learning rate of 0.02 using ADAM for the stochastic gradient descent. For the Q-function approximator, we use a fully connected linear layer with ReLU activations, a gated recurrent unit and a final fully connected linear layer with ReLU activations. All hidden dimensions are 32. The dimension of ĥ0 is 2. |
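
The experiment-setup cell describes the Q-function approximator concretely enough to sketch. Below is a minimal, hedged PyTorch reconstruction: linear layer + ReLU, a GRU, a final linear layer + ReLU, hidden dimensions of 32, a learnable initial hidden state ĥ0 of dimension 2, and ADAM with learning rate 0.02. The class name, `obs_dim`, `n_actions`, and the projection of ĥ0 up to the GRU hidden size are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Hedged sketch of the Q-function approximator described in the paper's
    experiment setup. Names and the h0 projection are assumptions."""

    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 32, h0_dim: int = 2):
        super().__init__()
        # Fully connected linear layer with ReLU activations.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # Gated recurrent unit over the observation sequence.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Final fully connected linear layer with ReLU activations.
        self.head = nn.Sequential(nn.Linear(hidden_dim, n_actions), nn.ReLU())
        # Learnable initial hidden state; the paper states dim(h0_hat) = 2.
        # Projecting it to the GRU hidden size is our assumption.
        self.h0_hat = nn.Parameter(torch.zeros(h0_dim))
        self.h0_proj = nn.Linear(h0_dim, hidden_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim) -> Q-values: (batch, time, n_actions)
        batch = obs_seq.shape[0]
        h0 = self.h0_proj(self.h0_hat).expand(1, batch, -1).contiguous()
        z, _ = self.gru(self.encoder(obs_seq), h0)
        return self.head(z)

net = QNetwork(obs_dim=4, n_actions=3)
# Learning rate 0.02 with ADAM, as stated in the paper.
opt = torch.optim.Adam(net.parameters(), lr=0.02)
q = net(torch.randn(5, 7, 4))
```

A forward pass over a batch of 5 sequences of length 7 yields Q-values of shape `(5, 7, 3)`; the recurrent unit lets the network condition on the interaction history, consistent with the paper's zero-shot and episodic settings.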