AR-DAE: Towards Unbiased Neural Entropy Gradient Estimation

Authors: Jae Hyun Lim, Aaron Courville, Christopher Pal, Chin-Wei Huang

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct theoretical and experimental analyses on the approximation error of the proposed method, as well as extensive studies on heuristics to ensure its robustness. Finally, using the proposed gradient approximator to estimate the gradient of entropy, we demonstrate state-of-the-art performance on density estimation with variational autoencoders and continuous control with soft actor-critic."
Researcher Affiliation | Academia | "¹Mila, ²Université de Montréal, ³CIFAR fellow, ⁴Canada CIFAR AI Chair, ⁵Polytechnique Montréal"
Pseudocode | No | The paper references 'Algorithm 1' in Appendix E, but the algorithm block itself does not appear in the extracted text; a hedged sketch of the training objective, reconstructed from the paper's description, follows this table.
Open Source Code | No | The paper provides neither a repository link nor an explicit statement that the source code for the described method has been publicly released.
Open Datasets | Yes | MNIST: "We first demonstrate the robustness of our method on different choices of architectures for VAE: (...)"; OpenAI Gym and Rllab: "We run our experiments on six continuous control environments from the Open AI gym benchmark suite (Brockman et al., 2016) and Rllab (Duan et al., 2016)."
Dataset Splits | No | The paper uses datasets with conventional splits (MNIST, OpenAI Gym environments) but does not state explicit train/validation/test percentages or sample counts in the extracted text; such details are deferred to appendices that were not extracted.
Hardware Specification | No | The paper reports no hardware details (GPU/CPU models, processor types, or memory amounts) for its experiments.
Software Dependencies | No | PyTorch appears only in the references ('Automatic differentiation in PyTorch', 2017); no version numbers are given for PyTorch or any other software libraries used in the experiments.
Experiment Setup | No | The paper defers experimental details to appendices (e.g., 'The experimental details can be found in Appendix E.') that are not part of the extracted text, so specific hyperparameter values and training configurations are unavailable.
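
Since Algorithm 1 is absent from the extracted text, the sketch below reconstructs, under stated assumptions, what the two core pieces of the method look like: a score network f_phi trained with the residual denoising objective E||u + σ·f_φ(x + σu)||² (so that f_φ approaches ∇_x log q(x) as σ → 0), and a surrogate loss that turns the learned score into an entropy gradient through reparametrized samples. All identifiers (ScoreNet, ar_dae_loss, entropy_surrogate, delta) are hypothetical placeholders, not the authors' released code.

```python
# Hedged reconstruction of the AR-DAE objective and its use for entropy
# gradients, based on the paper's abstract and standard DAE score matching.
# Names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class ScoreNet(nn.Module):
    """Small MLP standing in for the residual function f_phi(x), trained to
    approximate the score grad_x log q(x)."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)


def ar_dae_loss(f, x, delta=0.1):
    """AR-DAE-style objective: E_{u, sigma} || u + sigma * f(x + sigma*u) ||^2,
    with u ~ N(0, I) and a zero-centered prior sigma ~ N(0, delta^2).
    As sigma concentrates near zero, the minimizer approaches the score."""
    u = torch.randn_like(x)
    sigma = delta * torch.randn(x.size(0), 1, device=x.device)
    residual = u + sigma * f(x + sigma * u)
    return residual.pow(2).sum(dim=1).mean()


def entropy_surrogate(f, x):
    """Surrogate whose autograd gradient w.r.t. the sampler parameters equals
    -grad_theta H(q_theta), given a reparametrized sample x = g_theta(eps).
    Descending this surrogate therefore *increases* the entropy."""
    with torch.no_grad():
        score = f(x)  # treated as a fixed estimate of grad_x log q(x)
    return (score * x).sum(dim=1).mean()
```

In practice the two steps would presumably be alternated, as in the VAE and soft actor-critic applications the abstract describes: fit f on fresh samples from the current q_θ via ar_dae_loss, then take one entropy-ascent step on the sampler parameters via entropy_surrogate.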