Provably Correct Optimization and Exploration with Non-linear Policies

Authors: Fei Feng, Wotao Yin, Alekh Agarwal, Lin Yang

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate this adaptation, and show that it outperforms prior heuristics inspired by linear methods, establishing the value in correctly reasoning about the agent's uncertainty under non-linear function approximation. We conduct experiments to testify the effectiveness of ENIAC.
Researcher Affiliation | Collaboration | (1) Department of Mathematics, University of California, Los Angeles, Los Angeles, CA, USA; (2) Microsoft Research, Redmond, WA, USA; (3) Department of Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, USA.
Pseudocode | Yes | Algorithm 1: Exploratory Non-Linear Incremental Actor Critic (ENIAC); Algorithm 2: Policy Update
Open Source Code | Yes | Check our code at https://github.com/FlorenceFeng/ENIAC.
Open Datasets | Yes | We test on a continuous control task which requires exploration: continuous-control Mountain Car from OpenAI Gym (Brockman et al., 2016), https://gym.openai.com/envs/MountainCarContinuous-v0/. A minimal environment-loading sketch appears after the table.
Dataset Splits | No | The paper mentions evaluating methods over "10 random seeds" and varying "depths of networks" but does not specify any training/test/validation dataset splits (e.g., percentages or counts).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using PPO and fully-connected neural networks (FCNNs) and cites PyTorch, but it does not provide version numbers for these software components.
Experiment Setup | Yes | We evaluate all methods on varying depths of networks: 2-layer stands for (64, 64) hidden units, 4-layer for (64, 128, 128, 64), and 6-layer for (64, 64, 128, 128, 64, 64). Layers are connected with ReLU non-linearities. Hyperparameters for all methods are provided in Appendix F.
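
The Experiment Setup row lists three hidden-layer configurations for the fully-connected networks. The sketch below shows one way such ReLU networks could be built; the hidden sizes come from the paper, while the input/output dimensions, the helper name make_fcnn, and the use of PyTorch's nn.Sequential are illustrative assumptions rather than the authors' implementation.

# Hedged sketch of the fully-connected ReLU networks from the Experiment Setup row.
# Hidden-layer sizes are taken from the paper; everything else here is assumed.
import torch.nn as nn

HIDDEN_SIZES = {
    "2-layer": (64, 64),
    "4-layer": (64, 128, 128, 64),
    "6-layer": (64, 64, 128, 128, 64, 64),
}

def make_fcnn(in_dim: int, out_dim: int, depth: str) -> nn.Sequential:
    """Build a fully-connected network with ReLU between hidden layers."""
    sizes = (in_dim,) + HIDDEN_SIZES[depth] + (out_dim,)
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:  # no ReLU after the output layer
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

# Example: a 4-layer network for MountainCarContinuous-v0
# (2-dimensional observation, 1-dimensional action); both dims are assumptions here.
policy_net = make_fcnn(in_dim=2, out_dim=1, depth="4-layer")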
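
The Open Datasets row refers to the MountainCarContinuous-v0 task from OpenAI Gym. Below is a minimal, hypothetical loading sketch; it assumes the classic gym 0.x reset/step API (four return values from step) and uses a random policy purely as a placeholder, not as a description of how ENIAC collects data.

# Minimal sketch: interacting with the MountainCarContinuous-v0 environment.
# Assumes the classic gym 0.x API (reset() -> obs; step() -> obs, reward, done, info);
# newer gymnasium releases change these signatures.
import gym

env = gym.make("MountainCarContinuous-v0")
obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()  # placeholder random action
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("Return of a random policy:", episode_return)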