Scalable Coordinated Exploration in Concurrent Reinforcement Learning

Authors: Maria Dimakopoulou, Ian Osband, Benjamin Van Roy

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present computational results that demonstrate the robustness and effectiveness of the approach we suggest in Section 3.
Researcher Affiliation | Collaboration | Maria Dimakopoulou, Stanford University (madima@stanford.edu); Ian Osband, Google DeepMind (iosband@google.com); Benjamin Van Roy, Stanford University (bvr@stanford.edu)
Pseudocode | No | The paper describes algorithms using text and mathematical formulations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link to a demo video (https://youtu.be/kwvhfzbzb0o) but does not contain an explicit statement about the release of source code for the methodology described in the paper, nor a direct link to a code repository.
Open Datasets | No | The paper describes the environments used (the cartpole problem, the bipolar chain, parallel chains, and the DeepMind Control Suite) but does not provide concrete access information (a specific link, DOI, repository name, or formal citation with authors/year) for publicly available or open datasets.
Dataset Splits | No | The paper describes the experimental setup and agent interactions but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components such as the DeepMind Control Suite and the Adam optimizer but does not provide specific version numbers for these or other ancillary software dependencies.
Experiment Setup | Yes | We pass the neural network six features: cos(φ_t), sin(φ_t), φ̇_t/10, x_t, ẋ_t/10, 1{|x_t| < 0.1}. Let f_θ : S → R^A be a (50, 50)-MLP with rectified linear units and a linear skip connection. We initialize each Q_e(s, a | θ_e) = f_{θ_e}(s)[a] + 3·f_{θ_e^0}(s)[a] for θ_e, θ_e^0 sampled from Glorot initialization [2]. After each action, for each agent we sample a minibatch of 16 transitions uniformly from the shared replay buffer and take gradient steps with respect to θ_e using the Adam optimizer with learning rate 10^-3 [8]. We sample noise z_{e,j} ~ N(0, 0.01) to be used in the shared replay buffer.
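To make the quoted setup concrete, here is a minimal PyTorch sketch (not the authors' code) of a Q-network built from a (50, 50)-MLP with ReLU units, a linear skip connection, Glorot initialization, an additive prior network scaled by 3, and one Adam update (learning rate 1e-3) on a minibatch of 16 transitions from a shared replay buffer. All class and function names, the discount factor, the absence of a separate target network, and the way the per-agent noise z ~ N(0, 0.01) enters the target are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SkipMLP(nn.Module):
    """(50, 50)-MLP with rectified linear units plus a linear skip connection."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(state_dim, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, num_actions),
        )
        self.skip = nn.Linear(state_dim, num_actions)  # linear skip connection
        for m in self.modules():  # Glorot (Xavier) initialization of all linear layers
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.hidden(s) + self.skip(s)


class QWithPrior(nn.Module):
    """Q_e(s, . | θ_e) = f_{θ_e}(s) + 3 * f_{θ_e^0}(s), with θ_e^0 held fixed."""
    def __init__(self, state_dim: int, num_actions: int, prior_scale: float = 3.0):
        super().__init__()
        self.trainable = SkipMLP(state_dim, num_actions)
        self.prior = SkipMLP(state_dim, num_actions)
        for p in self.prior.parameters():  # the prior network is never updated
            p.requires_grad_(False)
        self.prior_scale = prior_scale

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.trainable(s) + self.prior_scale * self.prior(s)


def make_optimizer(q: QWithPrior) -> torch.optim.Adam:
    # Adam on the trainable parameters only, learning rate 1e-3.
    return torch.optim.Adam(q.trainable.parameters(), lr=1e-3)


def train_step(q: QWithPrior, opt: torch.optim.Adam, batch, gamma: float = 0.99):
    """One gradient step on a minibatch of 16 transitions from the shared buffer.

    `batch` is a tuple of tensors (s, a, r, s_next, z), where z is this agent's
    noise sample for each transition, drawn once as z ~ N(0, 0.01) and stored
    alongside the shared replay buffer. Adding z to the reward in the bootstrap
    target is an assumption made here for illustration.
    """
    s, a, r, s_next, z = batch
    with torch.no_grad():
        target = r + z + gamma * q(s_next).max(dim=1).values
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = ((q_sa - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch the fixed prior network scaled by 3 plays the role of the 3·f_{θ_e^0}(s)[a] term in the quoted initialization: because it is drawn independently per agent and never trained, it keeps each agent's value estimates persistently diverse, which is the effect the paper's setup relies on for coordinated exploration.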