Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning in Non-Cooperative Configurable Markov Decision Processes
Authors: Giorgia Ramponi, Alberto Maria Metelli, Alessandro Concetti, Marcello Restelli
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we empirically validate the performance of our algorithm in simulated domains. ... In this section, we provide the experimental evaluation of our algorithms in two different settings: when the policies are stochastic and when the policies are deterministic. |
| Researcher Affiliation | Academia | Giorgia Ramponi ETH AI Center Zurich, Switzerland EMAIL Alberto Maria Metelli Politecnico di Milano Milan, Italy EMAIL Alessandro Concetti Politecnico di Milano Milan, Italy EMAIL Marcello Restelli Politecnico di Milano Milan, Italy EMAIL |
| Pseudocode | Yes | Algorithm 1: Action-feedback Optimistic Configuration Learning (AfOCL) and Algorithm 2: Reward-feedback Optimistic Configuration Learning (RfOCL) |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplementary material. |
| Open Datasets | No | The paper introduces “two novel environments: Configurable Gridworld and the Student-Teacher.” These are described as simulation environments designed for the experiments, not external datasets with specific public access information provided in the paper. |
| Dataset Splits | No | The paper describes experiments in simulated environments and evaluates performance based on cumulative regret over episodes. It does not mention explicit training, validation, or test dataset splits. |
| Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A] |
| Software Dependencies | No | The paper describes algorithms and concepts (e.g., UCB1 algorithm, value iteration) but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific solvers). |
| Experiment Setup | Yes | In the first experiment (Figure 1), we considered 10 and 30 configurations with a number of episodes K = 2000 and K = 4000 and horizon H = 10. For this experiment, the agent plays optimal stochastic policies. ... and the results with M ∈ {40, 60, 100} and horizon H = 10 are shown. 50 runs, 98% c.i. |
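The reported setup averages cumulative regret over 50 independent runs with a 98% confidence interval. A minimal sketch of that aggregation step, using a normal-approximation interval and a placeholder `simulate_run` function (the real per-episode regret values come from the paper's AfOCL/RfOCL experiments, not from this stub):

```python
import random
import statistics

random.seed(0)

RUNS = 50        # independent runs, as reported in the paper
EPISODES = 2000  # K = 2000 episodes (first experiment)
Z_98 = 2.326     # two-sided 98% normal quantile

def simulate_run():
    """Hypothetical stand-in: one run's per-episode regret values."""
    return [random.random() / (t + 1) for t in range(EPISODES)]

# Cumulative regret at the final episode, one value per run.
final_regrets = [sum(simulate_run()) for _ in range(RUNS)]

mean = statistics.mean(final_regrets)
sem = statistics.stdev(final_regrets) / RUNS ** 0.5  # standard error of the mean
ci = (mean - Z_98 * sem, mean + Z_98 * sem)          # 98% confidence interval
print(f"mean cumulative regret: {mean:.3f}, "
      f"98% c.i.: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The curves in the paper's figures plot this mean (with the interval as a shaded band) at every episode, not just the final one; the same aggregation applies pointwise.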