On Imitation in Mean-field Games

Authors: Giorgia Ramponi, Pavel Kolev, Olivier Pietquin, Niao He, Mathieu Lauriere, Matthieu Geist

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We also provide a numerical illustration empirically supporting our claims in the appendix. |
| Researcher Affiliation | Collaboration | 1 ETH AI Center, Zurich; 2 Max Planck Institute for Intelligent Systems, Tübingen, Germany; 3 Google DeepMind; 4 ETH Zurich, Department of Computer Science; 5 Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, NYU Shanghai |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes concepts and equations. |
| Open Source Code | No | The paper does not provide a link to, or an explicit statement about releasing, source code for the described methodology. |
| Open Datasets | No | The experiments use a simulated environment called "Attractor MFG" rather than a publicly available dataset, and no data access information is provided. |
| Dataset Splits | No | The paper describes a simulation setup and does not mention training/validation/test splits; it focuses on varying parameters of the simulated model. |
| Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models, memory) used to run its simulations. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers for its simulations. |
| Experiment Setup | Yes | The experiment consists of computing the errors BC_n, vanilla ADV_n, and MFC ADV_n for various values of L ∈ {0.01, 0.05, 0.1, 0.5} and H ∈ {3, 25, 50, 75, 100}, and the NIG (Nash imitation gap) is shown as a function of these errors. |
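
The Experiment Setup row describes a sweep over the Lipschitz constant L and the horizon H of the "Attractor MFG" environment, recording the imitation errors and the resulting Nash imitation gap. Below is a minimal sketch of how such a sweep could be organized; the simulator and error computations are placeholders (`run_attractor_mfg` and the `eps_*` / `nig` fields are hypothetical names), not the authors' code.

```python
import itertools

import numpy as np

# Grid of Lipschitz constants L and horizons H stated in the experiment setup.
L_VALUES = [0.01, 0.05, 0.1, 0.5]
H_VALUES = [3, 25, 50, 75, 100]


def run_attractor_mfg(L, H, rng):
    """Hypothetical stand-in for one run of the "Attractor MFG" experiment.

    The real experiment would fit the imitating policies, measure the
    behavioral-cloning (BC) and adversarial (ADV) errors, and evaluate the
    Nash imitation gap (NIG); here those quantities are random placeholders.
    """
    return {
        "eps_bc": rng.uniform(0.0, 0.1),       # BC_n error (placeholder)
        "eps_adv": rng.uniform(0.0, 0.1),      # vanilla ADV_n error (placeholder)
        "eps_adv_mfc": rng.uniform(0.0, 0.1),  # MFC ADV_n error (placeholder)
        "nig": rng.uniform(0.0, 1.0),          # measured NIG (placeholder)
    }


def main():
    rng = np.random.default_rng(0)
    results = []
    for L, H in itertools.product(L_VALUES, H_VALUES):
        record = {"L": L, "H": H, **run_attractor_mfg(L, H, rng)}
        results.append(record)

    # The paper plots the NIG against each error; here we only print the grid.
    for record in results:
        print(record)


if __name__ == "__main__":
    main()
```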