Efficient Inverse Multiagent Learning

Authors: Denizalp Goktas, Amy Greenwald, Sadie Zhao, Alec Koppel, Sumitra Ganesh

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run two sets of experiments with the aim of answering two questions. Our first goal is to understand the extent to which our algorithms are able to compute inverse Nash equilibria, if any, beyond our theoretical guarantees. Our second goal is to understand the ability of game-theoretic models to make predictions about the future.
Researcher Affiliation | Collaboration | Denizalp Goktas & Amy Greenwald (Brown University, Computer Science; denizalp_goktas@brown.edu); Sadie Zhao (Harvard University, Computer Science); Alec Koppel & Sumitra Ganesh (JP Morgan Chase & Co.)
Pseudocode | Yes | Algorithm 1: Adversarial Inverse Multiagent Planning
Open Source Code | Yes | Our code can be found here.
Open Datasets | Yes | Using publicly available hourly Spanish electricity prices and aggregate demand data from Kaggle, we compute a simulacrum of the game that seeks to replicate these observations from January 2015 to December 2016.
Dataset Splits | Yes | Using publicly available hourly Spanish electricity prices and aggregate demand data from Kaggle, we compute a simulacrum of the game that seeks to replicate these observations from January 2015 to December 2016. We also train an ARIMA model on the same data, and run a hyperparameter search for both algorithms using data from January 2017 to December 2018. After picking hyperparameters, we then retrain both models on the data between January 2015 and December 2018, and predict prices up to December 2018. We also compute the mean squared error (MSE) of both methods using January 2018 to December 2020 as a test set. (A date-based split of this kind is sketched in code after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using Jax (Bradbury et al., 2018) but does not specify a version number for this or any other software dependency.
Experiment Setup | No | The paper mentions that a hyperparameter search was run for the algorithms but does not provide the specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size, epochs) in the main text.
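
To make the quoted dataset-splits protocol concrete, below is a minimal Python sketch of a date-based split and ARIMA baseline evaluation on hourly price data. The file name energy_dataset.csv, the column names time and price actual, and the ARIMA order (2, 1, 2) are illustrative assumptions rather than details from the paper; the paper's game-theoretic model and its tuned hyperparameters are not reproduced here.

```python
# Minimal sketch of the date-based split and MSE evaluation quoted above.
# Assumptions (not from the paper): the Kaggle file "energy_dataset.csv",
# the columns "time" and "price actual", and the ARIMA order (2, 1, 2).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load hourly Spanish electricity prices and put them on a regular hourly index.
df = pd.read_csv("energy_dataset.csv", parse_dates=["time"], index_col="time")
prices = df["price actual"].asfreq("h").interpolate()

# Windows quoted in the paper's description: fit on 2015-2016, tune
# hyperparameters on 2017-2018, retrain on 2015-2018, evaluate on the test window.
fit_window  = prices["2015-01":"2016-12"]   # initial fit (shown only to mirror the protocol)
tune_window = prices["2017-01":"2018-12"]   # hyperparameter search (omitted in this sketch)
train_full  = prices["2015-01":"2018-12"]   # retraining data after hyperparameters are picked
test_window = prices["2018-01":"2020-12"]   # held-out window used for MSE

# ARIMA baseline; the order would normally come from the search on tune_window.
result = ARIMA(train_full, order=(2, 1, 2)).fit()
pred = result.predict(start=test_window.index[0], end=test_window.index[-1])
mse = ((test_window - pred) ** 2).mean()
print(f"ARIMA test MSE: {mse:.4f}")
```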