Predicting deliberative outcomes

Authors: Vikas Garg, Tommi Jaakkola

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we demonstrate on two real voting datasets that our games can recover interpretable strategic interactions, and predict strategies for players in new settings."
Researcher Affiliation | Academia | "CSAIL, MIT."
Pseudocode | No | The paper describes its mathematical models and dynamics in equations and prose, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statement or link providing access to source code for the methodology described.
Open Datasets | Yes | "We included all the cases from the Rehnquist Court, during the period 1994-2005 that had votes documented for all the 9 Justices." Data available at: http://scdb.wustl.edu/data.php ... "Our second dataset consists of the roll call votes of the member countries on the resolutions considered in the UN General Assembly." ... Voeten et al., 2009. URL https://hdl.handle.net/1902.1/12379.
Dataset Splits | No | The paper describes dividing the data into sets A, B, and C for training and transfer testing, but it does not specify a distinct validation set for hyperparameter tuning or early stopping in the conventional sense. Sets B and C are used to evaluate transfer performance, not for validation during training.
Hardware Specification | No | The paper gives no details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions the "RMSprop optimizer in PyTorch" but does not specify version numbers for PyTorch or any other software component, which reproducibility requires.
Experiment Setup | Yes | "We report the results with k = 5, α = 0.1, and λ = 0.1 for all our experiments, except the transferable setting where we set α = 0.01 and λ = 0. We set ν to be the identity function in (4). We trained our models in batches of size 200, with default settings of the RMSprop optimizer in PyTorch."
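
The reported optimizer and batching settings translate directly into PyTorch. The sketch below is a minimal, hypothetical reconstruction of that training configuration only: the model, loss, and data are placeholder stand-ins, since the paper's actual game-theoretic architecture and objective are not reproduced in this report.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical placeholders: the paper's strategic game model and loss are
# not specified here, so a generic model and objective stand in for them.
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# Batches of size 200, as reported in the paper (synthetic data for illustration).
data = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
loader = DataLoader(data, batch_size=200, shuffle=True)

# "Default settings of the RMSprop optimizer in PyTorch":
# lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0.
optimizer = torch.optim.RMSprop(model.parameters())

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Because the paper relies on these library defaults rather than pinned versions, an exact reproduction would still need to confirm which PyTorch release was used.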