Learning Tree Structured Potential Games
Authors: Vikas Garg, Tommi Jaakkola
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now describe the results of our experiments on both synthetic and real data to demonstrate the efficacy of our algorithm. |
| Researcher Affiliation | Academia | Vikas K. Garg, CSAIL, MIT (vgarg@csail.mit.edu); Tommi Jaakkola, CSAIL, MIT (tommi@csail.mit.edu) |
| Pseudocode | Yes | Algorithm 1 Learning tree structured potential games |
| Open Source Code | No | The paper does not provide any statement about releasing the source code for its methodology or a link to a code repository. |
| Open Datasets | Yes | Publicly available at http://scdb.wustl.edu/. Publicly available at http://www.senate.gov/. |
| Dataset Splits | No | The paper uses a training set but does not specify any explicit train/validation/test dataset splits, percentages, or cross-validation methodology. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers with their versions) used in the experiments. |
| Experiment Setup | Yes | We report below the results of our experiments with the following setting of parameters: ρ = 1, β_t = 0.005 (for all t), C = 10, ϵ = 0.1, and Max Iter = 100. For each local optimization problem, the configurations were constrained to share the slack variable in order to reduce the total number of optimization variables. Moreover, we used a scaled 0-1 loss [15], e(y, y_m) = 1{y ≠ y_m}/n, for each local optimization. We set h = 1 for the approximate method. (See the hedged sketches after this table.) |
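
The shared-slack construction mentioned in the experiment setup is easiest to see in a generic margin-scaled max-margin form. The display below uses our own illustrative notation (f_θ for the learned score and 𝒴_m for the candidate configurations of the m-th local problem) and is not the paper's exact objective; the point is only that all configurations y of a local problem share the single slack variable ξ, which is what reduces the number of optimization variables:

$$
\min_{\theta,\; \xi \ge 0} \;\; \frac{1}{2}\lVert \theta \rVert^2 + C\,\xi
\qquad \text{s.t.} \qquad
f_\theta(y_m) \;\ge\; f_\theta(y) + e(y, y_m) - \xi
\quad \forall\, y \in \mathcal{Y}_m,
$$

where e(y, y_m) = 1{y ≠ y_m}/n is the scaled 0-1 loss quoted in the table.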
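
The reported hyperparameters and the scaled 0-1 loss also translate directly into code. The following is a minimal sketch, not the authors' implementation: the names `EXPERIMENT_CONFIG` and `scaled_01_loss` are hypothetical, and only the numeric values and the loss definition come from the paper's quoted setup.

```python
import numpy as np

# Hyperparameters exactly as reported in the paper's experiment setup.
# (Dictionary layout and key names are our own.)
EXPERIMENT_CONFIG = {
    "rho": 1.0,        # rho
    "beta_t": 0.005,   # step size beta_t, held constant for all t
    "C": 10.0,         # slack penalty in the max-margin objective
    "epsilon": 0.1,    # epsilon
    "max_iter": 100,   # Max Iter
    "h": 1,            # h, used by the approximate method
}

def scaled_01_loss(y, y_m):
    """Scaled 0-1 loss e(y, y_m) = 1{y != y_m} / n (margin scaling,
    ref. [15] in the paper), where n is the number of players, i.e.
    the length of the observed strategy profile y_m."""
    n = len(y_m)
    return float(not np.array_equal(y, y_m)) / n

# Example: with n = 4 players, any profile differing from the
# observation incurs loss 1/4; an exact match incurs loss 0.
y_obs = np.array([1, 0, 1, 1])
assert scaled_01_loss(np.array([1, 0, 1, 0]), y_obs) == 0.25
assert scaled_01_loss(y_obs.copy(), y_obs) == 0.0
```

Scaling by 1/n keeps the loss term commensurate with per-player potential differences regardless of the number of players, which is the usual motivation for a scaled (rather than raw) 0-1 loss in margin-scaled objectives.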