Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Graphical Model Market Maker for Combinatorial Prediction Markets
Authors: Kathryn Blackmond Laskey, Wei Sun, Robin Hanson, Charles Twardy, Shou Matsumoto, Brandon Goldfedder
JAIR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the performance of three algorithms: the straightforward algorithm from the DAGGRE (Decomposition-Based Aggregation) prediction market for geopolitical events, the simple block-merge model from the SciCast market for science and technology forecasting, and a more sophisticated algorithm we developed for future markets. A numerical performance evaluation of the algorithms is presented in Section 5, followed by a concluding section. |
| Researcher Affiliation | Collaboration | Kathryn Blackmond Laskey EMAIL SEOR Department, George Mason University, Fairfax, VA 22030, USA; Wei Sun EMAIL Freddie Mac, McLean, VA, USA; Robin Hanson EMAIL Economics Department, George Mason University, Fairfax, VA 22030, USA; Charles Twardy EMAIL C4I and Cyber Center, George Mason University, Fairfax, VA 22030, USA; Shou Matsumoto EMAIL C4I and Cyber Center, George Mason University, Fairfax, VA 22030, USA; Brandon Goldfedder EMAIL Gold Brand Software, 1282 Mason Mill Court, Herndon, VA 20170, USA. |
| Pseudocode | Yes | Algorithm 4.1 (Update User's Asset Blocks after a Trade). Algorithm 4.2 (Calculate Cash using Global Separator). Algorithm 4.3 (Construct DAC Asset Junction Tree). Algorithm 4.4 (Min-propagation Protocol between two cliques). |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the methodology described, nor does it include a link to a code repository. It mentions that the methods were implemented in DAGGRE and SciCast, which were public markets, but not that their implementation code is public. |
| Open Datasets | No | The paper describes a simulation to generate data for its experiments: "Our simulation varied the depth of the tree from m = 4 to m = 6. We assumed a market of 30 users with 240 trades in each round, or an average of 8 trades per user. For each of 50 runs, the simulation proceeded as follows. 1. Generate a random depth m and an m-level Bayesian network to represent a tournament with 2^m teams." This indicates the dataset was generated for the purpose of the experiment, and no external public dataset is used or provided with access information. |
| Dataset Splits | No | The paper's experiments involve generating synthetic data through a simulation for each run, rather than using a static dataset with predefined training, validation, or test splits. The experimental procedure describes how data is generated and varied (e.g., 'random depth m', '240 trades in each round', '50 runs'), but not traditional dataset splits for a fixed dataset. |
| Hardware Specification | No | The paper discusses computation time and memory usage as performance metrics but does not specify any particular hardware (e.g., CPU, GPU models, memory amounts) used for running the experiments or simulations. |
| Software Dependencies | No | The paper describes algorithms and methods for prediction markets and graphical models, but it does not specify any particular software libraries, frameworks, or their version numbers that were used for implementation (e.g., Python, PyTorch, TensorFlow, specific solvers with versions). |
| Experiment Setup | Yes | Our simulation varied the depth of the tree from m = 4 to m = 6. We assumed a market of 30 users with 240 trades in each round, or an average of 8 trades per user. For each of 50 runs, the simulation proceeded as follows. 1. Generate a random depth m and an m-level Bayesian network to represent a tournament with 2^m teams. 2. For each user, select a random team Tu. Select all nodes containing Tu as the nodes u edits. 3. For each of the 240 trades, perform the following steps. (a) Select a user to make the trade. For the first 30 trades, cycle through the users, so that each user has at least one trade. For the rest of the trades, select a trader at random. (b) Generate a parent-child pair from the possible trades involving u's favorite team. Generate a random trade for user u to change the probability of a win of the parent node given a win of the child node. (c) Update the consensus probability distribution given the edit, recording the computation time. (d) Calculate u's new cash using each of the cash calculation methods, recording the computation time. (e) Calculate the expected score for all users using each of the expected score calculation methods, recording the computation time. 4. Record the storage required for the consensus probability distribution and all users' asset structures for PJT and each of the trade-based asset management methods. |
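The simulation harness quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration of the trade-generation loop only (steps 1-3a/3b); all function and variable names are hypothetical, and the Bayesian-network construction, consensus update, and cash/score calculations from the paper are left as placeholders.

```python
import random

def run_simulation(num_users=30, num_trades=240, seed=0):
    """Hedged sketch of one simulation run from the paper's setup.

    Generates a random tournament depth m in {4, 5, 6}, assigns each
    user a favorite team, and produces 240 trades: the first 30 cycle
    through the users so each trades at least once, the rest pick a
    trader uniformly at random.
    """
    rng = random.Random(seed)
    m = rng.choice([4, 5, 6])           # random tree depth, m = 4 to 6
    num_teams = 2 ** m                   # an m-level tournament has 2^m teams
    # Each user selects a random favorite team Tu; per the paper, user u
    # edits only the nodes containing Tu (network construction omitted here).
    favorites = [rng.randrange(num_teams) for _ in range(num_users)]
    trades = []
    for t in range(num_trades):
        # Step 3(a): cycle through users for the first 30 trades,
        # then select a trader at random.
        u = t if t < num_users else rng.randrange(num_users)
        # Step 3(b), placeholder: a random edit to P(parent wins | child wins)
        # for a parent-child pair involving u's favorite team.
        new_prob = rng.random()
        trades.append((u, favorites[u], new_prob))
        # Steps 3(c)-(e), omitted: update the consensus distribution and
        # recompute cash and expected scores, timing each method.
    return m, trades
```

Running this once yields the trade log for a single round; the paper repeats the procedure for 50 independent runs and records computation time and storage at each step.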