Certifying Strategyproof Auction Networks

Authors: Michael Curry, Ping-yeh Chiang, Tom Goldstein, John Dickerson

NeurIPS 2020

Each entry below pairs a reproducibility variable with its assessed result and the LLM response supporting that assessment.
Research Type: Experimental. "We experiment on two auction settings: 1 agent, 2 items, with valuations uniformly distributed on [0, 1] (the true optimal mechanism is derived analytically and presented by Manelli and Vincent [2006]); and 2 agents, 2 items, with valuations uniformly distributed on [0, 1], which is unsolved analytically but shown to be empirically learnable in Duetting et al. [2019]. For each of these settings, we train 3 networks: ... Our results for regret, revenue and solve time are summarized in Table 1."
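The regret metric referenced above can be illustrated with a hedged sketch. Everything here is a toy stand-in: the paper evaluates trained auction networks and certifies exact regret bounds with an integer program, whereas this sketch uses a deliberately non-strategyproof "pay-your-report" rule and a grid search, just to show what regret measures.

```python
def utility(v_true, report, price=0.5):
    """Toy 'pay-your-report' rule for one item: the agent wins iff its
    report clears the posted price, and then pays its own report."""
    return v_true - report if report >= price else 0.0

def empirical_regret(v_true, grid_size=1000):
    """Regret = best achievable utility over misreports minus truthful
    utility, approximated here on a grid of candidate reports."""
    truthful = utility(v_true, v_true)
    best = max(utility(v_true, g / grid_size) for g in range(grid_size + 1))
    return best - truthful

# An agent with value 0.9 gains by shading its report down to the price 0.5:
# truthful utility is 0.0, best misreport utility is 0.4, so regret is 0.4.
print(round(empirical_regret(0.9), 3))  # prints 0.4
```

A strategyproof mechanism would have zero regret for every valuation; the networks in the paper are trained to drive this quantity toward zero, and the certification step bounds it exactly.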
Researcher Affiliation: Academia. Michael J. Curry (curry@cs.umd.edu), Ping-Yeh Chiang (pchiang@cs.umd.edu), Tom Goldstein (tomg@cs.umd.edu), and John P. Dickerson (john@cs.umd.edu), all of the Computer Science Department, University of Maryland, College Park, MD 20742.
Pseudocode: No. The paper describes its methods through prose and equations but includes no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured steps in a code-like format.
Open Source Code: No. The paper states 'All training code is implemented using the PyTorch framework Paszke et al. [2019]' but provides no link to its own implementation, nor does it explicitly state that the code for the described methodology is publicly available.
Open Datasets: No. The paper states 'We generate 600,000 valuation profiles as training set and 3,000 valuation profiles as the testing set' and describes valuations as 'uniformly distributed on [0, 1]'; the data is synthetically generated, and no link, DOI, or citation is provided for public access to the specific dataset used.
Dataset Splits: Yes. 'We generate 600,000 valuation profiles as training set and 3,000 valuation profiles as the testing set.'
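The quoted split is straightforward to reproduce in outline. The sketch below assumes the 2-agent, 2-item setting with i.i.d. uniform [0, 1] valuations; the function name, seeds, and variable names are illustrative, since the paper does not release its generation code.

```python
import random

def generate_profiles(n_profiles, n_agents=2, n_items=2, seed=0):
    """Sample valuation profiles with each agent-item valuation drawn
    i.i.d. uniform on [0, 1], matching the paper's data description."""
    rng = random.Random(seed)
    return [
        [[rng.random() for _ in range(n_items)] for _ in range(n_agents)]
        for _ in range(n_profiles)
    ]

# 600,000 training profiles and 3,000 test profiles, as in the paper.
train_set = generate_profiles(600_000)
test_set = generate_profiles(3_000, seed=1)
```

Because the data is synthetic, fixing the seeds (as done here) is what would make such a split reproducible; the paper itself does not report seeds.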
Hardware Specification: No. The paper does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies: No. The paper mentions 'All training code is implemented using the PyTorch framework Paszke et al. [2019]' and refers to a 'Gurobi-based Gurobi Optimization, LLC [2020] integer program formulation', but provides no version numbers for PyTorch, Gurobi, or any other software dependency.
Experiment Setup: Yes. 'We use a batch size of 20,000 for training, and we train the network for a total of 1000 epochs. At train time, we generate misreports through 25 steps of gradient ascent on the truthful valuation profiles with learning rate of .02; at test time, we use 1000 steps. ... detailed architectures, along with hyperparameters of the augmented Lagrangian, are reported in Appendix ??.'
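The misreport search described in the quote can be sketched schematically. The paper runs gradient ascent on the trained network's utility via automatic differentiation; the toy below substitutes a smooth stand-in utility with a hand-coded gradient, keeping the quoted hyperparameters (25 steps, learning rate 0.02; the paper uses 1,000 steps at test time) and clipping reports to the valuation box [0, 1]. All names here are illustrative.

```python
def ascend_misreport(grad_utility, v_true, steps=25, lr=0.02):
    """Gradient-ascent misreport search: starting from the truthful
    valuation, repeatedly step in the direction that increases the
    (stand-in) utility, clipping each report coordinate to [0, 1]."""
    report = list(v_true)
    for _ in range(steps):
        g = grad_utility(report)
        report = [min(1.0, max(0.0, r + lr * gi)) for r, gi in zip(report, g)]
    return report

# Toy smooth utility u(r) = -sum_j (r_j - 0.5)^2, whose gradient is
# -2 * (r - 0.5): ascent should pull every report coordinate toward 0.5.
grad = lambda r: [-2.0 * (rj - 0.5) for rj in r]
misreport = ascend_misreport(grad, [0.9, 0.1])
```

In the paper's setting the same loop would be driven by backpropagation through the auction network, and the resulting misreports feed the regret term of the augmented Lagrangian objective.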