Auction Learning as a Two-Player Game
Authors: Jad Rahme, Samy Jelassi, S. Matthew Weinberg
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. |
| Researcher Affiliation | Academia | Jad Rahme, Samy Jelassi, S. Matthew Weinberg; Princeton University, Princeton, NJ 08540, USA; {jrahme, sjelassi, smweinberg}@princeton.edu |
| Pseudocode | Yes | Algorithm 1 ALGnet training |
| Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | No | The paper describes the distributions from which data is sampled (e.g., 'i.i.d. U[0, 1]', 'v1 ∼ U[4, 16] and v2 ∼ U[4, 7]', or specific density functions) and references previous works where these settings are studied. However, it does not provide concrete access information (such as a URL, DOI, repository link, or formal citation including author and year) for a publicly available or open dataset. |
| Dataset Splits | No | The paper mentions using a 'test set of 10,000 valuation profiles' and training on 'batches of valuation profiles of size B ∈ {500}' (a minimal sampling sketch follows the table), but it does not explicitly specify a separate validation split or its size. |
| Hardware Specification | No | The paper states 'all our experiments can be run on Google's Colab platform (with GPU)', which indicates the platform and processor type (GPU) but does not provide specific hardware details such as the exact GPU model, CPU, or memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the 'AdamW optimizer' but does not provide specific version numbers for these or any other software dependencies, which would be necessary for reproduction. |
| Experiment Setup | Yes | In Alg. 1, we used batches of valuation profiles of size B ∈ {500} and set T ∈ {160000, 240000}, T_limit ∈ {40000, 60000}, T_init ∈ {800, 1600}, and τ ∈ {100}. We used the AdamW optimizer (Loshchilov & Hutter, 2017) to train the Auctioneer's and the Misreporter's networks with learning rate γ ∈ {0.0005, 0.001}. Typical values for the architecture's parameters are n_a = n_p = n_m ∈ [3, 7] and h_p = h_n = h_m ∈ {50, 100, 200}. (A hedged training-setup sketch follows the table.) |
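Because the paper samples its data rather than using a fixed dataset, the Open Datasets and Dataset Splits rows can be made concrete with a few lines of code. Below is a minimal sampling sketch, assuming a PyTorch setup; the bidder and item counts are hypothetical placeholders (the paper studies several settings), while the batch size B = 500 and the 10,000-profile test set come from the quotes above.

```python
import torch

def sample_valuations(n_profiles: int, n_bidders: int, n_items: int) -> torch.Tensor:
    """Draw valuation profiles with every entry i.i.d. U[0, 1]."""
    return torch.rand(n_profiles, n_bidders, n_items)

# Hypothetical bidder/item counts chosen for illustration only.
train_batch = sample_valuations(500, 2, 3)       # one training batch, B = 500
test_set = sample_valuations(10_000, 2, 3)       # held-out test set of 10,000 profiles
```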
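The Experiment Setup row pins down the optimizer and learning-rate grid but not the exact network code. The following is a minimal sketch of the reported optimizer configuration, with placeholder feed-forward networks standing in for the Auctioneer's and Misreporter's networks; their real architectures are given only at the hyperparameter level (depths n_a = n_p = n_m, hidden widths h_p = h_n = h_m), so the layer sizes here are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the Auctioneer and Misreporter networks;
# input/output dimensions and depth are illustrative, not the paper's.
hidden = 100  # from the reported width grid {50, 100, 200}
auctioneer = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, 6))
misreporter = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, 6))

# AdamW (Loshchilov & Hutter, 2017) with a learning rate from the
# reported grid gamma ∈ {0.0005, 0.001}.
opt_auctioneer = torch.optim.AdamW(auctioneer.parameters(), lr=0.001)
opt_misreporter = torch.optim.AdamW(misreporter.parameters(), lr=0.001)
```

Using two independent AdamW instances mirrors the paper's two-player framing: the Auctioneer and Misreporter are trained as adversaries, so each network keeps its own optimizer state.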