A Permutation-Equivariant Neural Network Architecture For Auction Design

Authors: Jad Rahme, Samy Jelassi, Joan Bruna, S. Matthew Weinberg

AAAI 2021, pp. 5664-5672

Reproducibility assessment (each entry lists the variable, the result, and the LLM response):
Research Type: Experimental. "Section 4 presents numerical evidence for the effectiveness of our approach. Figure 1 (a)-(b) presents the distribution of h^R of the optimal auction learned for setting (I) when varying the number of samples L."
Researcher Affiliation: Academia. Jad Rahme (1), Samy Jelassi (1, 2), Joan Bruna (2, 3), S. Matthew Weinberg (1); (1) Princeton University, USA; (2) Courant Institute of Mathematical Sciences, New York University, USA; (3) Center for Data Science, New York University, USA.
Pseudocode: No. The paper describes the architecture and optimization procedure textually, but does not include any explicit pseudocode or algorithm blocks.
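
Although no pseudocode is given, the paper's textual description centers on layers that are equivariant to permutations of both bidders and items. The following is a minimal PyTorch sketch of one such exchangeable-matrix layer (in the spirit of Hartford et al.'s exchangeable layers); the class name, channel layout, and initialization are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ExchangeableMatrixLayer(nn.Module):
    """Linear layer equivariant to row (bidder) and column (item) permutations.

    Mixes the input with its row means, column means, and global mean, so the
    output transforms consistently when bidders or items are reordered.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # One weight matrix each for: identity, row-pooled, column-pooled,
        # and globally pooled terms.
        self.w = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(in_channels, out_channels))
             for _ in range(4)]
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bidders, m_items, in_channels)
        row_mean = x.mean(dim=1, keepdim=True).expand_as(x)        # pool over bidders
        col_mean = x.mean(dim=2, keepdim=True).expand_as(x)        # pool over items
        all_mean = x.mean(dim=(1, 2), keepdim=True).expand_as(x)   # pool over both
        out = (x @ self.w[0] + row_mean @ self.w[1]
               + col_mean @ self.w[2] + all_mean @ self.w[3])
        return out + self.bias
```
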
Open Source Code: No. The paper does not provide any statement or link indicating that the source code for its methodology is open-source or publicly available.
Open Datasets: No. The paper describes drawing item values from uniform distributions (e.g., 'Item values are drawn from U[0, 1]') and sampling valuation profiles, indicating synthetic data generation rather than the use of a pre-existing, publicly available dataset with concrete access information.
Dataset Splits: No. The paper mentions a 'training set' and a 'testing set' (e.g., 'Figure 3(a)-(b) presents a plot of revenue and regret as a function of training epochs for the setting (I). We can see that the revenue converges to the theoretical optimum while the regret converges to zero on both the training and testing set.'), but does not explicitly detail a validation split or the split proportions.
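
Because the data are synthetic, the input pipeline can be approximated by sampling valuation profiles directly. The sketch below draws item values i.i.d. from U[0, 1] and carves off a held-out test set; the profile counts, numbers of bidders and items, and the function name are illustrative assumptions (consistent with the entries above, no validation split is created).

```python
import torch

def sample_valuation_profiles(num_profiles: int, n_bidders: int,
                              m_items: int, seed: int = 0) -> torch.Tensor:
    """Draw synthetic valuation profiles with item values i.i.d. from U[0, 1]."""
    gen = torch.Generator().manual_seed(seed)
    return torch.rand(num_profiles, n_bidders, m_items, generator=gen)

# Illustrative sizes; the paper reports train/test curves but no validation set.
profiles = sample_valuation_profiles(640_000, n_bidders=2, m_items=2)
train_profiles, test_profiles = profiles[:600_000], profiles[600_000:]
```
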
Hardware Specification: No. The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies: No. The paper discusses neural network architectures but does not provide specific software dependencies or their version numbers (e.g., PyTorch or TensorFlow versions).
Experiment Setup: Yes. 'We evaluate these terms by running gradient ascent on v_i with a step size of 0.001 for {300, 500} iterations (we test {100, 300} different random initial v_i and report the one that achieves the largest regret).'
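
The quoted setup estimates each bidder's regret by gradient ascent over misreports with random restarts. Below is a hedged PyTorch sketch of that loop; `utility_fn` (a callable mapping a misreport to the bidder's utility under the learned auction), the restart count, and the clamping to the [0, 1] value support are assumptions, since the quoted text does not spell out this interface.

```python
import torch

def estimate_regret(utility_fn, v_true: torch.Tensor,
                    num_inits: int = 100, steps: int = 300,
                    lr: float = 1e-3) -> torch.Tensor:
    """Approximate a bidder's regret: the best utility gain from misreporting,
    found by gradient ascent with several random restarts (step size 0.001,
    300 iterations, per the quoted setup)."""
    truthful_utility = utility_fn(v_true).detach()
    best_regret = torch.zeros(())
    for _ in range(num_inits):
        # Random initial misreport, drawn from the same U[0, 1] support.
        misreport = torch.rand_like(v_true).requires_grad_(True)
        optimizer = torch.optim.SGD([misreport], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            (-utility_fn(misreport)).backward()  # ascend on utility
            optimizer.step()
            with torch.no_grad():
                misreport.clamp_(0.0, 1.0)       # stay inside the value support
        regret = (utility_fn(misreport) - truthful_utility).detach()
        best_regret = torch.maximum(best_regret, regret)
    return best_regret
```
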