A Scalable Neural Network for DSIC Affine Maximizer Auction Design

Authors: Zhijian Duan, Haoran Sun, Yurong Chen, Xiaotie Deng

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to demonstrate that AMenuNet outperforms strong baselines in both contextual and non-contextual multi-item auctions, scales well to larger auctions, generalizes well to different settings, and identifies useful deterministic allocations.
Researcher Affiliation | Academia | Zhijian Duan (CFCS, School of Computer Science, Peking University, zjduan@pku.edu.cn); Haoran Sun (Peking University, sunhaoran0301@stu.pku.edu.cn); Yurong Chen (CFCS, School of Computer Science, Peking University, chenyurong@pku.edu.cn); Xiaotie Deng (CFCS, School of Computer Science & CMAR, Institute for AI, Peking University, xiaotie@pku.edu.cn)
Pseudocode | No | The paper describes the architecture and steps of AMenuNet but does not provide formal pseudocode or an algorithm block.
Open Source Code | Yes | Our implementation is available at https://github.com/Haoran0301/AMenuNet
Open Datasets | No | We generate each bidder representation x_i ∈ R^10 and item representation y_j ∈ R^10 independently from a uniform distribution in [-1, 1]^10 (i.e., U[-1, 1]^10). The valuation v_ij is sampled from U[0, Sigmoid(x_i^T y_j)].
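
The contextual valuation model quoted above can be reproduced directly from its description. Below is a minimal sketch using NumPy; the function name sample_contextual_valuations is hypothetical and not taken from the paper's released code:

```python
import numpy as np

def sample_contextual_valuations(n_bidders, m_items, dim=10, rng=None):
    """Illustrative sketch of the contextual valuation sampling described above."""
    rng = np.random.default_rng() if rng is None else rng
    # Bidder and item representations drawn i.i.d. from U[-1, 1]^10.
    x = rng.uniform(-1.0, 1.0, size=(n_bidders, dim))
    y = rng.uniform(-1.0, 1.0, size=(m_items, dim))
    # Upper bound of each valuation is Sigmoid(x_i^T y_j).
    logits = x @ y.T                        # shape (n_bidders, m_items)
    upper = 1.0 / (1.0 + np.exp(-logits))   # elementwise sigmoid
    # v_ij ~ U[0, Sigmoid(x_i^T y_j)].
    v = rng.uniform(0.0, 1.0, size=(n_bidders, m_items)) * upper
    return x, y, v

# Example: one auction instance with 3 bidders and 5 items.
x, y, v = sample_contextual_valuations(n_bidders=3, m_items=5)
```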
Dataset Splits | No | The paper mentions training samples and evaluation samples but does not explicitly describe a separate validation set or split for hyperparameter tuning.
Hardware Specification | No | All experiments are run on a Linux machine with NVIDIA Graphics Processing Unit (GPU) cores.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or solvers used in the experiments.
Experiment Setup | Yes | We train the models for a maximum of 8000 iterations, with 32768 generated samples per iteration. The batch size is 2048, and we evaluate all models on 100000 samples. We set the softmax temperature as 500 and the learning rate as 3 × 10^-4. We tune the menu size in {32, 64, 128, 256, 512, 1024}. For the boost layer, we use a two-layer fully connected neural network with ReLU activation. The menu size and τ vary across settings, and we present these numbers in Table 3.
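
For readers reconstructing this setup, the quoted hyperparameters can be collected in one place. The following is a minimal sketch assuming PyTorch; the names config and BoostLayer, the hidden width of 64, and the choice of optimizer are illustrative assumptions rather than details confirmed by the paper:

```python
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row above.
config = dict(
    max_iterations=8000,          # maximum training iterations
    samples_per_iteration=32768,  # generated samples per iteration
    batch_size=2048,
    eval_samples=100_000,
    softmax_temperature=500.0,    # τ; varies across settings
    learning_rate=3e-4,
    menu_size_grid=[32, 64, 128, 256, 512, 1024],  # tuned per setting
)

class BoostLayer(nn.Module):
    """Two-layer fully connected network with ReLU activation, as described
    for the boost layer. The hidden width (64) is an assumption."""
    def __init__(self, in_dim, hidden_dim=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Usage (illustrative): an optimizer such as
#   torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
# would match the quoted learning rate, though the paper does not name the optimizer here.
```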