Data Market Design through Deep Learning

Authors: Sai Srivatsa Ravindranath, Yanchen Jiang, David C. Parkes

NeurIPS 2023

Reproducibility assessment. Each variable is listed with its result and the supporting LLM response.

Research Type: Experimental
LLM Response: "Our experiments demonstrate that this new deep learning framework can almost precisely replicate all known solutions from theory, expand to more complex settings, and be used to establish the optimality of new designs for data markets and make conjectures in regard to the structure of optimal designs." [...] "5 Experimental Results for the Single Buyer Setting" [...] "6 Experimental Results for the Multi-Buyer Setting"

Researcher Affiliation: Academia
LLM Response: "Sai Srivatsa Ravindranath, Yanchen Jiang, David C. Parkes; Harvard John A. Paulson School of Engineering and Applied Sciences; {saisr, yanchen_jiang, parkes}@g.harvard.edu"

Pseudocode: No
LLM Response: The paper includes figures illustrating neural network architectures (e.g., Figure 4 and Figure 5) but does not provide any explicitly labeled pseudocode or algorithm blocks.

Open Source Code: Yes
LLM Response: "The source code for all experiments is available from Github at https://github.com/saisrivatsan/deep-data-markets/"

Open Datasets: Yes
LLM Response: "We specifically consider the following settings with binary states and binary actions with payoffs v = 1 and interim beliefs drawn from: A. a unit interval, i.e., θ ∼ U[0, 1]. B. an equal-weight mixture of Beta(8, 30) and Beta(60, 30). [...] The optimal menus for each of these settings are given by [7]."

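Because the inputs are drawn from closed-form distributions rather than a fixed dataset, the training data can be regenerated directly from the quoted specification. Below is a minimal NumPy sketch of the two single-buyer belief distributions; the function name, seed, and batch size are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (illustrative)

def sample_beliefs(setting: str, n: int) -> np.ndarray:
    """Draw n interim beliefs theta for the quoted single-buyer settings."""
    if setting == "A":
        # Setting A: theta ~ U[0, 1]
        return rng.uniform(0.0, 1.0, size=n)
    if setting == "B":
        # Setting B: equal-weight mixture of Beta(8, 30) and Beta(60, 30)
        use_first = rng.random(n) < 0.5
        return np.where(use_first,
                        rng.beta(8, 30, size=n),
                        rng.beta(60, 30, size=n))
    raise ValueError(f"unknown setting: {setting}")

minibatch = sample_beliefs("B", 2**15)  # one online minibatch, matching the quoted size
```
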
Dataset Splits: No
LLM Response: The paper mentions training data ("minibatch", "ℓ i.i.d. samples S") and test data ("separate test set") but does not specify a validation split, nor does it give explicit percentages, counts, or a splitting methodology beyond the phrase "separate test set".

Hardware Specification: Yes
LLM Response: "All our experiments were run on a single NVIDIA Tesla V100 GPU."

Software Dependencies: No
LLM Response: The paper names software components such as the Adam optimizer and Leaky ReLU activation functions but does not provide version numbers for any libraries or dependencies.

Experiment Setup: Yes
LLM Response: "We set the softmax temperature τ to 1/200. We train RochetNet for 20,000 iterations with a minibatch of size 2^15 sampled online for every update. [...] For the multi-buyer setting, all our neural networks consist of 3 hidden layers with 200 hidden units each. [...] We train the neural networks for 20,000 iterations and make parameter updates using the Adam Optimizer with a learning rate of 0.001."

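The quoted hyperparameters map directly onto a standard training configuration. A minimal PyTorch sketch under those values follows; the input/output dimensions, the objective, and the softmax convention are placeholders, not the paper's actual design:

```python
import torch
import torch.nn as nn

# Multi-buyer architecture as quoted: 3 hidden layers of 200 units each,
# with Leaky ReLU activations. Input/output sizes here are placeholders.
net = nn.Sequential(
    nn.Linear(2, 200), nn.LeakyReLU(),
    nn.Linear(200, 200), nn.LeakyReLU(),
    nn.Linear(200, 200), nn.LeakyReLU(),
    nn.Linear(200, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=0.001)  # Adam with lr 0.001, as quoted

# tau = 1/200 is quoted for RochetNet (single-buyer); shown here only to
# illustrate one common temperature convention, softmax(u / tau).
tau = 1.0 / 200.0

for step in range(20_000):            # 20,000 iterations, as quoted
    batch = torch.rand(2**15, 2)      # minibatch of size 2^15, sampled online
    scores = net(batch)
    probs = torch.softmax(scores / tau, dim=-1)  # soft selection over options
    loss = -(probs * scores).sum(dim=-1).mean()  # placeholder objective, not the paper's loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```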