Robust Allocations with Diversity Constraints
Authors: Zeyu Shen, Lodewijk Gelauff, Ashish Goel, Aleksandra Korolova, Kamesh Munagala
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We finally perform an empirical simulation on real-world data that models ad allocations to show that this gap between Nash Welfare and other rules persists in the wild. |
| Researcher Affiliation | Academia | Zeyu Shen, Duke University, Durham NC 27708-0129, zeyu.shen@duke.edu; Lodewijk Gelauff and Ashish Goel, Management Science and Engineering, Stanford University, Stanford CA 94305, {lodewijk,ashishg}@stanford.edu; Aleksandra Korolova, Department of Computer Science, University of Southern California, korolova@usc.edu; Kamesh Munagala, Department of Computer Science, Duke University, Durham NC 27708-0129, kamesh@cs.duke.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is uploaded as part of the submission. |
| Open Datasets | Yes | We use two datasets. The UCI Adult dataset [1] tabulates census information... The Yahoo A3 dataset [2] contains bid information... |
| Dataset Splits | No | The paper answers "N/A" to the checklist question "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?", indicating that dataset splits are not provided. |
| Hardware Specification | No | The paper answers "N/A" to the checklist question "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", indicating that hardware specifications are not provided. |
| Software Dependencies | No | The paper mentions that code is uploaded but does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper mentions some parameters (e.g., the budget cap T for budget-capped valuations, and repeating simulations 10 times) but does not provide a comprehensive experimental setup, such as hyperparameters (learning rate, batch size, epochs) or optimizer settings. |