Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Fair and Efficient Allocations with Limited Demands
Authors: Sushirdeep Narayana, Ian A. Kash
AAAI 2021, pp. 5620-5627 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation Results: We simulate the LCP and DRF-W mechanisms on randomly generated problem instances in order to better understand the trade-off between fairness and efficiency. The simulation varies the number of agents from 2 to 5. For each number of agents, 2000 examples were generated. The number of resources for each example was chosen uniformly at random between 1 and 10. |
| Researcher Affiliation | Academia | Sushirdeep Narayana, Ian A. Kash Department of Computer Science, University of Illinois at Chicago, USA EMAIL, EMAIL |
| Pseudocode | No | The paper describes the mechanisms (DRF-W, LCP) and their properties mathematically and through examples, but it does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code for the described methodology, nor does it explicitly state that code will be made available. |
| Open Datasets | No | The paper states: "The simulation varies the number of agents from 2 to 5. For each number of agents, 2000 examples were generated. The number of resources for each example was chosen uniformly at random between 1 to 10. The demand vector of an agent was generated using a uniform distribution on (0.0, 1.0]. The demand vector was then normalized for each agent. The amount of work ki required for agent i to complete was chosen uniformly at random from (0.0, 100.0]." This indicates randomly generated data rather than a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes how problem instances were randomly generated for simulations but does not specify any training, validation, or test dataset splits in the typical machine learning sense, nor does it mention cross-validation. |
| Hardware Specification | No | The paper does not mention any specific hardware specifications (e.g., GPU/CPU models, memory) used for running the simulations. |
| Software Dependencies | No | The paper does not provide specific details about software dependencies or their version numbers used for the simulations (e.g., programming languages, libraries, frameworks, or solvers with versions). |
| Experiment Setup | No | The paper describes how the simulation instances were generated (e.g., number of agents, resources, demand distribution, work amount) but does not provide experimental setup details such as model hyperparameters or specific system-level training settings. |
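The instance-generation procedure quoted in the table (under Open Datasets) is specific enough to sketch in code. The following is an illustrative reconstruction only; the paper releases no code, and the function name and structure are assumptions:

```python
import random

def generate_instance(num_agents, rng):
    """Generate one random problem instance following the paper's
    simulation description (illustrative reconstruction)."""
    # Number of resources chosen uniformly at random from 1 to 10.
    num_resources = rng.randint(1, 10)
    demands = []
    for _ in range(num_agents):
        # Each component drawn uniformly on (0.0, 1.0]:
        # 1 - random() maps [0.0, 1.0) to (0.0, 1.0].
        vec = [1.0 - rng.random() for _ in range(num_resources)]
        total = sum(vec)
        # Demand vector normalized per agent.
        demands.append([d / total for d in vec])
    # Amount of work k_i per agent, uniform on (0.0, 100.0].
    work = [(1.0 - rng.random()) * 100.0 for _ in range(num_agents)]
    return demands, work

rng = random.Random(0)
# 2000 examples for each number of agents from 2 to 5, as in the paper.
instances = {n: [generate_instance(n, rng) for _ in range(2000)]
             for n in range(2, 6)}
```

Publishing even a short script like this alongside the paper would have resolved the Open Source Code and Open Datasets items above.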