Fair and Efficient Social Choice in Dynamic Settings
Authors: Rupert Freeman, Seyed Majid Zahedi, Vincent Conitzer
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study a dynamic social choice problem in which an alternative is chosen at each round according to the reported valuations of a set of agents. In the interests of obtaining a solution that is both efficient and fair, we aim to maximize the long-term Nash welfare, which is the product of all agents' utilities. We present and analyze two greedy algorithms for this problem, including the classic Proportional Fair (PF) algorithm. We analyze several versions of the algorithms and how they relate, and provide an axiomatization of PF. Finally, we evaluate the algorithms on data gathered from a computer systems application. |
| Researcher Affiliation | Academia | Rupert Freeman, Seyed Majid Zahedi, and Vincent Conitzer, Department of Computer Science, Duke University, Durham, NC 27708, USA. {rupert,zahedi,conitzer}@cs.duke.edu |
| Pseudocode | Yes | The paper includes 'Algorithm 1 GREEDY (select one alternative)', 'Algorithm 2 GREEDY (select all alternatives)', 'Algorithm 3 PROPORTIONALFAIR (select one alternative)', and 'Algorithm 4 PROPORTIONALFAIR (select all alternatives)' as clearly labeled algorithm blocks. |
| Open Source Code | No | The paper states, 'The details can be found in the full version of the paper, which appears on the authors' websites.', but this is not a concrete access statement for source code of the described methodology. No specific repository link or explicit code-release statement is provided. |
| Open Datasets | Yes | We obtain our instance from Apache Spark [Zaharia et al., 2010] benchmarks. Table 1 lists the twelve Spark applications in our instance, each of which is defined by a fixed number of tasks. |
| Dataset Splits | No | The paper describes how smaller instances are generated ('For each value of T in Figure 3, we randomly generate a starting round t ∈ [1, 497 − T] and consider the T rounds starting at t, for 100 random choices of t.'), but it does not specify explicit training, validation, or test dataset splits in the typical machine-learning sense. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Apache Spark' as the source of benchmarks, but it does not provide specific version numbers for Spark or any other ancillary software dependencies used in the experiments. |
| Experiment Setup | No | The paper describes the problem setting and the data characteristics (e.g., 'In our instance, there are two power boosts to be allocated. So at each round there are 12 choose 2 alternatives, one for each pair of applications.'), but it does not provide specific experimental setup details such as hyperparameters, training configurations, or system-level settings for model training. |
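The table above names the paper's two greedy selection rules (Algorithms 1 and 3): a greedy Nash-welfare rule that picks the alternative maximizing the product of agents' cumulative utilities, and the classic Proportional Fair rule that picks the alternative maximizing the sum of each agent's marginal utility relative to her utility so far. The following is a minimal sketch of those two per-round rules, not the paper's released code; the function and variable names are our own, and we assume strictly positive cumulative utilities so the PF ratio is well defined.

```python
def greedy_select(valuations, cumulative):
    """Greedy Nash-welfare rule: pick the alternative that maximizes the
    product of agents' cumulative utilities after this round.

    valuations[i][a] is agent i's reported value for alternative a this round;
    cumulative[i] is agent i's utility accumulated over previous rounds.
    """
    def nash_product(a):
        prod = 1.0
        for i, agent_vals in enumerate(valuations):
            prod *= cumulative[i] + agent_vals[a]
        return prod

    alternatives = range(len(valuations[0]))
    return max(alternatives, key=nash_product)


def proportional_fair_select(valuations, cumulative):
    """Classic Proportional Fair rule: pick the alternative that maximizes
    the sum of each agent's marginal utility divided by her utility so far.
    Assumes cumulative[i] > 0 for all agents.
    """
    def pf_score(a):
        return sum(agent_vals[a] / cumulative[i]
                   for i, agent_vals in enumerate(valuations))

    alternatives = range(len(valuations[0]))
    return max(alternatives, key=pf_score)
```

In the paper's Spark instance each "alternative" is a pair of applications receiving a power boost, so the alternatives here would enumerate those pairs; this sketch abstracts that away and just indexes alternatives 0..m−1.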