Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Iterative Local Voting for Collective Decision-making in Continuous Spaces

Authors: Nikhil Garg, Vijay Kamble, Ashish Goel, David Marn, Kamesh Munagala

JAIR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We then describe an experiment in which we test our algorithm for the decision of the U.S. Federal Budget on Mechanical Turk with over 2,000 workers, employing neighborhoods defined by various L-Norm balls. We make several observations that inform future implementations of such a procedure.
Researcher Affiliation | Academia | Nikhil Garg EMAIL Stanford University Department of Electrical Engineering 475 Via Ortega, Stanford, CA 94035 USA; Vijay Kamble EMAIL University of Illinois at Chicago College of Business Administration 601 S Morgan St, Chicago, IL 60607 USA; Ashish Goel EMAIL Stanford University Department of Management Science & Engineering 475 Via Ortega, Stanford, CA 94035 USA; David Marn EMAIL University of California, Berkeley Department of Electrical Engineering & Computer Science 2626 Hearst Ave, Berkeley, CA 94720 USA; Kamesh Munagala EMAIL Duke University, Department of Computer Science D205, Levine Science Research Center, Research Drive, Durham, NC 27708
Pseudocode | Yes | Algorithm 1: Iterative Local Voting (ILV). Inputs: initial solution x0 ∈ X, tolerance ϵ > 0, an integer N, initial radius r0 > 0, termination time T, norm q for the local neighborhood. Output: solution x.
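The inputs listed above suggest the following loop structure. This is a sketch only, not the authors' implementation: the `query_voter` callback is hypothetical (standing in for a Mechanical Turk worker who returns a preferred point inside the current L-q ball), and the simple `r0 / t` radius schedule is a placeholder assumption.

```python
import numpy as np

def ilv(x0, query_voter, r0, N, T, eps=1e-3, q=2):
    """Sketch of Iterative Local Voting (ILV).

    query_voter(x, r, q) is a hypothetical callback returning one
    voter's preferred point inside the L-q ball of radius r around x.
    """
    x = np.asarray(x0, dtype=float)
    r = r0
    for t in range(1, T + 1):
        # Collect N local votes in the current neighborhood.
        votes = np.array([query_voter(x, r, q) for _ in range(N)])
        x_new = votes.mean(axis=0)          # move to the average vote
        if np.linalg.norm(x_new - x, ord=q) < eps:
            return x_new                    # converged within tolerance
        x, r = x_new, r0 / t                # shrink the radius over time
    return x
```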
Open Source Code | No | The paper mentions a live demo accessible at http://gargnikhil.com/projectdetails/IterativeLocalVoting/. However, it does not explicitly state that source code for the methodology is provided for download or in a repository; it only mentions planning to post the data and feedback.
Open Datasets | Yes | We asked voters to vote on the U.S. Federal Budget across several of its major categories: National Defense; Healthcare; Transportation, Science, & Education; and Individual Income Tax... The 2016 budget estimate was obtained from http://federal-budget.insidegov.com/l/119/2016-Estimate and http://atlas.newamerica.org/education-federal-budget
Dataset Splits | No | The paper describes how participants were assigned to different conditions within the Mechanical Turk experiment: "each of the constrained mechanisms had three copies, given to three separate groups of people. Each group consisted of two sets with different starting points, with each worker being asked to vote in each set in her assigned group." This is an experimental design for participant assignment, not a dataset split for training/testing purposes in the traditional sense.
Hardware Specification | No | The paper states that experiments were conducted "on Amazon Mechanical Turk (https://www.mturk.com)". This indicates the platform used for human intelligence tasks but does not provide specific hardware details (e.g., GPU/CPU models, memory) used by the authors for computational tasks related to their algorithm or analysis.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers) used for implementing or analyzing the described methodology.
Experiment Setup | Yes | To update the current point, we waited for 10 submissions and then updated the point to their average. This averaging explains the step-like structure in the convergence plots in the next section. The radius was decreased approximately every 60 submissions: r_t = r0 / ⌈t/60⌉.
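The quoted schedule (radius dropping after roughly every 60 submissions) can be written as a one-line helper. A minimal sketch, assuming the ceiling reading of the formula; the function name `radius` is illustrative, not from the paper.

```python
import math

def radius(t, r0, batch=60):
    """Radius at submission t under the schedule r_t = r0 / ceil(t / 60).

    The radius stays at r0 for the first 60 submissions, halves for the
    next 60, and so on.
    """
    return r0 / math.ceil(t / batch)
```

For example, with r0 = 4 the radius is 4 through submission 60, 2 through submission 120, and 4/3 thereafter until submission 180.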