On the Impossibility of Convex Inference in Human Computation

Authors: Nihar Shah, Dengyong Zhou

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We take an axiomatic approach by formulating a set of axioms that impose two mild and natural assumptions on the objective function for the inference. Under these axioms, we show that it is unfortunately impossible to ensure convexity of the inference problem. On the other hand, we show that interestingly, in the absence of a requirement to model spammers, one can construct reasonable objective functions for crowdsourcing that guarantee convex inference. (An illustrative sketch of this non-convexity appears after the table.)
Researcher Affiliation | Collaboration | Nihar B. Shah, U.C. Berkeley (nihar@eecs.berkeley.edu); Dengyong Zhou, Microsoft Research (dengyong.zhou@microsoft.com)
Pseudocode | No | The paper presents mathematical models and proofs but includes no explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper is theoretical and does not describe a new method for which source code would be released; no statement about open-source code availability was found.
Open Datasets | No | The paper studies mathematical properties of objective functions rather than empirical training data; no public datasets or access information are mentioned.
Dataset Splits | No | The paper reports no empirical experiments, so no training/validation/test splits are described.
Hardware Specification | No | The paper describes no computational experiments requiring specific hardware, so no hardware specifications are provided.
Software Dependencies | No | The paper describes no computational experiments, so no software dependencies or version numbers are listed.
Experiment Setup | No | The paper consists of theoretical derivations and proofs and provides no experimental setup, hyperparameters, or training configurations.
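
The non-convexity summarized above arises from the coupling between worker-reliability parameters and item-label estimates. The snippet below is a minimal illustrative sketch, not the paper's construction or axioms: it assumes a generic weight-times-squared-error term w * (x - y)^2 (the function name `term` and the quadratic loss are hypothetical choices for illustration) and checks the midpoint convexity inequality numerically to show that such a coupled term is not jointly convex in (w, x).

```python
# Illustrative sketch (assumed form, not taken from the paper): one summand of a
# crowdsourcing-style objective that multiplies a worker-reliability weight w
# by a per-item loss on the label estimate x against the observed response y.

def term(w, x, y=0.0):
    """Weight-times-squared-error term: w * (x - y)**2."""
    return w * (x - y) ** 2

# Two feasible points (nonnegative weights) and their midpoint.
p1 = (2.0, 0.0)   # (w, x)
p2 = (0.0, 2.0)
mid = tuple((a + b) / 2.0 for a, b in zip(p1, p2))

lhs = term(*mid)                      # value at the midpoint: 1.0
rhs = 0.5 * (term(*p1) + term(*p2))   # average of endpoint values: 0.0

# Joint convexity in (w, x) would require lhs <= rhs, which fails here.
print(f"f(midpoint) = {lhs}, average of endpoints = {rhs}")
print("jointly convex along this segment?", lhs <= rhs)
```

On this segment the midpoint value (1.0) exceeds the average of the endpoint values (0.0), so the coupled term violates the convexity inequality even with nonnegative weights. Per the abstract quoted above, the paper's result is that this kind of obstruction cannot be removed by any reasonable objective function once spammers must be modeled.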