Carrot and Stick: Eliciting Comparison Data and Beyond

Authors: Yiling Chen, Shi Feng, Fang-Yi Yu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on two real-world datasets further support our theoretical discoveries."
Researcher Affiliation | Academia | Yiling Chen (Harvard University, yiling@seas.harvard.edu); Shi Feng (Harvard University, shifeng@fas.harvard.edu); Fang-Yi Yu (George Mason University, fangyiyu@gmu.edu)
Pseudocode | Yes | Mechanism 1: BPP mechanism for comparison data. Input: a collection of items A, an admissible assignment E, and agents' reports ŝ. For each agent i ∈ N with pair e_i = (a_{u_i}, a_{v_i}) = (a, a′): find an item a″ ∈ A and two agents j and k such that e_j = (a′, a″) and e_k = (a″, a), and pay agent i w_i(ŝ) = U_BPP(ŝ_i, ŝ_j, ŝ_k) = ŝ_i ŝ_j − ŝ_i ŝ_k. (2)
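The bonus-penalty payment in Mechanism 1 can be sketched in a few lines. This is an illustrative implementation, not the authors' released code; it assumes comparison reports are encoded as +1/−1 signals.

```python
def bpp_payment(s_i, s_j, s_k):
    """Bonus-penalty payment U_BPP(s_i, s_j, s_k) = s_i*s_j - s_i*s_k.

    Agent i earns a bonus for agreeing with agent j's report (on the
    pair sharing item a') and a penalty for agreeing with agent k's
    report (on the pair sharing item a). Reports are +1 or -1.
    """
    return s_i * s_j - s_i * s_k


# Worked example: i and j agree, i and k disagree -> maximal payment of 2.
print(bpp_payment(+1, +1, -1))  # 2
```
The payment is always in [−2, 2], and its sign depends only on agreement patterns, not on which absolute comparison outcomes the agents report.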
Open Source Code | Yes | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes]. Justification: The code is uploaded in the supplemental material.
Open Datasets | Yes | "We test our mechanisms on real-world data (sushi preference dataset [26, 27] and Last.fm dataset [8])."
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. The experiments evaluate the payment mechanism on real-world data without model training, so no such splits are needed.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for its experiments. The NeurIPS checklist states: "We believe the computer resources are not relevant to our main contributions."
Software Dependencies | No | The paper does not provide ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | For each agent i, we 1) randomly sample three items a, a′, a″ and two agents j, k; 2) derive agent i's comparison on the first two items (a, a′) from her ranking (and similarly agent j's comparison on (a′, a″) and agent k's comparison on (a″, a)); 3) compute the bonus-penalty payment on these three comparisons; 4) repeat the above procedure 100 times and pay agent i the average of those 100 trials.
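The four-step experiment above can be sketched as a small Monte Carlo simulation. This is a hypothetical reconstruction of the setup described in the paper, not the authors' code: rankings are assumed to be lists of item ids ordered from most to least preferred, and `average_bpp_payment` is an illustrative helper name.

```python
import random


def comparison(ranking, a, b):
    """Derive a +1/-1 comparison signal from a ranking: +1 if item a
    is ranked above (preferred to) item b, else -1."""
    return 1 if ranking.index(a) < ranking.index(b) else -1


def average_bpp_payment(rankings, i, items, trials=100, rng=random):
    """Monte Carlo estimate of agent i's payment under the paper's setup:
    per trial, sample three items (a, a', a'') and two peers (j, k),
    derive the three cycle comparisons from the rankings, and compute
    the bonus-penalty payment s_i*s_j - s_i*s_k; return the average."""
    others = [j for j in range(len(rankings)) if j != i]
    total = 0.0
    for _ in range(trials):
        a, a2, a3 = rng.sample(items, 3)     # items a, a', a''
        j, k = rng.sample(others, 2)         # two peer agents
        s_i = comparison(rankings[i], a, a2)   # agent i on (a, a')
        s_j = comparison(rankings[j], a2, a3)  # agent j on (a', a'')
        s_k = comparison(rankings[k], a3, a)   # agent k on (a'', a)
        total += s_i * s_j - s_i * s_k
    return total / trials
```

Since each trial's payment lies in [−2, 2], the averaged payment does too; averaging over 100 trials reduces the variance introduced by the random item and peer sampling.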