Judgment Aggregation under Issue Dependencies

Authors: Marco Costantini, Carla Groenland, Ulle Endriss

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate the performance of our rules empirically, we apply them to a dataset of crowdsourced judgments regarding the quality of hotels extracted from the travel website Trip Advisor. In our experiments we distinguish between the full dataset and a subset of highly polarised judgments, and we develop a new notion of polarisation for profiles of judgments for this purpose, which may also be of independent interest."
Researcher Affiliation | Academia | Marco Costantini, University of Amsterdam, The Netherlands (marcostantini2008@gmail.com); Carla Groenland, University of Amsterdam, The Netherlands (carla.groenland@gmail.com); Ulle Endriss, University of Amsterdam, The Netherlands (ulle.endriss@uva.nl)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the methodology it describes.
Open Datasets | Yes | "We use a dataset of hotel reviews extracted from Trip Advisor by Wang, Lu, and Zhai (2010), which is available at PrefLib.org, an online reference library of preference data (Mattei and Walsh 2013)." (A hedged data-loading sketch follows the table below.)
Dataset Splits | No | The paper uses a dataset for empirical evaluation but does not specify training, validation, or test splits (e.g., percentages or counts) of the kind used in machine learning experiments.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks used).
Experiment Setup | No | The paper describes the evaluation criteria and performance metrics, but it does not provide experimental setup details such as hyperparameters, optimization settings, or model initialization that are typically reported for machine learning experiments.
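
As a complement to the Open Datasets row, below is a minimal Python sketch of one way to summarise a locally downloaded copy of the PrefLib file before attempting to reproduce the experiments. It is not taken from the paper: the filename trip_advisor.toc is a placeholder, and the assumption that metadata lines start with '#' while data lines follow a "multiplicity: items" layout reflects the current PrefLib file convention, which may not match this particular dataset.

# Minimal sketch (not from the paper): summarise a PrefLib data file that has
# been downloaded manually from PrefLib.org. The path and the file-layout
# assumptions below are placeholders, not details confirmed by the paper.
from collections import OrderedDict

def summarise_preflib_file(path):
    metadata = OrderedDict()
    data_rows = []
    with open(path, encoding="utf-8") as handle:
        for raw_line in handle:
            line = raw_line.strip()
            if not line:
                continue
            if line.startswith("#"):
                # Assumed metadata convention: lines such as "# TITLE: Trip Advisor".
                key, _, value = line.lstrip("# ").partition(":")
                metadata[key.strip()] = value.strip()
            else:
                # Assumed data convention: "<multiplicity>: <comma-separated items>".
                count, _, items = line.partition(":")
                data_rows.append((int(count), items.strip()))
    return metadata, data_rows

if __name__ == "__main__":
    # "trip_advisor.toc" is a hypothetical local filename; download the actual
    # file from PrefLib.org first.
    meta, rows = summarise_preflib_file("trip_advisor.toc")
    print("metadata keys:", list(meta.keys()))
    print("distinct data lines:", len(rows))
    print("total multiplicity:", sum(count for count, _ in rows))

Running the script prints the metadata keys, the number of distinct data lines, and the total multiplicity, which gives a quick sanity check on the downloaded file.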