Consensual Affine Transformations for Partial Valuation Aggregation
Authors: Hermann Schichl, Meinolf Sellmann
AAAI 2019, pp. 2612–2619
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical comparisons with other aggregation methods, such as rank-based methods, Kemeny-Young scoring, and a maximum likelihood estimator, show that the new method gives significantly better results in practice. Moreover, the computation is practically affordable and scales well even to larger numbers of experts and objects. We have devised four optimization approaches to deal with the biases introduced by partial valuations commonly occurring in conference reviewing. In a set of extensive experiments, we now evaluate and compare the various proposed formulations of the consensual affine transformation approach with each other and with the most prominent existing techniques for score aggregation. (A hedged code sketch of the affine-consensus idea follows the table.) |
| Researcher Affiliation | Collaboration | Hermann Schichl (University of Vienna, Austria) and Meinolf Sellmann (General Electric); hermann.schichl@univie.ac.at, meinolf@gmail.com |
| Pseudocode | No | The paper describes different formulations (linear, integer, non-linear) and their objectives, but does not provide structured pseudocode or an algorithm block for any of them. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper mentions using a randomized model from (Roos, Rothe, and Scheuermann 2011) and instances derived from real conferences, but it does not provide concrete access information (link, DOI, or specific citation for public availability) for any dataset used. |
| Dataset Splits | No | The paper does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). It describes how instances are generated and how evaluation is performed (e.g., by tracking error in top X% of objects), but not specific data partitioning for reproducibility. |
| Hardware Specification | Yes | All approaches have been implemented in C++ using the Gnu g++ compiler 4.4.5 (Red Hat 4.4.5-6) and were run on Intel Xeon CPU X3430 processors at 2.40GHz. |
| Software Dependencies | Yes | All approaches have been implemented in C++ using the Gnu g++ compiler 4.4.5 (Red Hat 4.4.5-6). Whenever optimization was needed, we used Ilog Cplex 12.6. |
| Experiment Setup | Yes | In this experiment, we set the distortion limits to [-5, 5] for scaling and [-1.8, 1.8] for shifting (which corresponds to 20% of the total score range in each direction). The noise was chosen uniformly at random in [-0.72, 0.72] (or 40% of the maximal shifting distortion). We track this error rate as the percentage of misplaced objects in the top set. In our experiments we choose 25%, but this choice did not affect the results, which were the same for other percentages, both lower and higher. (A hedged simulation sketch of this setup also follows the table.) |
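
The paper formulates consensual affine transformations as linear, integer, and non-linear programs solved with CPLEX; no code accompanies it. Purely as an illustration of the underlying model, in which each expert's partial scores are treated as an affine distortion a_e·v + b_e of a latent consensus valuation v, here is a minimal alternating least-squares sketch. The function name, the squared-error objective, and the alternating scheme are our assumptions, not the authors' formulations.

```python
import numpy as np

def aggregate_affine_consensus(scores, iters=100):
    """Alternating least-squares sketch of affine-consensus aggregation.

    scores: dict mapping (expert, obj) -> observed partial score.
    Model: score ~= a_e * v_obj + b_e, i.e. each expert applies an affine
    distortion (a_e, b_e) to a latent consensus value v_obj.
    """
    experts = sorted({e for e, _ in scores})
    objects = sorted({o for _, o in scores})
    a = {e: 1.0 for e in experts}
    b = {e: 0.0 for e in experts}
    # Initialise the consensus with the per-object mean of the raw scores.
    v = {o: float(np.mean([s for (_, oo), s in scores.items() if oo == o]))
         for o in objects}

    for _ in range(iters):
        # Step 1: re-fit each expert's scale and shift against the current consensus.
        for e in experts:
            pairs = [(v[o], s) for (ee, o), s in scores.items() if ee == e]
            xs = np.array([x for x, _ in pairs])
            ys = np.array([y for _, y in pairs])
            if len(xs) >= 2 and xs.std() > 1e-9:
                a[e], b[e] = np.polyfit(xs, ys, 1)
        # Step 2: re-estimate each object's value from the de-distorted scores.
        for o in objects:
            vals = [(s - b[e]) / a[e]
                    for (e, oo), s in scores.items() if oo == o and abs(a[e]) > 1e-9]
            if vals:
                v[o] = float(np.mean(vals))
    return v, a, b
```

Because the consensus is only identified up to an affine transformation (including a possible sign flip when negative scalings are allowed), a practical solver also needs an anchoring or normalization convention; the paper's optimization formulations address this, while the sketch above does not.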
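
The Experiment Setup row can likewise be mirrored by a small simulation sketch, again under stated assumptions: the numbers of objects, experts, and reviews per object are hypothetical, the score range 0–9 is inferred from 1.8 being 20% of the total range, the scaling interval is the reconstruction quoted above, and the naive per-object averaging baseline is ours, not one of the paper's comparison methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_instance(n_objects=100, n_experts=20, reviews_per_object=3,
                      scale_lim=(-5.0, 5.0), shift_lim=(-1.8, 1.8),
                      noise_lim=(-0.72, 0.72)):
    """Generate partial, affinely distorted, noisy scores (assumed sizes and ranges)."""
    true_vals = rng.uniform(0.0, 9.0, n_objects)   # assumed total score range of 9
    a = rng.uniform(*scale_lim, n_experts)         # per-expert scaling distortion
    b = rng.uniform(*shift_lim, n_experts)         # per-expert shifting distortion
    scores = {}
    for o in range(n_objects):
        for e in rng.choice(n_experts, reviews_per_object, replace=False):
            scores[(int(e), o)] = a[e] * true_vals[o] + b[e] + rng.uniform(*noise_lim)
    return true_vals, scores

def top_set_error(true_vals, est_vals, top_frac=0.25):
    """Percentage of misplaced objects in the top `top_frac` set (25% in the paper)."""
    k = max(1, int(top_frac * len(true_vals)))
    true_top = set(np.argsort(true_vals)[::-1][:k])
    est_top = set(np.argsort(est_vals)[::-1][:k])
    return 100.0 * (1.0 - len(true_top & est_top) / k)

if __name__ == "__main__":
    true_vals, scores = simulate_instance()
    # Naive baseline: per-object mean of the raw scores, ignoring the distortions.
    naive = np.array([np.mean([s for (e, o), s in scores.items() if o == obj])
                      for obj in range(len(true_vals))])
    print("top-25%% error of naive averaging: %.1f%%" % top_set_error(true_vals, naive))
```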