Least Square Calibration for Peer Reviews

Authors: Sijun Tan, Jibang Wu, Xiaohui Bei, Haifeng Xu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On our synthetic dataset, we empirically demonstrate that our algorithm consistently outperforms the baseline, which selects top papers based on the highest average ratings."
Researcher Affiliation | Academia | Sijun Tan (Department of Computer Science, University of Virginia, Charlottesville, VA 22903; st8eu@virginia.edu); Jibang Wu (Department of Computer Science, University of Virginia, Charlottesville, VA 22903; jw7jb@virginia.edu); Xiaohui Bei (School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371; xhbei@ntu.edu.sg); Haifeng Xu (Department of Computer Science, University of Virginia, Charlottesville, VA 22903; hx4ad@virginia.edu)
Pseudocode | Yes | Algorithm 1: Repeat-Union
Open Source Code | Yes | "The source code can be found at https://github.com/lab-sigma/lsc"
Open Datasets | Yes | "To be more realistic, the distribution parameters of our synthetic data are chosen based on ICLR 2019 review scores [1]. ... In addition, we include an experiment on a real-world dataset [22] in Section 5.2."
Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or counts) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the Gurobi Optimizer [11] but does not provide a version number for this dependency.
Experiment Setup | Yes | "Parameters are set as N = 1000, M = 1000, k = 5. We compare two settings: (1) σ = 0 (noiseless case); (2) σ = 0.5 (noisy case)."
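
The Research Type and Experiment Setup rows together describe the core experiment: generate N = 1000 papers scored by M = 1000 reviewers with k = 5 reviews per paper at noise level σ, then compare a calibration method against the average-rating baseline. The sketch below is a minimal illustration of that comparison, not the paper's Repeat-Union algorithm: the additive reviewer-bias model, the score distributions, the accepted-set size k_top, and the use of SciPy's lsqr (standing in for the Gurobi Optimizer cited in the Software Dependencies row) are all assumptions made for this example.

```python
# Minimal sketch of the experimental comparison under an assumed additive
# reviewer-bias model. This is NOT the paper's Repeat-Union algorithm.
# N, M, k, and sigma follow the Experiment Setup row; k_top and the
# quality/bias distributions are illustrative assumptions.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
N, M, k, sigma = 1000, 1000, 5, 0.5   # papers, reviewers, reviews per paper, noise
k_top = 100                           # size of the "accepted" set (assumption)

true_quality = rng.normal(0.0, 1.0, size=N)   # latent paper quality
reviewer_bias = rng.normal(0.0, 1.0, size=M)  # additive per-reviewer miscalibration

# Random assignment: each paper is scored by k distinct reviewers.
paper_idx, reviewer_idx, score = [], [], []
for i in range(N):
    for j in rng.choice(M, size=k, replace=False):
        paper_idx.append(i)
        reviewer_idx.append(j)
        score.append(true_quality[i] + reviewer_bias[j] + rng.normal(0.0, sigma))
paper_idx = np.array(paper_idx)
reviewer_idx = np.array(reviewer_idx)
score = np.array(score)

# Baseline: rank papers by their raw average rating.
baseline = np.bincount(paper_idx, weights=score, minlength=N) / k

# Least-squares calibration (illustrative): jointly fit paper scores x and
# reviewer offsets b minimizing the sum over observed (i, j) of
# (s_ij - x_i - b_j)^2. The common-shift ambiguity (x + c, b - c) does not
# affect the ranking of the recovered paper scores.
n_obs = len(score)
rows = np.concatenate([np.arange(n_obs), np.arange(n_obs)])
cols = np.concatenate([paper_idx, N + reviewer_idx])
A = coo_matrix((np.ones(2 * n_obs), (rows, cols)), shape=(n_obs, N + M)).tocsr()
calibrated = lsqr(A, score)[0][:N]  # first N coordinates are the paper scores

def topk_recovery(estimate):
    """Fraction of the true top-k_top papers recovered by ranking on `estimate`."""
    truth = set(np.argsort(-true_quality)[:k_top])
    guess = set(np.argsort(-estimate)[:k_top])
    return len(truth & guess) / k_top

print(f"avg-rating baseline  top-{k_top} recovery: {topk_recovery(baseline):.2f}")
print(f"least-squares scores top-{k_top} recovery: {topk_recovery(calibrated):.2f}")
```

In this toy model, calibration removes the per-reviewer offsets that distort the raw-average baseline (exactly so when σ = 0), which mirrors the qualitative claim quoted in the Research Type row.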