Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation

Authors: Jing Li, Rafal K. Mantiuk, Junle Wang, Suiyi Ling, Patrick Le Callet

NeurIPS 2018 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed method has been validated on both simulated and real-world datasets, where it shows higher preference aggregation ability than the state-of-the-art methods.
Researcher Affiliation | Collaboration | Jing Li (LS2N/IPI Lab, University of Nantes); Rafal K. Mantiuk (Computer Laboratory, University of Cambridge); Junle Wang (Turing Lab, Tencent Games); Suiyi Ling and Patrick Le Callet (LS2N/IPI Lab, University of Nantes)
Pseudocode | Yes | Algorithm 1: Hybrid-MST sampling algorithm
Open Source Code | Yes | Source code is publicly available on GitHub: https://github.com/jingnantes/hybrid-mst
Open Datasets | Yes | Video Quality Assessment (VQA) dataset: a complete and balanced pairwise dataset from [38]. ... Image Quality Assessment (IQA) dataset: a complete but imbalanced dataset from [26].
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits in the conventional machine learning sense. It describes Monte Carlo simulations and uses complete real-world datasets for evaluating aggregation performance, rather than splitting a dataset into distinct sets for model training, validation, and testing.
Hardware Specification | Yes | All computations are done using MATLAB R2014b on a MacBook Pro laptop with a 2.5 GHz Intel Core i5 and 8 GB of memory.
Software Dependencies | Yes | All computations are done using MATLAB R2014b on a MacBook Pro laptop...
Experiment Setup | No | The paper describes aspects of the experimental setup, such as the number of simulated objects, the noise distribution, and the threshold for switching between GM and MST methods. However, it does not provide specific hyperparameter values like learning rates, batch sizes, or optimizer settings, which are common details in experimental setup descriptions.
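The Pseudocode row above references Algorithm 1, the Hybrid-MST sampling procedure, which selects batches of pairwise comparisons via a minimum spanning tree over an information-gain-weighted graph. The sketch below is a hedged illustration of that MST batch-selection idea, not the paper's implementation: the `utility` function is a simplified stand-in for the paper's expected-information-gain computation under a Bradley-Terry model, and the tree is built with plain Kruskal's algorithm.

```python
import math
from itertools import combinations

def utility(si, sj):
    # Simplified proxy for expected information gain under a
    # Bradley-Terry-style model: comparisons between objects with
    # similar latent scores are the most informative.
    p = 1.0 / (1.0 + math.exp(-(si - sj)))      # P(object i beats object j)
    # Binary entropy of the comparison outcome; maximal when p is near 0.5.
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def mst_batch(scores):
    # Select a batch of n-1 pairwise comparisons as the maximum spanning
    # tree of the complete utility-weighted graph (Kruskal + union-find).
    n = len(scores)
    edges = sorted(
        ((utility(scores[i], scores[j]), i, j)
         for i, j in combinations(range(n), 2)),
        reverse=True)                            # highest utility first

    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path compression
            x = parent[x]
        return x

    batch = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                             # skip edges that form cycles
            parent[ri] = rj
            batch.append((i, j))
    return batch

# Example: 5 objects with current latent-score estimates;
# the batch contains n-1 = 4 comparisons spanning all objects.
print(mst_batch([0.1, 0.4, 0.5, 1.2, 2.0]))
```

In the paper's hybrid scheme, this MST-based batch mode is used early on, when many pairs are informative, and the method switches to picking the single global-maximum (GM) pair once scores stabilize; the switching threshold is one of the setup details noted in the table above.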