Preference Inference through Rescaling Preference Learning

Authors: Nic Wilson, Mojtaba Montazery

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this paper we analyse the new preference relation, deriving results regarding rescale-optimality that entail when scaling makes a difference, and that lead to a characterisation that allows computation of preference. Section 2 defines the preference relation, and the notion of rescale-optimality, and gives some basic properties. It can happen that rescaling makes no difference; we show how to determine this in Section 3. In Sections 4 and 5 we give characterisations for rescale-optimality, that lead to a way of computing the relation, which we test out with benchmarks derived from a real ride-sharing dataset in Section 6."
Researcher Affiliation | Academia | Nic Wilson and Mojtaba Montazery, Insight Centre for Data Analytics, School of Computer Science and IT, University College Cork, Ireland ({nic.wilson,mojtaba.montazery}@insight-centre.org)
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm', nor does it present structured steps formatted like code.
Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper mentions using 'a subset of a year's worth of real ridesharing records, provided by a commercial ridesharing system Carma (see http://carmacarpool.com/)', but this URL points to a commercial service rather than a public dataset repository, and no specific citation with authors/year is provided for the dataset.
Dataset Splits | No | The paper states, 'We randomly generate 100 pairs of (γ, β), based on a uniform distribution for each feature,' but it does not specify any training, validation, or test dataset splits (e.g., percentages, sample counts, or cross-validation setup). A sketch of what such sampling might look like follows the table.
Hardware Specification | No | The paper states, 'CPLEX 12.6.2 is used as the solver on a computer facilitated by a Core i7 2.60 GHz processor and 8 GB RAM memory.' While it mentions a 'Core i7' processor, it does not specify the exact model number (e.g., Core i7-XXXX) nor any GPU details, which are necessary for full reproducibility.
Software Dependencies | Yes | "CPLEX 12.6.2 is used as the solver on a computer facilitated by a Core i7 2.60 GHz processor and 8 GB RAM memory."
Experiment Setup | No | The paper describes the general experimental approach in Section 6, but it does not provide specific details such as hyperparameters, optimization settings, or other system-level training configurations.
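As a point of reference for the Dataset Splits row, the following is a minimal sketch of how the reported sampling of "100 pairs of (γ, β), based on a uniform distribution for each feature" might be reproduced. The feature count, sampling range, and random seed below are assumptions for illustration only; the paper does not report them.

```python
import numpy as np

# Hypothetical illustration only: the paper states that 100 (gamma, beta)
# pairs are drawn from a uniform distribution for each feature, but it does
# not report the number of features, the sampling range, or a seed.
rng = np.random.default_rng(seed=0)  # seed assumed for repeatability

NUM_PAIRS = 100       # "100 pairs of (gamma, beta)" as stated in the paper
NUM_FEATURES = 5      # assumed; not reported in the paper
LOW, HIGH = 0.0, 1.0  # assumed per-feature sampling range

# Each pair consists of two weight vectors, one value per feature,
# drawn independently and uniformly at random.
gammas = rng.uniform(LOW, HIGH, size=(NUM_PAIRS, NUM_FEATURES))
betas = rng.uniform(LOW, HIGH, size=(NUM_PAIRS, NUM_FEATURES))

pairs = list(zip(gammas, betas))
print(len(pairs))            # 100 pairs
print(pairs[0][0].shape)     # each vector has NUM_FEATURES entries
```

Because no split percentages or cross-validation setup are reported alongside this sampling procedure, the sketch stops at pair generation and does not attempt to reconstruct the evaluation protocol.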