Learning to Rank Based on Analogical Reasoning

Authors: Mohsen Ahmadi Fahandar, Eyke Hüllermeier

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Based on first experimental results for data sets from various domains (sports, education, tourism, etc.), we conclude that our approach is highly competitive."
Researcher Affiliation | Academia | "Mohsen Ahmadi Fahandar, Eyke Hüllermeier; Department of Computer Science, Paderborn University, Pohlweg 49-51, 33098 Paderborn, Germany; ahmadim@mail.upb.de, eyke@upb.de"
Pseudocode | Yes | Algorithm 1 (Analogy-based Pairwise Preferences, APP) and Algorithm 2 (Rank Aggregation, RA) are provided; a hedged sketch of the APP idea follows the table.
Open Source Code | No | The paper provides a link for the datasets ("available at https://cs.uni-paderborn.de/is/") but no explicit statement or link for open-source code of the described methodology.
Open Datasets | Yes | "The data sets are collected from various domains (e.g., sports, education, tourism) and comprise different types of feature (e.g., numeric, binary, ordinal). Table 1 provides a summary of the characteristics of the data sets." The accompanying footnote gives the location: available at https://cs.uni-paderborn.de/is/
Dataset Splits | Yes | "We fixed these parameters in an (internal) 2-fold cross validation (repeated 5 times) on the training data, using simple grid search on S_v × S_k (i.e., trying all combinations)."
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions methods such as SVM and ERR but does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "Recall that able2rank has two parameters to be tuned: the type of analogical proportion v ∈ S_v, where S_v = {v_A, v_A', v_G, v_MM, v_AE, v_AE'}, and the number k ∈ S_k of relevant proportions considered for estimating pairwise preferences, where S_k = {10, 15, 20}. We fixed these parameters in an (internal) 2-fold cross validation (repeated 5 times) on the training data, using simple grid search on S_v × S_k (i.e., trying all combinations). The combination (v*, k*) with the lowest cross-validated error d_RL is eventually adopted and used to make predictions on the test data (using the entire training data). The complexity parameter C of SVM is fixed in a similar way using an internal cross-validation." A parameter-tuning sketch follows the table.
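The APP row above refers to the paper's Algorithm 1, which estimates pairwise preferences from analogical proportions. Below is a minimal sketch of that idea, assuming the arithmetic proportion v_A(a, b, c, d) = 1 − |(a − b) − (c − d)| on attributes normalized to [0, 1]; the mean aggregation over attributes, the top-k voting scheme, and all function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def arithmetic_proportion(a, b, c, d):
    # Arithmetic analogical proportion v_A on [0, 1]-normalized
    # attribute vectors: per-attribute 1 - |(a - b) - (c - d)|,
    # aggregated here by the mean (an assumption, not from the paper).
    return float(np.mean(1.0 - np.abs((a - b) - (c - d))))

def analogy_pairwise_preference(X_train, order_train, c, d, k=10):
    # Hedged sketch of analogy-based pairwise preference (APP):
    # score the hypothesis "c is preferred to d" by the k training
    # pairs (a, b) with known order that stand in the strongest
    # analogical proportion to (c, d) or (d, c).
    scores = []
    n = len(X_train)
    for i in range(n):
        for j in range(n):
            if i != j and order_train[i][j] == 1:  # item i preferred to item j
                a, b = X_train[i], X_train[j]
                scores.append((arithmetic_proportion(a, b, c, d), 1))  # supports c > d
                scores.append((arithmetic_proportion(a, b, d, c), 0))  # supports d > c
    scores.sort(key=lambda t: t[0], reverse=True)
    top = scores[:k]
    # Fraction of the top-k analogies that vote for c > d.
    return sum(label for _, label in top) / max(len(top), 1)
```

The O(n²) scan over training pairs is written out for clarity; restricting attention to the k most relevant proportions, as the paper describes, is mirrored here by the top-k cut.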
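The tuning protocol quoted in the Experiment Setup row (internal 2-fold cross-validation repeated 5 times, exhaustive grid over S_v × S_k, lowest cross-validated d_RL wins) translates directly into code. A minimal sketch, assuming hypothetical fit_predict and rank_loss callables in place of able2rank training/prediction and the d_RL error measure:

```python
from itertools import product

import numpy as np
from sklearn.model_selection import RepeatedKFold

S_v = ["A", "A'", "G", "MM", "AE", "AE'"]  # candidate analogical proportions
S_k = [10, 15, 20]                          # candidate numbers of relevant proportions

def tune_parameters(X, y, fit_predict, rank_loss):
    # Internal 2-fold cross-validation, repeated 5 times, with a full
    # grid search over S_v x S_k, as described in the paper.
    # `fit_predict(X_tr, y_tr, X_te, v, k)` and `rank_loss(y_true, y_pred)`
    # are hypothetical stand-ins for able2rank and the d_RL measure.
    cv = RepeatedKFold(n_splits=2, n_repeats=5, random_state=0)
    best_params, best_err = None, np.inf
    for v, k in product(S_v, S_k):
        errs = [
            rank_loss(y[te], fit_predict(X[tr], y[tr], X[te], v, k))
            for tr, te in cv.split(X)
        ]
        err = float(np.mean(errs))
        if err < best_err:
            best_params, best_err = (v, k), err
    return best_params  # (v*, k*), then retrain on the entire training data
```

The same loop, with candidate values of C in place of (v, k), covers the SVM complexity parameter mentioned in the quote.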