Algorithm Selection via Ranking

Authors: Richard Oentaryo, Stephanus Daniel Handoko, Hoong Chuin Lau

AAAI 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on the SAT 2012 competition dataset show that our approach yields competitive performance to that of more sophisticated algorithm selection methods." "We evaluate the efficacy of our RAS approach through extensive experiments on the SAT 2012 competition data." |
| Researcher Affiliation | Academia | Living Analytics Research Centre, School of Information Systems, Singapore Management University, Singapore 178902 |
| Pseudocode | Yes | Algorithm 1: SGD Procedure for Ranking Optimization |
| Open Source Code | No | The paper does not explicitly state that its code is open-source or provide a link to it. |
| Open Datasets | Yes | "For our experiments, we use the SAT 2012 datasets supplied by the UBC group, after SATZilla won the SAT 2012 Challenge." (http://www.cs.ubc.ca/labs/beta/Projects/SATzilla) |
| Dataset Splits | Yes | "As our evaluation procedure, we adopt 10-fold cross validation. Specifically, we partition the problem instances (i.e., rows of the matrix) into 10 equal parts, and generate 10 pairs of training and testing data. For each fold, we enforce that 10% of the instances contained in the testing data do not appear in the training data." |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not provide version numbers for any software dependencies used in its implementation or experiments. |
| Experiment Setup | Yes | "We set the parameters of our RAS method as follows: the learning rate η = 10^-2, regularization parameter λ = 10^-4, and maximum iterations T_max = 25. For the RF method, we set the number of trees to 99 as per (Xu et al. 2012), and configured it to be as close as possible to SATZilla." |
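The paper's Algorithm 1 is an SGD procedure for ranking optimization, run with the hyperparameters quoted above (η = 10^-2, λ = 10^-4, T_max = 25). As a rough illustration only, not the paper's actual RAS model, a minimal pairwise-ranking SGD over per-algorithm linear scorers might look like the sketch below; the function names `sgd_rank` and `select`, the linear scoring model, and the logistic pairwise loss are all assumptions for this example.

```python
import numpy as np

def sgd_rank(X, perf, eta=1e-2, lam=1e-4, t_max=25, seed=0):
    """Pairwise-ranking SGD sketch (hypothetical model, not the paper's RAS).

    X:    (n_instances, n_features) instance feature matrix
    perf: (n_instances, n_algos) runtimes, lower is better
    Learns one linear scorer W[k] per algorithm so that, on each
    instance, faster algorithms receive higher scores.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    m = perf.shape[1]
    W = np.zeros((m, X.shape[1]))
    for _ in range(t_max):                        # T_max = 25 as reported
        for i in rng.permutation(n):
            x = X[i]
            for a in range(m):
                for b in range(m):
                    if perf[i, a] < perf[i, b]:   # a beats b on instance i
                        margin = (W[a] - W[b]) @ x
                        # gradient of the logistic pairwise loss
                        g = -1.0 / (1.0 + np.exp(margin))
                        # eta = 1e-2 and L2 weight lam = 1e-4 as reported
                        W[a] -= eta * (g * x + lam * W[a])
                        W[b] -= eta * (-g * x + lam * W[b])
    return W

def select(W, x):
    """Pick the algorithm with the highest learned score for instance x."""
    return int(np.argmax(W @ x))
```

On a toy dataset where one algorithm dominates, the learned scorers should rank that algorithm first for every instance.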
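The evaluation protocol quoted in the Dataset Splits row (partition the problem instances into 10 equal parts and form 10 train/test pairs) can be sketched as follows; the function name and the shuffling seed are illustrative, and the paper does not describe its exact partitioning code.

```python
import numpy as np

def ten_fold_splits(n_instances, seed=0):
    """10-fold cross-validation over problem instances (matrix rows):
    shuffle, split into 10 roughly equal parts, and for each fold hold
    out one part (~10% of instances) as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_instances)
    parts = np.array_split(idx, 10)
    splits = []
    for k in range(10):
        test = parts[k]
        train = np.concatenate([parts[j] for j in range(10) if j != k])
        splits.append((train, test))
    return splits
```

Each fold's test instances are disjoint from its training instances, matching the paper's requirement that held-out instances do not appear in the training data.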