Top Rank Optimization in Linear Time

Authors: Nan Li, Rong Jin, Zhi-Hua Zhou

Venue: NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical study shows that the proposed approach is highly competitive to the state-of-the-art approaches and is 10-100 times faster. To evaluate the performance of the Top Push algorithm, we conduct a set of experiments on real-world datasets. Table 2 (left column) summarizes the datasets used in our experiments.
Researcher Affiliation | Academia | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824; {lin,zhouzh}@lamda.nju.edu.cn, rongjin@cse.msu.edu
Pseudocode | Yes | Algorithm 1: The Top Push Algorithm (a hedged sketch of the objective this algorithm optimizes appears after the table)
Open Source Code | No | The paper does not provide any links to its source code or explicitly state that its code is publicly available.
Open Datasets | Yes | Table 2 (left column) summarizes the datasets used in our experiments. Some of them were used in previous studies [1, 31, 3], and others are larger datasets from different domains.
Dataset Splits | Yes | In each trial, the dataset is randomly divided into two subsets: 2/3 for training and 1/3 for test. For all algorithms, we set the precision parameter ϵ to 10^-4, choose other parameters by 5-fold cross validation (based on the average value of Pos@Top) on the training set, and perform the evaluation on the test set.
Hardware Specification | Yes | All experiments are run on a machine with two Intel Xeon E7 CPUs and 16GB memory.
Software Dependencies | Yes | We implement Top Push and Infinite Push using MATLAB, implement AATP using CVX [14], and use LIBLINEAR [11] for LR and cs-SVM... Reference [14] is CVX: Matlab software for disciplined convex programming, version 2.1.
Experiment Setup | Yes | On each dataset, experiments are run for thirty trials. In each trial, the dataset is randomly divided into two subsets: 2/3 for training and 1/3 for test. For all algorithms, we set the precision parameter ϵ to 10^-4, choose other parameters by 5-fold cross validation (based on the average value of Pos@Top) on the training set, and perform the evaluation on the test set. (A hedged sketch of this protocol appears after the table.)
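
The paper's pseudocode (Algorithm 1) is not reproduced on this page. As a rough, hedged illustration of what the Top Push method optimizes, the NumPy sketch below assumes the paper's bipartite-ranking setup: every positive instance should be scored above the highest-scoring negative, with a convex surrogate loss averaged over the positives. The squared-hinge surrogate, the function name top_push_objective, and the regularization parameter lam are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Top Push-style objective, NOT the authors' MATLAB code.
# The max over negatives is computed once, so one evaluation costs O((m + n) d),
# which is where the "linear time" in the paper's title comes from.
import numpy as np

def top_push_objective(w, X_pos, X_neg, lam=1e-2):
    """lam/2 * ||w||^2 + mean_i loss(max_j w.x_j_neg - w.x_i_pos), squared hinge assumed."""
    s_pos = X_pos @ w                    # scores of the m positive instances
    top_neg = np.max(X_neg @ w)          # score of the highest-ranked negative (single pass)
    hinge = np.maximum(0.0, 1.0 + top_neg - s_pos)
    return 0.5 * lam * (w @ w) + np.mean(hinge ** 2)
```

The sketch only evaluates the primal loss in one pass over the data; the paper itself works with a dual formulation and a dedicated solver, which this illustration does not attempt to reproduce.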
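
For the Experiment Setup row, a hedged sketch of the evaluation loop it describes: thirty trials, a random 2/3 / 1/3 train/test split per trial, and Pos@Top (the fraction of positive instances ranked above the highest-ranked negative) measured on the held-out third. The fit callback standing in for model training with 5-fold hyperparameter selection, and all names below, are assumptions for illustration only.

```python
# Hedged sketch of the evaluation protocol quoted above; not from the paper's code.
import numpy as np

def pos_at_top(scores_pos, scores_neg):
    """Fraction of positives scored strictly above the highest-scoring negative."""
    return float(np.mean(scores_pos > np.max(scores_neg)))

def run_trials(X, y, fit, n_trials=30, seed=0):
    """fit(X_train, y_train) returns a scoring function; yields one Pos@Top value per trial."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_trials):
        perm = rng.permutation(len(y))
        cut = int(round(2 / 3 * len(y)))     # 2/3 for training, 1/3 for test
        tr, te = perm[:cut], perm[cut:]
        score = fit(X[tr], y[tr])            # e.g. wraps 5-fold CV over hyperparameters
        s = score(X[te])
        results.append(pos_at_top(s[y[te] == 1], s[y[te] == 0]))
    return np.array(results)
```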