Unified Locally Linear Classifiers With Diversity-Promoting Anchor Points

Authors: Chenghao Liu, Teng Zhang, Peilin Zhao, Jianling Sun, Steven Hoi

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments showed that DU-LLSVM consistently surpassed several state-of-the-art methods that use either a predefined local coding scheme (e.g., LLSVM) or supervised anchor point learning (e.g., SAPL-LLSVM). |
| Researcher Affiliation | Collaboration | Chenghao Liu (1,2), Teng Zhang (1,3), Peilin Zhao (4), Jianling Sun (1,3), Steven C.H. Hoi (2). 1: School of Computer Science and Technology, Zhejiang University, China; 2: School of Information Systems, Singapore Management University, Singapore; 3: Alibaba-Zhejiang University Joint Institute of Frontier Technologies, China; 4: School of Software Engineering, South China University of Technology, China |
| Pseudocode | Yes | Algorithm 1: Local Coding Coordinates (LLC) Optimization Algorithm; Algorithm 2: Diversified and Unified Locally Linear SVM (DU-LLSVM). |
| Open Source Code | No | The paper provides no statement or link indicating that source code for the methodology is openly available. |
| Open Datasets | Yes | "We conduct experiments on six real-world datasets which were normalized to have zero mean and unit variance in each dimension. The statistics of the datasets after preprocessing are summarized in Table 1." (phishing, Magic04, IJCNN, w8a, connect-4, Covtype) |
| Dataset Splits | No | The paper states "To make a fair comparison, all the algorithms are repeated over 5 experimental runs of different random permutation," but it does not specify a distinct validation split or a cross-validation strategy for hyperparameter tuning separate from the test-set evaluation. |
| Hardware Specification | No | The paper does not describe the hardware used for its experiments. |
| Software Dependencies | No | The paper does not provide version numbers for any software components or libraries used. |
| Experiment Setup | Yes | "For parameter settings, we performed grid search and cross validation to select the best parameters for each algorithm on the training set." The number of anchor points m was tuned over [10, 20, 50, 100]; the nearest-neighbour parameter in LLSVM and SAPL-LLSVM over [2, 3, 5, 8, 10]; the learning-rate parameter ρ over [0.01, 0.001, 0.0001, 0.00001]; the anchor-point learning rate over [0.01, 0.001, 0.0001]; the Lipschitz-to-noise ratio parameter μ over [0.01, 0.1, 0.5, 1, 10, 100]; and the skip parameter over [10, 100, 1000, 10000]. |
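For readers unfamiliar with the model family under review: a locally linear classifier combines several linear models, weighted by a local coding of the input against a set of anchor points. The sketch below is illustrative only; the paper's LLC optimization (Algorithm 1) and the DU-LLSVM training procedure (Algorithm 2) are more involved, and the softmax-over-distances coding used here is an assumption, not the paper's scheme.

```python
import numpy as np

def local_coding(x, anchors, beta=1.0):
    """Soft local coding weights gamma(x): a softmax over negative
    squared distances to the anchor points (an illustrative stand-in
    for the paper's LLC optimization)."""
    d2 = np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(-beta * d2)
    return w / w.sum()

def predict(x, anchors, W, b, beta=1.0):
    """Locally linear decision value f(x) = sum_k gamma_k(x) (w_k . x + b_k),
    where row k of W and entry k of b form the k-th local linear model."""
    gamma = local_coding(x, anchors, beta)
    return float(gamma @ (W @ x + b))
```

Because the coding weights sum to one, the decision value is a convex combination of the anchor-local linear scores, which is what lets the model fit nonlinear boundaries while staying linear near each anchor.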
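The Open Datasets row quotes the paper's preprocessing: each dimension normalized to zero mean and unit variance. A minimal sketch of that step, assuming (as is common practice, though the paper does not say) that the statistics are computed on the training split only:

```python
import numpy as np

def standardize(X_train, X_test, eps=1e-12):
    """Normalize each dimension to zero mean and unit variance using
    training-split statistics; constant features are left unscaled."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma = np.where(sigma < eps, 1.0, sigma)  # guard against division by zero
    return (X_train - mu) / sigma, (X_test - mu) / sigma
```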
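The Experiment Setup row lists concrete hyperparameter ranges. The exhaustive search the authors describe can be sketched as follows; the ranges come from the paper, but the scoring function is left abstract because the exact cross-validation protocol is unspecified (the nearest-neighbour parameter, which applies only to LLSVM and SAPL-LLSVM, is omitted here):

```python
from itertools import product

# Hyperparameter ranges reported in the paper's experiment setup.
GRID = {
    "m": [10, 20, 50, 100],              # number of anchor points
    "rho": [1e-2, 1e-3, 1e-4, 1e-5],     # learning rate
    "rho_anchor": [1e-2, 1e-3, 1e-4],    # anchor-point learning rate
    "mu": [0.01, 0.1, 0.5, 1, 10, 100],  # Lipschitz-to-noise ratio
    "skip": [10, 100, 1000, 10000],      # skip parameter
}

def grid_search(score_fn, grid=GRID):
    """Exhaustive grid search: score_fn maps a parameter dict to a
    cross-validation score (higher is better) and must be supplied
    by the caller; returns the best parameters and their score."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Note that this grid has 4 × 4 × 3 × 6 × 4 = 1,152 configurations per algorithm, which is why the paper pairs it with cross-validation on the training set rather than the test set.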