Learning with Augmented Class by Exploiting Unlabeled Data

Authors: Qing Da, Yang Yu, Zhi-Hua Zhou

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on diverse datasets show the effectiveness of the proposed approach: "To validate the effectiveness of LACU-SVM, we conduct experiments on benchmark datasets from several diverse domains."
Researcher Affiliation | Academia | Qing Da, Yang Yu, Zhi-Hua Zhou; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; {daq, yuy, zhouzh}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1: LACU-SVM Training Algorithm
Open Source Code | No | The paper mentions using "the implementations of OVR-SVM, OC-SVM and MOC-SVM in the LIBSVM software (Chang and Lin 2011), and the implementations of 1-vs-Set Machine and iForest from the code released by the corresponding authors," but it provides no explicit statement of, or link to, open-source code for the proposed method (LACU-SVM) itself.
Open Datasets | Yes | Experiments are conducted on the MNIST handwritten digit dataset, 20 Newsgroups, and the Caltech101 dataset (Fei-Fei, Fergus, and Perona 2007), all of which are well-known, publicly available datasets.
Dataset Splits | No | The paper specifies training and test sizes (e.g., for MNIST, "The number of training data, unlabeled data and test data are 500, 500 and 1000"), but it does not mention a separate validation set or describe how data was split for validation purposes.
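The MNIST partition quoted above (500 labeled, 500 unlabeled, 1000 test, with no validation set described) can be sketched as an index split. This is an illustrative reconstruction, not the authors' code; the function name, the random seed, and the use of a uniform permutation are assumptions.

```python
import numpy as np

def lacu_split(n_samples, n_train=500, n_unlabeled=500, n_test=1000, seed=0):
    """Partition sample indices into labeled, unlabeled, and test sets of the
    sizes reported for MNIST (500 / 500 / 1000). No separate validation split
    is described in the paper, so none is produced here."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)  # shuffle all indices once
    train = idx[:n_train]
    unlabeled = idx[n_train:n_train + n_unlabeled]
    test = idx[n_train + n_unlabeled:n_train + n_unlabeled + n_test]
    return train, unlabeled, test

# Example: split a pool the size of full MNIST (70,000 images).
train, unl, test = lacu_split(70000)
```

The three index sets are disjoint by construction, since they are consecutive slices of one permutation.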
Hardware Specification | No | The paper provides no details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory), nor does it say where or on what kind of machines they were performed.
Software Dependencies | No | The paper mentions using "the implementations of OVR-SVM, OC-SVM and MOC-SVM in the LIBSVM software" but does not specify the LIBSVM version number or any other software library used for its own implementation or experiments.
Experiment Setup | Yes | The Gaussian kernel width γ is fixed to 1/d. For LACU-SVM, s in the ramp loss is set to 0.3, C1 is set to C, C2 is set to C1·L/U, and the number of iterations is set to 10 for all experiments. Without further explanation, the parameters and λ in LACU-SVM are set to 1.3 and 0.1, respectively, by default. For evaluation, the paper focuses on macro-averaged F1, treating the augmented class as the (K + 1)-th class.
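The evaluation metric above, macro-averaged F1 with the augmented class counted as one ordinary class, can be sketched as follows. This is a generic implementation of macro-F1, assuming integer labels 0..K where label K denotes the augmented class; it is not taken from the paper's code.

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight.
    The augmented (unseen) class is treated as the (K+1)-th ordinary class,
    so n_classes = K + 1 and it contributes one term to the average."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```

For example, with K = 2 known classes plus one augmented class (labels 0, 1, 2), perfect predictions yield a macro-F1 of 1.0, and misclassifying every augmented-class instance drags the average down by a full 1/(K+1) share, which is why the metric penalizes ignoring the unseen class.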