Teaching Semi-Supervised Classifier via Generalized Distillation

Authors: Chen Gong, Xiaojun Chang, Meng Fang, Jian Yang

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The superiority of our algorithm to the state-of-the-art methods has also been demonstrated by the experiments on different datasets with various sources of privileged knowledge. |
| Researcher Affiliation | Collaboration | 1) School of Computer Science and Engineering, Nanjing University of Science and Technology; 2) Jiangsu Key Laboratory of Image and Video Understanding for Social Security; 3) Language Technologies Institute, Carnegie Mellon University; 4) Tencent AI Lab |
| Pseudocode | No | The paper describes the proposed method using prose and mathematical equations, but does not include structured pseudocode or an algorithm block (a hedged sketch of the underlying distillation objective follows the table). |
| Open Source Code | No | The paper does not provide concrete access to source code (e.g., a repository link or an explicit code-release statement) for the described methodology. |
| Open Datasets | Yes | Specifically, the ORL dataset [Cai et al., 2006] is employed... we use a recent Wikipedia dataset [Pereira et al., 2014]... Specifically, a very challenging dataset CIFAR100 [Krizhevsky and Hinton, 2009] is employed here |
| Dataset Splits | Yes | In this paper, every compared method is evaluated by the 5-fold cross validation on each dataset, and the average accuracy over the outputs of the five independent runs is reported to assess the performance of a certain algorithm. Therefore, the training set in ORL contains 320 examples, in which we randomly select 80 examples into labeled set L and the remaining 240 examples constitute the unlabeled set U (a split sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions models like AlexNet and VGGNet-16 for feature extraction but does not provide specific version numbers for any software dependencies or libraries used in the experiments (an assumed feature-extraction sketch follows the table). |
| Experiment Setup | Yes | In GDSSL, the parameter λ is set to 0.4 by searching the grid {0.2, 0.4, 0.6, 0.8}. The temperature parameter T is tuned to 0.01. Besides, the trade-off parameters in (6) are α = β = 0.1. For fair comparison, the same 10-NN graph with kernel width ξ = 10 is built for the graph-based methods LapRLS, LPDGL, ReLISH, and GDSSL (a graph-construction sketch follows the table). |
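
Since the paper provides no algorithm block (see the Pseudocode row), the following is a minimal sketch of the generalized-distillation objective the method builds on, as formulated by Lopez-Paz et al. [2016]: a teacher trained with privileged information emits temperature-softened labels, and the student minimizes a λ-blend of the imitation loss and the ordinary supervised loss. λ = 0.4 and T = 0.01 are the values reported in the paper; the function names and the convention of which term λ multiplies are assumptions, and the α- and β-weighted regularization terms of the paper's equation (6) are not reproduced here.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; small T sharpens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(targets, probs, eps=1e-12):
    """Mean cross-entropy between target distributions and predictions."""
    return float(-np.mean(np.sum(targets * np.log(probs + eps), axis=-1)))

def generalized_distillation_loss(student_logits, teacher_logits, y_onehot,
                                  lam=0.4, T=0.01):
    """lam-weighted blend of the imitation loss (teacher soft labels at
    temperature T) and the supervised loss on the true hard labels.
    Which term lam multiplies follows Lopez-Paz et al.'s convention,
    an assumption here."""
    soft_targets = softmax(teacher_logits, T=T)   # teacher's softened outputs
    student_probs = softmax(student_logits)
    imitation = cross_entropy(soft_targets, student_probs)
    supervised = cross_entropy(y_onehot, student_probs)
    return lam * imitation + (1.0 - lam) * supervised
```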
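
The Dataset Splits row pins down the ORL protocol exactly: 5-fold cross-validation over the 400 face images gives 320 training examples per fold, of which 80 are randomly labeled (set L) and 240 left unlabeled (set U). A minimal sketch with scikit-learn, assuming shuffled folds and an arbitrary seed (neither is specified in the paper):

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)      # seed is arbitrary; the paper specifies none
X = np.zeros((400, 1024))           # placeholder for the 400 ORL face images

for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    # Each fold: 320 training examples, 80 held out for testing.
    labeled = rng.choice(train_idx, size=80, replace=False)   # labeled set L
    unlabeled = np.setdiff1d(train_idx, labeled)              # unlabeled set U
    assert len(labeled) == 80 and len(unlabeled) == 240
```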
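
The Software Dependencies row notes that AlexNet and VGGNet-16 serve as feature extractors, with no library or version pinned. One plausible reconstruction, assuming a recent torchvision with ImageNet weights (the paper confirms neither the framework nor the layer used for features):

```python
import torch
from torchvision import models, transforms

# Assumed stack: torchvision's pretrained VGG-16; the paper names the
# architecture only, so the weights and feature layer below are guesses.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    batch = torch.zeros(1, 3, 224, 224)   # stand-in for a preprocessed image
    features = extractor(batch)           # shape (1, 25088)
```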
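
Finally, the Experiment Setup row gives enough detail to sketch the graph construction shared by the graph-based baselines and GDSSL: a 10-NN graph with Gaussian kernel width ξ = 10, plus the grid over which λ was searched. The excerpt does not state whether the kernel is exp(-d²/ξ) or exp(-d²/(2ξ²)), so the form below is an assumption:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_affinity(X, k=10, xi=10.0):
    """Symmetric k-NN graph with Gaussian edge weights.
    k=10 and xi=10 come from the paper; the kernel form is assumed."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nbrs.kneighbors(X)          # column 0 is each point itself
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j, d in zip(idx[i, 1:], dist[i, 1:]):
            W[i, j] = np.exp(-d**2 / (2 * xi**2))
    return np.maximum(W, W.T)               # symmetrize the affinity matrix

lambda_grid = [0.2, 0.4, 0.6, 0.8]           # grid from the paper; 0.4 selected
```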