Classification and Representation Joint Learning via Deep Networks

Authors: Ya Li, Xinmei Tian, Xu Shen, Dacheng Tao

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted on several benchmark image classification datasets, and the results demonstrate the effectiveness of our proposed method."
Researcher Affiliation | Collaboration | CAS Key Laboratory of Technology in Geo-Spatial Information Processing and Application Systems, University of Science and Technology of China, China; UBTECH Sydney Artificial Intelligence Institute, SIT, FEIT, The University of Sydney, Australia
Pseudocode | Yes | Algorithm 1: "Parameter updating algorithm of our proposed co-learning network"
Open Source Code | No | The paper provides no statement or link indicating that source code for the described method is available.
Open Datasets | Yes | "To evaluate the effectiveness of our proposed method, we conduct various experiments on three benchmark datasets: MNIST, SVHN, and CIFAR10."
Dataset Splits | No | The paper specifies training and test sets (e.g., for MNIST, a training set of 60,000 28×28 handwritten digits of 10 classes and a test set of 10,000 samples) and mentions varying the number of training samples, but it does not explicitly describe a separate validation split that could be used for reproduction.
Hardware Specification | No | The paper mentions a GPU only in a general remark about storage limitations; it provides no specific hardware details such as GPU/CPU models, processors, or memory used for the experiments.
Software Dependencies | No | "All experiments are implemented using the CAFFE deep learning framework [Jia et al., 2014]." No version number is provided for Caffe or any other software.
Experiment Setup | Yes | "We use LeNet to conduct all experiments on MNIST." LeNet consists of 2 convolutional layers, each followed by a 2×2 max-pooling layer, and then two fully connected layers. The only preprocessing of the data is a global normalization that scales the pixel values of each image to 0-1. The parameters b and m are introduced for the pairwise loss, and λ is a trade-off parameter. The learning algorithm is presented in Algorithm 1. For SVHN, "We preprocess the images using local contrast normalization. We adopt a network similar to that used in [Hoffer and Ailon, 2015], which consists of 4 convolutional layers and 1 fully connected layer." For CIFAR10, "The images are preprocessed by performing global contrast normalization... Then, ZCA whitening is performed."
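The preprocessing steps quoted above (0-1 global normalization, per-image contrast normalization, and ZCA whitening) can be sketched in NumPy as follows. This is an illustrative sketch only, not the paper's Caffe implementation; the epsilon values are assumptions chosen for numerical stability.

```python
import numpy as np

def global_normalize(images):
    # Scale raw 0-255 pixel values into [0, 1], as described for MNIST.
    return images.astype(np.float64) / 255.0

def contrast_normalize(images, eps=1e-8):
    # Per-image contrast normalization: subtract each image's mean and
    # divide by its standard deviation (eps is an assumed stabilizer).
    flat = images.reshape(len(images), -1).astype(np.float64)
    flat = flat - flat.mean(axis=1, keepdims=True)
    flat = flat / (flat.std(axis=1, keepdims=True) + eps)
    return flat.reshape(images.shape)

def zca_whiten(flat, eps=1e-5):
    # ZCA whitening on (n_samples, n_features) data: decorrelate features
    # while staying as close as possible to the original pixel space.
    mean = flat.mean(axis=0)
    centered = flat - mean
    cov = centered.T @ centered / len(flat)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return centered @ W
```

A typical pipeline for CIFAR10-style data would apply `contrast_normalize` per image and then fit `zca_whiten` on the flattened training set, reusing the same transform on the test set.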