Deep Representation Learning with Target Coding

Authors: Shuo Yang, Ping Luo, Chen Change Loy, Kenneth W. Shum, Xiaoou Tang

AAAI 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments are conducted on popular visual benchmark datasets. We performed two sets of experiments to quantitatively evaluate the effectiveness of target coding." |
| Researcher Affiliation | Academia | Department of Information Engineering, The Chinese University of Hong Kong; Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions "Our implementation is based on Caffe (Jia 2013)" and provides a project website, "http://mmlab.ie.cuhk.edu.hk/projects/Target Coding/", but states "For more technical details of this work, please contact the corresponding author Ping Luo via pluo.lhi@gmail.com" rather than providing direct public access to the code. |
| Open Datasets | Yes | "Three popular benchmark datasets were used, i.e." a variant of the MNIST dataset with irrelevant backgrounds and rotation, STL-10, and CIFAR-100. The paper also shows that the proposed method scales to the 1000-category ImageNet-2012 dataset. |
| Dataset Splits | Yes | "We followed the standard testing protocol and training/test partitions for each dataset." The ImageNet-2012 dataset contains roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. |
| Hardware Specification | No | The paper does not specify any particular hardware components, such as GPU models, CPU types, or memory capacity, used for running the experiments. |
| Software Dependencies | No | The paper states "Our implementation is based on Caffe (Jia 2013)" but does not provide version numbers for Caffe or any other software dependencies. |
| Experiment Setup | No | The paper states "The details of the network parameters are provided in the supplementary material" and mentions setting "hyper-parameters the same and optimally for all methods" without providing the specific values in the main text. |