Amalgamating Knowledge towards Comprehensive Classification

Authors: Chengchao Shen, Xinchao Wang, Jie Song, Li Sun, Mingli Song (pp. 3068-3075)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate the effectiveness of our proposed method, we conduct experiments on several publicly available benchmarks. More experimental results can be found in the supplementary material."
Researcher Affiliation | Academia | "Chengchao Shen,¹ Xinchao Wang,² Jie Song,¹ Li Sun,¹ Mingli Song¹ (¹Zhejiang University, ²Stevens Institute of Technology)"
Pseudocode | Yes | "Algorithm 1: Knowledge Amalgamation from Multiple Teachers"
Open Source Code | No | The paper neither links to a code repository nor states that source code for the method is available.
Open Datasets | Yes | "The first two datasets we adopt, CUB-200-2011 (Wah et al. 2011) and Stanford Dogs (Khosla et al. 2011), are related to animals and the last two, FGVC-Aircraft (Maji et al. 2013) and Cars (Krause et al. 2013), are related to vehicles."
Dataset Splits | No | The paper mentions that it "randomly split their categories" to train the teacher networks, but gives no train/validation/test split percentages or sample counts.
Hardware Specification | Yes | "The proposed method is implemented using PyTorch (Paszke et al. 2017) on a Quadro P5000 16G GPU."
Software Dependencies | No | The paper states the method is "implemented using PyTorch (Paszke et al. 2017)" but specifies no version number for PyTorch or any other software dependency.
Experiment Setup | No | The paper describes the network architecture and training objectives (L2 loss, SGD) but reports no concrete hyperparameter values (learning rate, batch size, number of epochs) in the main text; it defers the detailed configuration to the supplementary material.
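The training objective noted above (an L2 loss minimized with SGD) can be illustrated with a minimal sketch. This is not the authors' code: the linear "teachers" and "student", the dimensions, the learning rate, and the epoch count are all illustrative assumptions. A student covering the union of both teachers' classes is trained to match their concatenated logits, which is the basic amalgamation idea.

```python
import numpy as np

# Toy sketch of knowledge amalgamation: a linear student learns to
# reproduce the concatenated logits of two frozen linear teachers
# via an L2 (mean squared) loss, optimized with plain SGD.
# All sizes and hyperparameters below are illustrative assumptions.

rng = np.random.default_rng(0)

D, C1, C2 = 8, 3, 4              # feature dim; teacher 1 / teacher 2 class counts
W1 = rng.normal(size=(D, C1))    # frozen teacher 1 (toy linear classifier)
W2 = rng.normal(size=(D, C2))    # frozen teacher 2
Ws = np.zeros((D, C1 + C2))      # student spanning the union of classes

lr, epochs = 0.05, 500
X = rng.normal(size=(64, D))     # unlabeled transfer set

for _ in range(epochs):
    target = np.hstack([X @ W1, X @ W2])    # amalgamated teacher logits
    residual = X @ Ws - target
    grad = X.T @ residual / len(X)          # gradient of the L2 objective
    Ws -= lr * grad                         # SGD step

loss = 0.5 * np.mean((X @ Ws - np.hstack([X @ W1, X @ W2])) ** 2)
print(f"final L2 loss: {loss:.6f}")
```

Because the student here is linear and the targets are realizable, the loss drives toward zero; in the paper's setting the same objective is applied to deep networks, where the teachers' knowledge is only approximately amalgamated.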