Knowledge Graph Transfer Network for Few-Shot Recognition

Authors: Riquan Chen, Tianshui Chen, Xiaolu Hui, Hefeng Wu, Guanbin Li, Liang Lin (pp. 10575-10582)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the ImageNet dataset show significant performance improvement compared with current leading competitors. Furthermore, we construct an ImageNet-6K dataset that covers larger-scale categories, i.e., 6,000 categories, and experiments on this dataset further demonstrate the effectiveness of our proposed model.
Researcher Affiliation | Collaboration | Sun Yat-sen University; Dark Matter AI Research
Pseudocode | No | The paper does not include a pseudocode block or a clearly labeled algorithm.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | Extensive experiments on the ImageNet dataset show significant performance improvement compared with current leading competitors. Furthermore, we construct an ImageNet-6K dataset that covers larger-scale categories, i.e., 6,000 categories, and experiments on this dataset further demonstrate the effectiveness of our proposed model.
Dataset Splits | Yes | The dataset covers 1,000 categories from ILSVRC2012 and is divided into 389 base categories and 611 novel categories, where 193 base categories and 300 novel categories are used for cross-validation and the remaining 196 base categories and 311 novel categories are used for testing.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions using the SGD algorithm and ResNet-50 for feature extraction, but does not provide specific software dependencies with version numbers (e.g., PyTorch or TensorFlow versions).
Experiment Setup | Yes | At this stage, the model is trained using the SGD algorithm with a batch size of 256, momentum of 0.9, and weight decay of 0.0005. The learning rate is initialized as 0.1 and is divided by 10 every 30 epochs. ... In this stage, we train the model using the SGD algorithm with a batch size of 1,000, momentum of 0.9, weight decay of 0.0001, and a learning rate of 0.01.
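The dataset-split counts quoted in the table are internally consistent; a quick arithmetic check (not code from the paper, and assuming the quote's unlabeled "196 categories" refers to base categories):

```python
# Sanity-check the ImageNet split counts quoted from the paper.
base_cv, base_test = 193, 196      # base categories: cross-validation / test
novel_cv, novel_test = 300, 311    # novel categories: cross-validation / test

base_total = base_cv + base_test       # should equal the 389 base categories
novel_total = novel_cv + novel_test    # should equal the 611 novel categories

print(base_total, novel_total, base_total + novel_total)  # 389 611 1000
```

The totals recover the stated 389/611 split and the 1,000 ILSVRC2012 categories.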
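The first-stage learning-rate schedule reported above (initialized at 0.1, divided by 10 every 30 epochs) is a standard step decay; a minimal sketch of that schedule follows, with the function name and defaults being illustrative rather than taken from the paper:

```python
def step_decay_lr(epoch, base_lr=0.1, drop_factor=0.1, drop_every=30):
    """Step-decay schedule matching the reported first-stage setup:
    start at base_lr and multiply by drop_factor every drop_every epochs."""
    return base_lr * (drop_factor ** (epoch // drop_every))

# First-stage SGD hyperparameters as reported in the paper:
# batch size 256, momentum 0.9, weight decay 0.0005.
# The second stage instead uses a constant learning rate of 0.01
# (batch size 1,000, momentum 0.9, weight decay 0.0001).
for epoch in (0, 29, 30, 60):
    print(epoch, step_decay_lr(epoch))
```

With these defaults the rate holds at 0.1 through epoch 29, drops to 0.01 at epoch 30, and to 0.001 at epoch 60.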