Knowledge-Embedded Representation Learning for Fine-Grained Image Recognition
Authors: Tianshui Chen, Liang Lin, Riquan Chen, Yang Wu, Xiaonan Luo
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the widely used Caltech UCSD bird dataset demonstrate the superiority of our KERL framework over existing state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Sun Yat-sen University, China; SenseTime Research, China; Guilin University of Electronic Technology, China |
| Pseudocode | No | The paper describes the GGNN propagation process mathematically and provides an overall pipeline diagram, but it does not contain structured pseudocode or algorithm blocks (see the propagation sketch after the table). |
| Open Source Code | No | The paper does not provide any concrete access information (like a repository link or an explicit statement of code release) for its own methodology. |
| Open Datasets | Yes | We evaluate our KERL framework and the competing methods on the Caltech-UCSD bird dataset [Wah et al., 2011] that is the most widely used benchmark for fine-grained image classification. |
| Dataset Splits | No | The dataset covers 200 species of birds, which contains 5,994 images for training and 5,794 for test. While training and testing sizes are given, there is no explicit mention of a separate validation set split with specific sizes or percentages. |
| Hardware Specification | No | The paper mentions implementing models with VGG16-Net and using convolutional neural networks, but it does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using VGG16-Net, a compact bilinear model, SGD, and ADAM, but it does not provide specific version numbers for any software or libraries. |
| Experiment Setup | Yes | The dimension of the hidden state is set to 10 and that of the output feature is set to 5. The iteration time T is set to 5. The KERL framework is jointly trained using the cross-entropy loss. All components of the framework are trained with SGD except GGNN that is trained with ADAM following [Marino et al., 2017]. |
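
As the Pseudocode row notes, the paper gives the GGNN propagation only in mathematical form. The snippet below is a minimal sketch of what such a propagation loop could look like, assuming a PyTorch-style implementation; the class name, the `message`/`readout` layers, and the use of `nn.GRUCell` are illustrative choices, not the authors' code. The dimensions follow the Experiment Setup row (hidden state 10, output feature 5, T = 5).

```python
import torch
import torch.nn as nn

class GGNNPropagation(nn.Module):
    """Toy GGNN-style propagation over a knowledge graph (illustrative, not the authors' code)."""
    def __init__(self, hidden_dim=10, output_dim=5, T=5):
        super().__init__()
        self.T = T                                             # propagation steps (paper: T = 5)
        self.message = nn.Linear(hidden_dim, hidden_dim)       # transforms neighbour hidden states
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)          # gated state update, as in GGNN
        self.readout = nn.Linear(2 * hidden_dim, output_dim)   # per-node output feature (dim 5)

    def forward(self, h0, adjacency):
        # h0:        (num_nodes, hidden_dim) initial node states
        # adjacency: (num_nodes, num_nodes) edge weights of the knowledge graph
        h = h0
        for _ in range(self.T):
            a = adjacency @ self.message(h)   # aggregate messages from neighbouring nodes
            h = self.gru(a, h)                # GRU-gated update of each node's hidden state
        return self.readout(torch.cat([h, h0], dim=-1))
```

The gated update mirrors the standard GGNN formulation: at each of the T steps every node aggregates messages from its neighbours and updates its hidden state through GRU-style gates, and a readout layer then produces the per-node output feature.

The Dataset Splits row reports the standard CUB-200-2011 train/test partition (5,994 / 5,794 images) with no validation set. The official release encodes that partition in `train_test_split.txt`; the short loader below is a hypothetical sketch of how it can be recovered, assuming the dataset's standard directory layout.

```python
# CUB-200-2011 ships "train_test_split.txt" with "<image_id> <is_training_image>" per line
# and "images.txt" mapping ids to relative paths; the root path below is illustrative.
from pathlib import Path

root = Path("CUB_200_2011")
id_to_path = dict(line.split() for line in (root / "images.txt").read_text().splitlines())

train_ids, test_ids = [], []
for line in (root / "train_test_split.txt").read_text().splitlines():
    image_id, is_train = line.split()
    (train_ids if is_train == "1" else test_ids).append(image_id)

train_paths = [root / "images" / id_to_path[i] for i in train_ids]  # training image files
print(len(train_ids), len(test_ids))  # expected: 5994 5794 (no separate validation split)
```

The Experiment Setup row states that the framework is trained jointly with a cross-entropy loss, using SGD for all components except the GGNN, which is trained with ADAM. A hedged sketch of that split optimizer setup is given below, again assuming PyTorch; the stand-in modules and learning rates are illustrative, since the paper does not report them here.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two parameter groups; in the real framework these
# would be the VGG16-based feature/classification branch and the GGNN module.
cnn_branch = nn.Linear(512, 200)    # 200 bird classes in CUB-200-2011
ggnn_branch = nn.GRUCell(10, 10)    # placeholder for the GGNN's gated propagation parameters

# SGD for everything except the GGNN, ADAM for the GGNN (learning rates are
# illustrative; the paper does not report them in this section).
optim_sgd = torch.optim.SGD(cnn_branch.parameters(), lr=1e-3, momentum=0.9)
optim_adam = torch.optim.Adam(ggnn_branch.parameters(), lr=1e-4)

# The whole framework is optimized jointly under a single cross-entropy objective.
criterion = nn.CrossEntropyLoss()
```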
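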
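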