Fine-grained Image Classification by Visual-Semantic Embedding
Authors: Huapeng Xu, Guilin Qi, Jingjing Li, Meng Wang, Kang Xu, Huan Gao
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on a challenging large-scale UCSD Bird-200-2011 dataset verify that our approach outperforms several state-of-the-art methods with significant advances. |
| Researcher Affiliation | Academia | 1 Southeast University, Nanjing, China 2 University of Electronic Science and Technology of China, Chengdu, China 3 Xi'an Jiaotong University, Xi'an, China 4 Nanjing University of Posts and Telecommunications, Nanjing, China |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. Procedures are described in narrative text and mathematical equations. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We choose DBpedia [Lehmann et al., 2015] (KB) and English-language Wikipedia (text) from 06.01.2016 as external knowledge. Word2Vec and TransR (described in Section 4) are used to get the class embedding. In this section, we present the experimental settings and show experimental results of our proposed model on the widely-used benchmark Caltech-UCSD Bird-200-2011 [Wah et al., 2011]. |
| Dataset Splits | No | The paper mentions using "Caltech-UCSD Bird-200-2011" and training, but does not specify the exact training/validation/test splits (e.g., percentages or sample counts) used for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU model, CPU type) used for running the experiments. |
| Software Dependencies | No | The paper mentions several deep learning architectures and techniques (e.g., AlexNet, VGG, GoogLeNet, ResNet, Word2Vec, TransR, batch normalization, dropout), but does not provide specific version numbers for any underlying software dependencies (e.g., Python, TensorFlow, PyTorch). |
| Experiment Setup | Yes | We train our model using stochastic gradient descent with mini-batches of size 40 and a learning rate of 0.0015. The hyperparameter α of Eq. 7 is set to be 0.85 with cross-validation. |
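
The reported hyperparameters can be collected into a small configuration sketch. Note that the paper's Eq. 7 is not reproduced in this report, so the convex combination of two loss terms below is an assumption made purely for illustration; `combined_loss` and its arguments are hypothetical names, not from the paper.

```python
# Reported training hyperparameters (from the paper's experiment setup).
ALPHA = 0.85            # weight of Eq. 7, chosen by cross-validation
BATCH_SIZE = 40         # mini-batch size for SGD
LEARNING_RATE = 0.0015  # SGD learning rate

def combined_loss(loss_a: float, loss_b: float, alpha: float = ALPHA) -> float:
    """ASSUMPTION: Eq. 7 is treated here as a convex combination of two
    loss terms weighted by alpha. The paper's actual equation may differ."""
    return alpha * loss_a + (1.0 - alpha) * loss_b
```

A sketch like this makes the reproducibility gap concrete: the scalar settings are stated, but the objective they weight is only described in the paper's equations.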