Two-Stream Contextualized CNN for Fine-Grained Image Classification

Authors: Jiang Liu, Chenqiang Gao, Deyu Meng, Wangmeng Zuo

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | According to our experiments on public datasets, our approach achieves considerable high recognition accuracy without any tedious human's involvements, as compared with the state-of-the-art approaches.
Researcher Affiliation | Academia | 1 Chongqing University of Posts and Telecommunications, Chongqing, China; 2 Xi'an Jiaotong University, Xi'an, China; 3 Harbin Institute of Technology, Harbin, China
Pseudocode | No | The paper describes the method procedurally but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the methodology is openly available.
Open Datasets | Yes | We test our two-stream contextualized CNN framework on three popular datasets: Oxford Flower 102 (Flower102) (Nilsback and Zisserman 2008), Caltech-UCSD Birds 200-2010 (CUB2010) (Welinder et al. 2010) and Caltech-UCSD Birds 200-2011 (CUB2011) (Wah et al. 2011) using their corresponding evaluation metrics.
Dataset Splits | No | The paper mentions using a 'training set' to calculate a mean image, but it does not provide specific percentages or counts for training, validation, or test splits, nor does it cite predefined splits for these datasets.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions deep learning frameworks and algorithms but does not provide specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9').
Experiment Setup | No | The paper describes the overall architecture and training strategy (e.g., 'fine-tuned from vgg-16', 'SGD method') but does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs) or detailed training configurations.
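For context, the kind of explicit experiment specification this criterion asks for might look like the sketch below. The base model and optimizer come from the paper's own statements; every numeric value is a hypothetical placeholder, since the paper reports none of them:

```python
# Hypothetical experiment configuration for the paper's fine-tuning setup.
# "base_model" and "optimizer" reflect what the paper states; all numeric
# values are illustrative assumptions, NOT the authors' actual settings.
config = {
    "base_model": "vgg-16",      # paper: network fine-tuned from vgg-16
    "optimizer": "SGD",          # paper: trained with the SGD method
    "learning_rate": 0.001,      # assumed, not reported
    "momentum": 0.9,             # assumed, not reported
    "weight_decay": 5e-4,        # assumed, not reported
    "batch_size": 32,            # assumed, not reported
    "epochs": 30,                # assumed, not reported
}

# A reproducible paper would pin down each of these values explicitly.
for key, value in config.items():
    print(f"{key}: {value}")
```

Publishing even a short table like this alongside a paper removes the ambiguity flagged in the row above.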