Fine-Grained Recognition: Accounting for Subtle Differences between Similar Classes

Authors: Guolei Sun, Hisham Cholakkal, Salman Khan, Fahad Khan, Ling Shao (pp. 12047-12054)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments are performed on five challenging datasets. Our approach outperforms existing methods using similar experimental setting on all five datasets.
Researcher Affiliation | Collaboration | 1 ETH Zurich, 2 Inception Institute of Artificial Intelligence; guolei.sun@vision.ee.ethz.ch, {hisham.cholakkal, salman.khan, fahad.khan, ling.shao}@inceptioniai.org
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | We comprehensively evaluate our algorithm on CUB-200-2011 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), FGVC Aircraft (Maji et al. 2013), and Stanford Dogs (Khosla et al. 2011), all of which are widely used for fine-grained recognition. ... Furthermore, we also evaluate on the recent datasets for terrain recognition: the GTOS-mobile (Xue, Zhang, and Dana 2018) dataset and the GTOS (Ground Terrain in Outdoor Scenes) (Xue et al. 2017) dataset.
Dataset Splits | No | Table 1 lists #Train and #Test splits for each dataset but does not explicitly mention a separate validation split. The text states, 'We follow the same train/test splits as in the table.'
Hardware Specification | Yes | Our algorithm is implemented using Pytorch (Paszke et al. 2017) using two Tesla V100 GPUs.
Software Dependencies | No | The paper states 'Our algorithm is implemented using Pytorch (Paszke et al. 2017)', but does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | For fair comparisons with other methods (Yang et al. 2018; Wang, Morariu, and Davis 2018), we use an input image resolution of 448 x 448 in all experiments. ... Momentum SGD optimizer is used with an initial learning rate of 0.001, which decays by 0.1 for every 50 epochs. We set weight decay as 10^-4.
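
The training configuration quoted in the Experiment Setup row can be summarized in a short PyTorch sketch. This is a minimal illustration only: the backbone (a ResNet-50 here), the momentum value of 0.9, the resize/crop policy, and the total epoch count are assumptions not stated in the quoted text; the 448 x 448 resolution, initial learning rate, decay schedule, and weight decay follow the values reported above.

    import torch
    import torchvision.transforms as T
    from torchvision import models

    # Placeholder backbone; the paper's actual model is not reproduced here.
    model = models.resnet50(num_classes=200)      # e.g. 200 classes for CUB-200-2011
    model = torch.nn.DataParallel(model).cuda()   # paper reports two Tesla V100 GPUs

    # 448 x 448 input resolution, as stated in the experiment setup;
    # the resize/crop policy itself is an assumption.
    train_transform = T.Compose([
        T.Resize((512, 512)),
        T.RandomCrop(448),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
    ])

    # Momentum SGD with initial learning rate 0.001 and weight decay 10^-4,
    # as reported; the momentum value of 0.9 is a common default, not quoted.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                                momentum=0.9, weight_decay=1e-4)

    # Learning rate decays by a factor of 0.1 every 50 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

    for epoch in range(150):                      # total epoch count is an assumption
        # ... one training pass over the fine-grained dataset would go here ...
        scheduler.step()

StepLR maps directly onto the quoted "decays by 0.1 for every 50 epochs" schedule; any multi-GPU strategy beyond simple DataParallel is likewise unspecified in the paper's text.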