Centralized Ranking Loss with Weakly Supervised Localization for Fine-Grained Object Retrieval
Authors: Xiawu Zheng, Rongrong Ji, Xiaoshuai Sun, Yongjian Wu, Feiyue Huang, Yanhua Yang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of the proposed scheme on widely-used benchmarks including CUB200-2011 and CARS196. We have reported significant gains over the state-of-the-art schemes, e.g., 5.4% over SCDA [Wei et al., 2017] on CARS196, and 3.7% on CUB200-2011. |
| Researcher Affiliation | Collaboration | Xiawu Zheng1,2, Rongrong Ji1,2, Xiaoshuai Sun3, Yongjian Wu4, Feiyue Huang4, Yanhua Yang5; 1 Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University; 2 School of Information Science and Engineering, Xiamen University; 3 Harbin Institute of Technology; 4 Tencent Technology (Shanghai) Co., Ltd; 5 Xidian University |
| Pseudocode | Yes | Algorithm 1: Attractive Feature Extraction |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | Datasets: Both CUB-200-2011 and CARS196 datasets are used in evaluation. The CUB-200-2011 [Wah et al., 2011] contains 200 bird classes with 11,788 images... The CARS196 [Jonathan et al., 2013] contains 196 car classes with 16,185 images. |
| Dataset Splits | No | We employ the first 100 classes (5,864 images) for training and use the remaining 100 classes (5,924 images) for testing... We employ the first 98 classes (8,054 images) for training and the remaining 98 classes (8,131 images) for testing. The paper describes class-disjoint train/test splits but does not explicitly mention a separate validation split (a split of this kind is sketched after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using VGG-16 pre-trained on ImageNet but does not provide version numbers for software dependencies such as the programming language, libraries, or deep learning framework. |
| Experiment Setup | Yes | We use the same hyperparameters in all experiments without specific tuning: a mini-batch size of 60, a margin parameter m of 1, and an initial learning rate of 0.0001, divided by 10 every 100-200 epochs (see the illustrative configuration sketch below). |
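
The splits and hyperparameters quoted in the "Dataset Splits" and "Experiment Setup" rows translate naturally into a training configuration. The sketch below is illustrative only: the paper does not name its framework, optimizer, or exact decay epoch, so PyTorch, SGD, and a step size of 150 epochs are assumptions, and the `split_by_class` helper is hypothetical. Only the quoted values (mini-batch size 60, margin m = 1, initial learning rate 0.0001 divided by 10 every 100-200 epochs, a VGG-16 backbone pre-trained on ImageNet, and class-disjoint splits) come from the paper.

```python
import torch
import torchvision

# Hypothetical helper: class-disjoint retrieval split (e.g., CUB-200-2011 uses the
# first 100 classes for training and the remaining 100 classes for testing).
def split_by_class(labels, num_train_classes=100):
    train_idx = [i for i, y in enumerate(labels) if y < num_train_classes]
    test_idx = [i for i, y in enumerate(labels) if y >= num_train_classes]
    return train_idx, test_idx

# VGG-16 backbone pre-trained on ImageNet, as mentioned in the paper.
backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1")

# Reported hyperparameters: mini-batch size 60, margin m = 1, initial lr 1e-4.
batch_size = 60
margin = 1.0
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-4)  # optimizer choice is an assumption

# "Divided by 10 every 100-200 epochs": modeled here as a step decay;
# step_size=150 is a placeholder within the reported 100-200 range.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=150, gamma=0.1)
```

A step schedule is used only because it matches the "divided by 10" wording; the paper does not state whether the decay was applied on a fixed schedule or adaptively.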