Energy Confused Adversarial Metric Learning for Zero-Shot Image Retrieval and Clustering
Authors: Binghui Chen, Weihong Deng
AAAI 2019, pp. 8134-8141
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been performed on several popular datasets for ZSRC, including CARS (Krause et al. 2013), CUB (Wah et al. 2011), Stanford Online Products and In-Shop (Liu et al. 2016), achieving state-of-the-art performances. |
| Researcher Affiliation | Academia | Binghui Chen, Weihong Deng Beijing University of Posts and Telecommunications |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at http://www.bhchen.cn/. |
| Open Datasets | Yes | our ECAML is evaluated over the widely used benchmarks with the standard zero-shot evaluation protocol (Oh Song et al. 2016) on CARS (Krause et al. 2013), CUB (Wah et al. 2011), Stanford Online Products (Oh Song et al. 2016) and In-Shop (Liu et al. 2016). |
| Dataset Splits | No | The paper describes training and testing sets explicitly (e.g., 'training (seen classes, top fig) and testing (unseen classes, bottom fig) sets'), but does not explicitly mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running experiments. |
| Software Dependencies | No | Our work is implemented by caffe (Jia et al. 2014). (Caffe is mentioned but without a version, and no other software dependencies with versions are listed.) |
| Experiment Setup | Yes | For training, the optimizer is Adam (Kingma and Ba 2014) with learning rate 1e-5 and weight decay 2e-4. The training iterations are 5k (CUB), 10k (CARS), 20k (Stanford Online Products and In-Shop), resp. The new fc-layer is optimized with 10 times the learning rate for fast convergence. Moreover, for fair comparison, we use a minibatch of size 128 throughout our experiments, composed of 64 randomly selected classes with two instances each. |
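
To make the reported setup concrete, below is a minimal sketch of the stated batch composition (64 classes × 2 instances = 128) and optimizer settings. The paper's implementation is in Caffe; this sketch uses PyTorch for brevity, and names such as `PKSampler` and `build_optimizer` are illustrative assumptions, not the authors' code.

```python
# Hedged sketch (not the authors' Caffe implementation): reproduces the
# batch composition and Adam settings quoted in the table above.
import random
from collections import defaultdict

import torch
from torch.utils.data import Sampler


class PKSampler(Sampler):
    """Yield minibatches of P random classes with K instances each.

    P=64, K=2 gives the batch size of 128 described in the paper.
    """

    def __init__(self, labels, p=64, k=2, num_batches=100):
        self.by_class = defaultdict(list)
        for idx, y in enumerate(labels):
            self.by_class[y].append(idx)
        self.classes = list(self.by_class)
        self.p, self.k, self.num_batches = p, k, num_batches

    def __iter__(self):
        for _ in range(self.num_batches):
            batch = []
            # 64 randomly selected classes, two instances per class.
            for y in random.sample(self.classes, self.p):
                batch.extend(random.choices(self.by_class[y], k=self.k))
            yield batch

    def __len__(self):
        return self.num_batches


def build_optimizer(backbone_params, fc_params, lr=1e-5, wd=2e-4):
    """Adam with lr 1e-5 and weight decay 2e-4, as stated in the paper;
    the new fc-layer gets 10x the base learning rate."""
    return torch.optim.Adam(
        [{"params": backbone_params},                # base learning rate
         {"params": fc_params, "lr": 10 * lr}],      # 10x for new fc-layer
        lr=lr, weight_decay=wd)
```

Note that `random.choices` samples with replacement, which tolerates classes holding fewer than K images; a stricter sampler could instead skip or re-draw such classes.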