Contrastive Fine-grained Class Clustering via Generative Adversarial Networks
Authors: Yunji Kim, Jung-Woo Ha
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper presents empirical results in a section titled '4 EXPERIMENTS'. |
| Researcher Affiliation | Industry | Yunji Kim NAVER AI Lab yunji.kim@navercorp.com; Jung-Woo Ha NAVER AI Lab & NAVER CLOVA jungwoo.ha@navercorp.com |
| Pseudocode | No | The paper describes the model architecture and objective functions using equations and tables, but it does not include a block explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code is available at https://github.com/naver-ai/c3-gan. |
| Open Datasets | Yes | We tested our method on 4 datasets that consist of single object images. i) CUB (Wah et al., 2011): 5,994 training and 5,794 test images of 200 bird species. ii) Stanford Cars (Krause et al., 2013): 8,144 training and 8,041 test images of 196 car models. iii) Stanford Dogs (Khosla et al., 2011): 12,000 training and 8,580 test images of 120 dog species. iv) Oxford Flower (Nilsback & Zisserman, 2008): 2,040 training and 6,149 test images of 102 flower categories. |
| Dataset Splits | No | The paper lists training and test image counts for each dataset (e.g., for CUB: '5,994 training and 5,794 test images') but does not explicitly mention a separate validation split or how one is handled during training; see the split sketch after this table. |
| Hardware Specification | Yes | The training was done with 2 NVIDIA-V100 GPUs |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'Inception networks' but does not specify any software names with version numbers for libraries or programming languages used. |
| Experiment Setup | Yes | The weights of the loss terms (λ0, λ1, λ2, λ3, λ4) are set as (5, 1, 1, 0.1, 1), and the temperature τ is set as 0.1. We utilized Adam optimizer of which learning rate is 0.0002 and values of momentum coefficients are (0.5, 0.999). These settings are collected in the configuration sketch below. |
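Since the paper reports only train/test counts and no validation split, the following is a minimal sketch of those published counts plus one common workaround, holding a fraction of the training set out for validation. The `holdout_val` helper and the 10% fraction are assumptions for illustration, not the authors' procedure.

```python
# Published train/test image counts for the four datasets (from the paper).
SPLITS = {
    "CUB":           {"train": 5_994,  "test": 5_794, "classes": 200},
    "Stanford Cars": {"train": 8_144,  "test": 8_041, "classes": 196},
    "Stanford Dogs": {"train": 12_000, "test": 8_580, "classes": 120},
    "Oxford Flower": {"train": 2_040,  "test": 6_149, "classes": 102},
}

def holdout_val(n_train: int, val_fraction: float = 0.1) -> tuple[int, int]:
    """Carve a validation subset out of the training count.

    Hypothetical helper: the paper does not describe a validation split.
    """
    n_val = int(n_train * val_fraction)
    return n_train - n_val, n_val

for name, counts in SPLITS.items():
    n_tr, n_val = holdout_val(counts["train"])
    print(f"{name}: {n_tr} train / {n_val} val / {counts['test']} test")
```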
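The hyperparameters quoted in the Experiment Setup row translate directly into a training configuration. Below is a minimal PyTorch sketch of those reported values; the module names (`generator`, `discriminator`), the `total_loss` helper, and the use of one Adam optimizer per network are placeholders and assumptions, not the authors' code.

```python
import torch

# Values quoted in the paper.
LAMBDAS = (5.0, 1.0, 1.0, 0.1, 1.0)  # loss weights (λ0, λ1, λ2, λ3, λ4)
TAU = 0.1                            # temperature τ
LR = 2e-4                            # Adam learning rate
BETAS = (0.5, 0.999)                 # Adam momentum coefficients

def total_loss(losses):
    """Weighted sum of the five loss terms using the reported λ weights.

    Assumes a simple weighted sum; the paper defines the individual terms.
    """
    return sum(w * l for w, l in zip(LAMBDAS, losses))

# Stand-in modules; the real architectures are described in the paper.
generator = torch.nn.Linear(128, 128)
discriminator = torch.nn.Linear(128, 1)

# One optimizer per network, as is typical for GAN training (an assumption;
# the paper only specifies the Adam settings themselves).
opt_g = torch.optim.Adam(generator.parameters(), lr=LR, betas=BETAS)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=LR, betas=BETAS)
```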