Contrastive Clustering
Authors: Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Joey Tianyi Zhou, Xi Peng
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that CC remarkably outperforms 17 competitive clustering methods on six challenging image benchmarks. |
| Researcher Affiliation | Collaboration | Yunfan Li (1), Peng Hu (1), Zitao Liu (2), Dezhong Peng (1,4,5), Joey Tianyi Zhou (3), Xi Peng (1*). (1) College of Computer Science, Sichuan University, China; (2) TAL Education Group, China; (3) Institute of High Performance Computing, A*STAR, Singapore; (4) Shenzhen Peng Cheng Laboratory, China; (5) College of Computer & Information Science, Southwest University, China |
| Pseudocode | Yes | Algorithm 1: Contrastive Clustering (a loss-function sketch follows the table) |
| Open Source Code | Yes | The code is available at https://github.com/XLearning-SCU/2021-AAAI-CC. |
| Open Datasets | Yes | We evaluate the proposed method on six challenging image datasets. A brief description of these datasets is summarized in Table 1. Both the training and test set are used for CIFAR-10, CIFAR-100 (Krizhevsky and Hinton 2009), and STL-10 (Coates, Ng, and Lee 2011), while only the training set is used for ImageNet-10, ImageNet-Dogs (Chang et al. 2017a), and Tiny-ImageNet (Le and Yang 2015). |
| Dataset Splits | Yes | Both the training and test set are used for CIFAR-10, CIFAR-100 (Krizhevsky and Hinton 2009), and STL-10 (Coates, Ng, and Lee 2011), while only the training set is used for ImageNet-10, ImageNet-Dogs (Chang et al. 2017a), and Tiny-ImageNet (Le and Yang 2015). (...) Table 1 (dataset, split, samples, classes): CIFAR-10, Train+Test, 60,000, 10; CIFAR-100, Train+Test, 60,000, 20; STL-10, Train+Test, 13,000, 10; ImageNet-10, Train, 13,000, 10; ImageNet-Dogs, Train, 19,500, 15; Tiny-ImageNet, Train, 100,000, 200. (See the dataset-split sketch after this table.) |
| Hardware Specification | Yes | The experiments are carried out on an Nvidia TITAN RTX 24G, and it takes about 70 GPU-hours to train the model on CIFAR-10, 90 GPU-hours for CIFAR-100, 160 GPU-hours on STL-10, 20 GPU-hours on ImageNet-10, 30 GPU-hours on ImageNet-Dogs, and 130 GPU-hours on Tiny-ImageNet. |
| Software Dependencies | No | The paper mentions 'Adam optimizer', 'ResNet34', and 'SimCLR' settings, but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | For the instance-level contrastive head, the dimensionality of the row space is set to 128 to retain more information about the images, and the instance-level temperature parameter τI is fixed to 0.5 in all experiments. For the cluster-level contrastive head, the dimensionality of the column space is naturally set to the number of clusters, and the cluster-level temperature parameter τC = 1.0 is used for all datasets. The Adam optimizer with an initial learning rate of 0.0003 is adopted to simultaneously optimize the two contrastive heads and the backbone network. No weight decay or scheduler is used. The batch size is set to 256 due to the memory limitation, and we train the model from scratch for 1,000 epochs to compensate for the performance loss caused by the small batch size, as suggested by Chen et al. (See the hyperparameter sketch after this table.) |
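
As a reading aid for the "Pseudocode" row, here is a minimal PyTorch sketch of the joint instance-level and cluster-level contrastive objective that Algorithm 1 optimizes. The helper names (`info_nce`, `backbone`, `instance_head`, `cluster_head`) are illustrative assumptions rather than the authors' implementation; the exact code is in the linked repository.

```python
# Sketch of the Contrastive Clustering objective: instance-level and
# cluster-level InfoNCE over two augmented views, plus an entropy term
# that discourages cluster collapse. Module names are placeholders.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature):
    """Symmetric InfoNCE where a[i] and b[i] form a positive pair."""
    z = F.normalize(torch.cat([a, b], dim=0), dim=1)    # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = a.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def cc_loss(x_a, x_b, backbone, instance_head, cluster_head, tau_i=0.5, tau_c=1.0):
    h_a, h_b = backbone(x_a), backbone(x_b)               # shared features for both views
    # Instance-level contrast in the 128-d row space.
    loss_ins = info_nce(instance_head(h_a), instance_head(h_b), tau_i)
    # Cluster-level contrast: columns of the soft-assignment matrix act as
    # cluster representations, so the two views are contrasted after transposing.
    y_a = torch.softmax(cluster_head(h_a), dim=1)          # (N, K)
    y_b = torch.softmax(cluster_head(h_b), dim=1)
    loss_clu = info_nce(y_a.t(), y_b.t(), tau_c)
    # Maximize the entropy of the average assignment to avoid trivial solutions.
    p = torch.cat([y_a, y_b], dim=0).mean(dim=0)
    entropy = -(p * torch.log(p.clamp_min(1e-8))).sum()
    return loss_ins + loss_clu - entropy
```

For the "Open Datasets" and "Dataset Splits" rows, the following sketch shows one way to reproduce the quoted split policy with torchvision, assuming CIFAR-10 as the train+test case and an ImageFolder layout for the ImageNet-10 subset; function names and paths are hypothetical.

```python
# Hedged sketch of the split policy: CIFAR-style datasets pool train+test,
# while the ImageNet-style subsets use only their training split.
from torch.utils.data import ConcatDataset
from torchvision import datasets

def cifar10_for_clustering(root, transform):
    # Clustering is unsupervised, so both splits can be pooled (60,000 images).
    train = datasets.CIFAR10(root, train=True, download=True, transform=transform)
    test = datasets.CIFAR10(root, train=False, download=True, transform=transform)
    return ConcatDataset([train, test])

def imagenet10_for_clustering(root, transform):
    # Only the training images of the ImageNet-10 subset are used (13,000 images).
    return datasets.ImageFolder(root, transform=transform)
```

For the "Experiment Setup" row, a short sketch of the quoted hyperparameters, assuming a PyTorch `model` that bundles the ResNet34 backbone and both contrastive heads; the constant names are illustrative.

```python
# Minimal sketch of the reported training configuration.
import torch

FEATURE_DIM = 128    # instance-level row space dimensionality
TAU_INSTANCE = 0.5   # instance-level temperature
TAU_CLUSTER = 1.0    # cluster-level temperature
BATCH_SIZE = 256     # limited by GPU memory
EPOCHS = 1000        # long schedule compensates for the small batch size

def build_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    # Adam at lr=3e-4 jointly over the backbone and both heads;
    # the paper reports no weight decay and no LR scheduler.
    return torch.optim.Adam(model.parameters(), lr=3e-4)
```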
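The cluster-level head's output dimensionality equals the number of clusters in Table 1 (e.g. 10 for CIFAR-10 and ImageNet-10, 20 for CIFAR-100 superclasses, 200 for Tiny-ImageNet), so the sketches above take the cluster count implicitly from `cluster_head`.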
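These sketches are reading aids under the stated assumptions, not a reproduction of the released code at https://github.com/XLearning-SCU/2021-AAAI-CC.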