You Never Cluster Alone

Authors: Yuming Shen, Ziyi Shen, Menghan Wang, Jie Qin, Philip Torr, Ling Shao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that TCC outperforms the state-of-the-art on challenging benchmarks.
Researcher Affiliation | Collaboration | Yuming Shen (1), Ziyi Shen (2), Menghan Wang (3), Jie Qin (4), Philip H. S. Torr (1), and Ling Shao (5); 1: University of Oxford, 2: University College London, 3: eBay, 4: Nanjing University of Aeronautics and Astronautics, 5: Inception Institute of Artificial Intelligence
Pseudocode | Yes | Algorithm 1: Training Algorithm of TCC
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper.
Open Datasets | Yes | The experiments are conducted on five benchmark datasets, including CIFAR-10/100 [44], ImageNet-10/Dog [9], and STL-10 [14].
Dataset Splits | Yes | Since most existing works have pre-defined cluster numbers, we adopt this practice and follow their training/test protocols [29, 33, 61, 73].
Hardware Specification | Yes | We train TCC for at least 1,000 epochs on a single NVIDIA V100 GPU.
Software Dependencies | No | TCC is implemented with the deep learning toolbox TensorFlow [1]. (TensorFlow is mentioned, but no version number is specified, nor are the versions of any other libraries.)
Experiment Setup | Yes | We fix the contrastive temperature τ = 1, while using a slightly lower λ = 0.8 for the Gumbel softmax trick [32, 57]... We implement a fixed-length instance-level memory bank Q with a size of J = 12,800... The size of the cluster-level memory bank P is set to L = 100 K... We have α = 0.5... The choice of batch size... We set it to 32 K by default... We employ the Adam optimizer [38] with a default learning rate of 3 × 10⁻³, without learning rate scheduling. We train TCC for at least 1,000 epochs.
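
The Experiment Setup row quotes concrete hyper-parameters, but no code is released. As a rough, non-authoritative sketch of how those quoted values could be wired up in TensorFlow (the toolbox named in the Software Dependencies row), the snippet below uses the Adam learning rate of 3 × 10⁻³, λ = 0.8 as a Gumbel-softmax temperature, and τ = 1 in a generic InfoNCE-style contrastive loss. The function names (gumbel_softmax, instance_contrastive_loss) and the way negatives are supplied are illustrative assumptions, not the authors' implementation or their Algorithm 1.

```python
import tensorflow as tf

# Quoted hyper-parameters from the Experiment Setup row.
TAU = 1.0             # contrastive temperature tau = 1
LAMBDA_GUMBEL = 0.8   # Gumbel-softmax temperature lambda = 0.8
LEARNING_RATE = 3e-3  # Adam, no learning-rate scheduling

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)


def gumbel_softmax(logits, temperature=LAMBDA_GUMBEL):
    """Soft, differentiable sample from a categorical distribution via the
    Gumbel-softmax trick referenced in the experiment-setup quote."""
    uniform = tf.random.uniform(tf.shape(logits), minval=0.0, maxval=1.0)
    gumbel_noise = -tf.math.log(-tf.math.log(uniform + 1e-20) + 1e-20)
    return tf.nn.softmax((logits + gumbel_noise) / temperature, axis=-1)


def instance_contrastive_loss(z_anchor, z_positive, z_negatives, tau=TAU):
    """Generic InfoNCE-style loss with temperature tau; in a TCC-style setup
    the negatives would come from an instance-level memory bank (assumption)."""
    z_anchor = tf.math.l2_normalize(z_anchor, axis=-1)
    z_positive = tf.math.l2_normalize(z_positive, axis=-1)
    z_negatives = tf.math.l2_normalize(z_negatives, axis=-1)
    pos = tf.reduce_sum(z_anchor * z_positive, axis=-1, keepdims=True) / tau
    neg = tf.matmul(z_anchor, z_negatives, transpose_b=True) / tau
    logits = tf.concat([pos, neg], axis=-1)
    # The positive similarity sits at index 0 of each row.
    labels = tf.zeros(tf.shape(z_anchor)[:1], dtype=tf.int32)
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                       logits=logits))


# Example: soft assignments over 10 hypothetical clusters for a batch of 32.
logits = tf.random.normal([32, 10])
assignments = gumbel_softmax(logits)  # shape (32, 10), each row sums to 1
```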
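
On the data side, all five benchmarks are publicly available. As a small illustration of the standard train/test protocol mentioned in the Dataset Splits row, the snippet below loads CIFAR-10 through tf.keras.datasets with its usual 50,000/10,000 split; how TCC combines or preprocesses these splits is not detailed in the quotes above, so this is only a loading example.

```python
import tensorflow as tf

# CIFAR-10: standard 50,000-image training split and 10,000-image test split.
# The other benchmarks (CIFAR-100, STL-10, ImageNet-10/Dog) are also public
# but are distributed and loaded through different channels.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

print(x_train.shape, x_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)
```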