Contextually Affinitive Neighborhood Refinery for Deep Clustering

Authors: Chunlin Yu, Ye Shi, Jingya Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 Experiments (4.1 Datasets and Settings, 4.2 Implementations, 4.3 Comparison with the State-of-the-Art, 4.4 Ablation Study); Table 1: Clustering result comparison (in percentage %) with the state-of-the-art methods on five benchmarks.
Researcher Affiliation | Academia | 1 ShanghaiTech University, 2 Shanghai Engineering Research Center of Intelligent Vision and Imaging
Pseudocode | Yes | Algorithm 1: The proposed algorithm CoNR
Open Source Code | Yes | Code is available at: https://github.com/cly234/DeepClustering-ConNR
Open Datasets | Yes | CIFAR-10 [22], CIFAR-20 [22], STL-10 [7], ImageNet-10 [4], ImageNet-Dogs [4].
Dataset Splits | No | For the dataset split, both train and test data are used for CIFAR-10 and CIFAR-20, both labeled and unlabeled data are used for STL-10, and only the training data of ImageNet-10 and ImageNet-Dogs are used, which is strictly the same setting as [17, 38, 23, 24]. The paper does not explicitly state a validation split.
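As an illustration of the split described above, a minimal sketch assuming torchvision's standard dataset classes; the root paths and the ImageFolder layout are placeholders, and whether the STL-10 "labeled" portion means the train split, the test split, or both is an assumption (both are included here). This is not the authors' data-loading code.

```python
from torch.utils.data import ConcatDataset
from torchvision import datasets

# CIFAR-10: train and test portions are merged for clustering.
# (CIFAR-20 is typically CIFAR-100 with its 20 coarse labels; omitted here.)
cifar10_full = ConcatDataset([
    datasets.CIFAR10(root="data", train=True, download=True),
    datasets.CIFAR10(root="data", train=False, download=True),
])

# STL-10: labeled and unlabeled data are both used (assumption: labeled = train + test).
stl10_full = ConcatDataset([
    datasets.STL10(root="data", split="train", download=True),
    datasets.STL10(root="data", split="test", download=True),
    datasets.STL10(root="data", split="unlabeled", download=True),
])

# ImageNet-10 / ImageNet-Dogs: only training images are used; these subsets are
# not in torchvision, so an ImageFolder over the selected classes is assumed.
imagenet10_train = datasets.ImageFolder(root="data/imagenet10/train")
```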
Hardware Specification | No | The paper does not explicitly state the specific hardware (e.g., CPU or GPU models, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper mentions software components such as ResNet and the SGD optimizer, but does not provide version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | All datasets are trained for 1000 epochs, where the first 800 epochs are trained with the standard BYOL loss L^I_sim, and the remaining 200 epochs are trained with our proposed L^GAF_sim. We adopt the stochastic gradient descent (SGD) optimizer and a cosine decay learning rate schedule with 50 epochs of warmup. The base learning rate is 0.05 with a batch size of 256. ... For group-aware concordance, we set k, k1, k2 to 20, 30, 10 for ImageNet-Dogs and 10, 10, 2 for the other datasets.
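A minimal sketch of the optimization schedule quoted above (50-epoch warmup, cosine decay from a base learning rate of 0.05 over 1000 epochs, and the 800/200-epoch switch from the BYOL loss to the proposed loss). The function name and the linear shape of the warmup are assumptions, not taken from the authors' code.

```python
import math

def lr_at_epoch(epoch, base_lr=0.05, warmup_epochs=50, total_epochs=1000):
    """Assumed schedule: linear warmup for 50 epochs, then cosine decay to zero."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# Epochs 0-799 use the standard BYOL similarity loss L^I_sim;
# epochs 800-999 use the proposed group-aware focalized loss L^GAF_sim.
for epoch in range(1000):
    lr = lr_at_epoch(epoch)
    use_gaf_loss = epoch >= 800
    # ... SGD update with momentum, batch size 256, at this learning rate ...
```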