Top-Down Deep Clustering with Multi-Generator GANs

Authors: Daniel P. M. de Mello, Renato M. Assunção, Fabricio Murai (pp. 7770-7778)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct several experiments to evaluate the proposed method against recent DC methods, obtaining competitive results. Last, we perform an exploratory analysis of the hierarchical clustering tree that highlights how accurately it organizes the data in a hierarchy of semantically coherent patterns."
Researcher Affiliation | Collaboration | Daniel P. M. de Mello (1), Renato M. Assunção (2, 1), Fabricio Murai (1); (1) Universidade Federal de Minas Gerais, (2) Esri Inc.
Pseudocode | Yes | Algorithm 1: Split... Algorithm 2: Raw Split... Algorithm 3: Refinement... Algorithm 4: Train Refin Group... (a hedged illustrative sketch of the split step follows the table)
Open Source Code | Yes | Code is available at github.com/dmdmello/HC-MGAN; supplementary material with implementation details is at arxiv.org/abs/2112.03398.
Open Datasets | Yes | "We consider three datasets: MNIST (LeCun et al. 1998), Fashion MNIST (FMNIST) (Xiao, Rasul, and Vollgraf 2017) and Stanford Online Products (SOP) (Oh Song et al. 2016)." (a dataset-loading sketch follows the table)
Dataset Splits | No | The paper states that "we used all available images for each dataset" and computes metrics on the resulting clustering, implying that the entire dataset is used for both clustering and evaluation. It does not specify explicit train/validation/test splits (e.g., percentages, sample counts, or predefined splits) in the traditional supervised-learning sense. (a metric sketch follows the table)
Hardware Specification | No | The paper does not specify hardware details such as GPU models, CPU types, or memory used for the experiments; it only mentions: "As unsupervised tasks forbid hyperparameter tuning, we used only slightly different tunings for each dataset..."
Software Dependencies | No | The paper does not provide version numbers for any software dependency (e.g., Python, a deep learning framework such as PyTorch or TensorFlow, or other libraries). (a version-recording sketch follows the table)
Experiment Setup | No | The paper mentions "slightly different tunings for each dataset" but does not give concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other training configuration details in the main text. (a hypothetical configuration sketch follows the table)
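
Sketch for the Pseudocode row. The following is a minimal, hypothetical illustration of the top-down split idea behind Algorithms 1-4, not the authors' implementation: a two-generator GAN is trained on the samples of one tree node together with a classifier that predicts which generator produced each fake sample, and the real samples are then routed to the child cluster whose generator the classifier prefers. Architecture sizes, step counts, and the learning rate below are assumptions.

    import torch
    import torch.nn as nn

    def mlp(sizes, final=None):
        # small fully connected stack; the last linear layer gets no ReLU
        layers = []
        for a, b in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        layers = layers[:-1]
        if final is not None:
            layers.append(final)
        return nn.Sequential(*layers)

    def split_node(x, steps=200, z_dim=8, lr=1e-3):
        """Partition the samples x (shape N x D) of one tree node into two children."""
        n, d = x.shape
        gens = [mlp([z_dim, 64, d]) for _ in range(2)]          # two generators
        disc = mlp([d, 64, 1], final=nn.Sigmoid())              # real-vs-fake discriminator
        clf = mlp([d, 64, 2])                                   # which-generator classifier
        opt_dc = torch.optim.Adam(list(disc.parameters()) + list(clf.parameters()), lr=lr)
        opt_g = torch.optim.Adam([p for g in gens for p in g.parameters()], lr=lr)
        bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()
        gen_ids = torch.cat([torch.zeros(n), torch.ones(n)]).long()  # fake-sample provenance
        for _ in range(steps):
            fakes = torch.cat([g(torch.randn(n, z_dim)) for g in gens])
            # discriminator/classifier step: detect fakes, identify their generator
            d_loss = (bce(disc(x), torch.ones(n, 1))
                      + bce(disc(fakes.detach()), torch.zeros(2 * n, 1))
                      + ce(clf(fakes.detach()), gen_ids))
            opt_dc.zero_grad(); d_loss.backward(); opt_dc.step()
            # generator step: fool the discriminator while staying distinguishable,
            # which pushes the two generators toward different modes of the node's data
            fakes = torch.cat([g(torch.randn(n, z_dim)) for g in gens])
            g_loss = bce(disc(fakes), torch.ones(2 * n, 1)) + ce(clf(fakes), gen_ids)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        # route each real sample to the child whose generator the classifier finds more likely
        with torch.no_grad():
            assign = clf(x).argmax(dim=1)
        return x[assign == 0], x[assign == 1]

Applied recursively to each child's samples, this produces a binary clustering tree; the paper's refinement phases (Algorithms 3-4), which re-train on the induced groups, are omitted from the sketch.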
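
Sketch for the Open Datasets row, assuming a PyTorch/torchvision environment (the paper does not name its framework): MNIST and Fashion MNIST can be pulled directly from torchvision, while Stanford Online Products is not bundled with torchvision and would need to be downloaded separately from its project page. Paths and transforms are placeholders.

    from torch.utils.data import ConcatDataset
    from torchvision import datasets, transforms

    tfm = transforms.Compose([transforms.ToTensor()])

    # concatenate train and test portions, since the paper reports using all available images
    mnist_all = ConcatDataset([
        datasets.MNIST("data", train=True, download=True, transform=tfm),
        datasets.MNIST("data", train=False, download=True, transform=tfm),
    ])
    fmnist_all = ConcatDataset([
        datasets.FashionMNIST("data", train=True, download=True, transform=tfm),
        datasets.FashionMNIST("data", train=False, download=True, transform=tfm),
    ])
    print(len(mnist_all), len(fmnist_all))  # 70000 images each when both portions are used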
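
Sketch for the Dataset Splits row. Since no train/validation/test split is reported, clustering quality on the full dataset is typically scored with label-permutation-invariant metrics; the snippet below shows the standard unsupervised clustering accuracy via Hungarian matching plus NMI from scikit-learn, not the authors' evaluation code.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.metrics import normalized_mutual_info_score

    def clustering_accuracy(y_true, y_pred):
        """Accuracy under the best one-to-one mapping between clusters and classes."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        k = max(y_true.max(), y_pred.max()) + 1
        cost = np.zeros((k, k), dtype=np.int64)
        for t, p in zip(y_true, y_pred):
            cost[p, t] += 1                      # co-occurrence counts cluster -> class
        rows, cols = linear_sum_assignment(cost, maximize=True)
        return cost[rows, cols].sum() / len(y_true)

    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [1, 1, 0, 0, 2, 2]                  # same partition, permuted labels
    print(clustering_accuracy(y_true, y_pred))            # 1.0
    print(normalized_mutual_info_score(y_true, y_pred))   # 1.0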
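
Sketch for the Software Dependencies row: a minimal way to record the interpreter and library versions a run actually used. PyTorch/torchvision are assumptions here; the paper does not state which framework or versions it relies on.

    import sys
    import torch
    import torchvision

    # record the environment alongside the results, since the paper omits these versions
    print("python", sys.version.split()[0])
    print("torch", torch.__version__)
    print("torchvision", torchvision.__version__)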
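
Sketch for the Experiment Setup row. The values below are placeholders, not taken from the paper; they only illustrate the kind of per-dataset configuration record (optimizer, learning rate, batch size, epochs) whose absence the row points out.

    import json

    # placeholder values, NOT from the paper: only the shape of a complete setup record
    config = {
        "mnist":  {"optimizer": "Adam", "lr": 2e-4, "batch_size": 64, "epochs_per_split": 50},
        "fmnist": {"optimizer": "Adam", "lr": 2e-4, "batch_size": 64, "epochs_per_split": 50},
        "sop":    {"optimizer": "Adam", "lr": 1e-4, "batch_size": 128, "epochs_per_split": 30},
    }
    with open("experiment_setup.json", "w") as f:
        json.dump(config, f, indent=2)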