Amortized Mixing Coupling Processes for Clustering

Authors: Huafeng Liu, Liping Jing

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To illustrate the superiority of the proposed method, we perform experiments on both synthetic data and real-world data in terms of clustering performance and computational efficiency.
Researcher Affiliation | Academia | (1) Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China; (2) The Department of Mathematics, The University of Hong Kong, Hong Kong SAR, China; huafeng.liu@outlook.com, lpjing@bjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Iterative Unrolling Inference for AMCP
Open Source Code | Yes | The supplementary material contains the code for reproducing the results presented in this paper.
Open Datasets | Yes | Real-world datasets, MNIST [19], Tiny-ImageNet [18] and CIFAR-10 [34], are used to validate the performance.
Dataset Splits | No | For each original dataset, data containing half of the classes is used to generate the training sets, and the remaining half is used to generate the test sets, with no overlap between training classes and test classes.
Hardware Specification | No | The paper mentions that hardware details are in the Appendix, which is not provided in the given text.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | The synthetic data is generated via a 2D Gaussian mixture model (GMM) defined by the following process: α ∼ Exp(1), c_{1:N} ∼ CRP(α), μ_k ∼ N(0, σ_μ^2 I), x_i ∼ N(μ_{c_i}, σ^2 I). We set σ_μ = σ = 10. At each training step, we generate 10 random datasets according to the above generative process. Each dataset contains 200 points on a 2D plane, each sampled from one of 4 Gaussians. ... The average clustering results are recorded over 5 runs with different random parameter initialization. (A code sketch of this generative process is given below.)
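
The following is a minimal, hedged sketch of the synthetic-data generative process quoted in the Experiment Setup row above (α ∼ Exp(1), c_{1:N} ∼ CRP(α), μ_k ∼ N(0, σ_μ^2 I), x_i ∼ N(μ_{c_i}, σ^2 I)). It is not the authors' released code; the function names, the choice of NumPy, and the CRP sampling loop are illustrative assumptions only.

```python
# Illustrative sketch of the quoted generative process; not the paper's implementation.
import numpy as np

def sample_crp(alpha, n, rng):
    """Draw cluster assignments c_1..c_N from a Chinese Restaurant Process with concentration alpha."""
    assignments = [0]
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):      # open a new cluster
            counts.append(1)
        else:                     # join an existing cluster
            counts[k] += 1
        assignments.append(k)
    return np.array(assignments)

def generate_dataset(n=200, sigma_mu=10.0, sigma=10.0, dim=2, rng=None):
    """Generate one dataset: alpha ~ Exp(1), c ~ CRP(alpha),
    mu_k ~ N(0, sigma_mu^2 I), x_i ~ N(mu_{c_i}, sigma^2 I).
    The paper reports 200 points per dataset on a 2D plane, drawn from 4 Gaussians on average."""
    rng = rng or np.random.default_rng()
    alpha = rng.exponential(1.0)
    c = sample_crp(alpha, n, rng)
    n_clusters = c.max() + 1
    mu = rng.normal(0.0, sigma_mu, size=(n_clusters, dim))
    x = mu[c] + rng.normal(0.0, sigma, size=(n, dim))
    return x, c

# Example usage: 10 random datasets per training step, as described in the quote.
datasets = [generate_dataset() for _ in range(10)]
```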