Fast topological clustering with Wasserstein distance
Authors: Tananun Songdechakraiwut, Bryan M Krause, Matthew I Banks, Kirill V Nourski, Barry D Van Veen
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed method is demonstrated to be effective using both simulated networks and measured functional brain networks. |
| Researcher Affiliation | Academia | Tananun Songdechakraiwut, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, USA (songdechakra@wisc.edu); Bryan M. Krause & Matthew I. Banks, Departments of Anesthesiology and Neuroscience, University of Wisconsin-Madison, USA; Kirill V. Nourski, Department of Neurosurgery and Iowa Neuroscience Institute, University of Iowa, USA; Barry D. Van Veen, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, USA |
| Pseudocode | No | The paper describes an iterative algorithm in Section 3.2 but does not present it as formally structured pseudocode or an algorithm block; a hedged sketch of such an iterative procedure is given after this table. |
| Open Source Code | Yes | Code for topological clustering is available at https://github.com/topolearn. |
| Open Datasets | Yes | We evaluate our method using an extended brain network dataset from the anesthesia study reported by Banks et al. (2020). |
| Dataset Splits | No | This paper focuses on clustering, not supervised learning, and thus does not describe traditional train/validation/test splits. It evaluates clustering performance against ground truth labels for the entire dataset used in the experiments. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper lists several external tools and their sources (e.g., GUDHI, B-ADMM, bctnet, GraKeL) that were used for baseline comparisons. However, it does not provide specific version numbers for the core software dependencies or environment (e.g., Python, TensorFlow/PyTorch, NumPy versions) used for their own proposed method. |
| Experiment Setup | Yes | Initial clusters for all methods are selected at random. ... We use µ = 1 and σ = 0.5 universally throughout the study. ... We calculate these performance metrics by running the algorithm for 100 different initial conditions. (See the evaluation sketch after this table.) |
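
Since the paper gives its Section 3.2 algorithm only in prose, the following is a minimal Python sketch of a k-means-style topological clustering loop of the kind described there. It assumes that each network is summarized by a sorted vector of edge-derived values, that the Wasserstein-type distance between two equal-length sorted vectors reduces to an element-wise squared difference, and that a cluster centroid is the element-wise mean of its members' descriptors. The names `topological_descriptor`, `wasserstein_sq`, and `topological_kmeans` are hypothetical and are not the API of the released code at https://github.com/topolearn.

```python
import numpy as np

def topological_descriptor(adjacency):
    """Hypothetical descriptor: the sorted upper-triangular edge weights.

    The paper builds birth/death sets from a graph filtration; sorting the
    edge weights is used here only as a simple stand-in descriptor.
    """
    iu = np.triu_indices_from(adjacency, k=1)
    return np.sort(adjacency[iu])

def wasserstein_sq(desc_a, desc_b):
    """Squared Wasserstein-type distance between equal-length sorted vectors."""
    return float(np.sum((desc_a - desc_b) ** 2))

def topological_kmeans(networks, k, n_iter=50, rng=None):
    """K-means-style clustering on topological descriptors (sketch only)."""
    rng = np.random.default_rng(rng)
    descs = np.stack([topological_descriptor(a) for a in networks])
    labels = rng.integers(k, size=len(descs))          # random initial clusters
    for _ in range(n_iter):
        centroids = []
        for j in range(k):
            members = descs[labels == j]
            if len(members) == 0:                      # re-seed an empty cluster
                members = descs[rng.integers(len(descs))][None]
            centroids.append(members.mean(axis=0))     # element-wise mean of sorted vectors
        centroids = np.stack(centroids)
        # Assign each network to its nearest centroid under the distance above.
        dists = np.array([[wasserstein_sq(d, c) for c in centroids] for d in descs])
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):         # assignments stable: converged
            break
        labels = new_labels
    return labels
```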
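
The experiment setup row states that performance metrics are computed over 100 random initial conditions. The snippet below is a hedged sketch of such an evaluation loop, reusing `topological_kmeans` from the sketch above. The toy data generator, the group means, and the use of clustering purity are assumptions made only to keep the example runnable; they are not the simulation protocol or the metric reported in the paper.

```python
import numpy as np

def purity(labels, truth):
    """Fraction of networks assigned to their cluster's majority class."""
    correct = 0
    for j in np.unique(labels):
        members = truth[labels == j]
        correct += np.bincount(members).max()
    return correct / len(truth)

# Toy placeholder data (assumption): two groups of random symmetric networks
# whose edge-weight means differ, standing in for the paper's simulated
# modular networks (which use mu = 1 and sigma = 0.5).
data_rng = np.random.default_rng(0)
def random_network(n_nodes, mean):
    w = data_rng.normal(mean, 0.5, size=(n_nodes, n_nodes))
    w = np.triu(w, k=1)
    return w + w.T

networks = [random_network(20, 1.0) for _ in range(10)] + \
           [random_network(20, 1.5) for _ in range(10)]
ground_truth = np.array([0] * 10 + [1] * 10)

scores = []
for seed in range(100):                                # 100 random initial conditions
    labels = topological_kmeans(networks, k=2, rng=seed)
    scores.append(purity(labels, ground_truth))
print(f"purity: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```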