High-dimensional Clustering onto Hamiltonian Cycle

Authors: Tianyi Huang, Shenghui Cheng, Stan Z. Li, Zhengjun Zhang

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on six real-world datasets and a COVID-19 dataset to illustrate the effectiveness of our HCHC. The experimental results show the effectiveness of HCHC.
Researcher Affiliation | Academia | (1) School of Engineering, Westlake University, Hangzhou, China; (2) Westlake Institute for Advanced Study, Hangzhou, China; (3) School of Economics and Management, the University of Chinese Academy of Sciences, Beijing, China; (4) School of Computer, Data & Information Sciences, the University of Wisconsin, Madison, USA.
Pseudocode | Yes | The algorithm of GLDC is summarized in Algorithm 1. The whole algorithm of our mapping is summarized in Algorithm 2.
Open Source Code | Yes | The source code can be downloaded from https://github.com/TianyiHuang2022.
Open Datasets | Yes | We use seven datasets including MNIST (Deng, 2012), Fashion (Xiao et al., 2017), USPS (Hull, 1994), Reuters10k (Lewis et al., 2004), HHAR (Stisen et al., 2015), Pendigits (Asuncion, 2007), and BH (Abdelaal et al., 2019) to illustrate the effectiveness of HCHC.
Dataset Splits | No | The paper discusses training and testing but does not explicitly provide the percentages or counts for the training, validation, and test splits needed for reproduction.
Hardware Specification | No | No specific hardware details (such as CPU/GPU models, memory, or cloud instance types) used for running the experiments are mentioned.
Software Dependencies | No | The paper mentions using the Adam optimizer and outlines the autoencoder structure, but does not provide version numbers for the software dependencies or libraries used in the implementation.
Experiment Setup | Yes | The initial β1 is set as 5. As the number of training iterations grows, the magnitudes of Lr and La become smaller and smaller, so a discount factor γ is used to tune the magnitude of β1 at every iteration t as β1 = γ^t · β1, where γ is set as 0.8. β2 is set as 10. σ² is set from {0.05, 0.1, 0.2}. ξ is set from {0.005, 0.05, 0.1, 0.2}. k is set from {3, 4, 5, 30}. The batch size is set as 128. We use the Adam optimizer in our training and the learning rate is set as 0.002. (A configuration sketch follows the table.)
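
The experiment setup row amounts to a training configuration. The following is a minimal sketch of that configuration, assuming a PyTorch implementation; build_model, data_loader, training_loss, and num_iterations are hypothetical placeholders, and only the hyperparameter values are taken from the paper.

    import torch

    # Hyperparameter values reported in the paper
    beta1_init = 5.0     # initial beta_1
    gamma = 0.8          # discount factor applied to beta_1 at iteration t
    beta2 = 10.0
    sigma2 = 0.1         # chosen from {0.05, 0.1, 0.2}
    xi = 0.05            # chosen from {0.005, 0.05, 0.1, 0.2}
    k = 5                # chosen from {3, 4, 5, 30}
    batch_size = 128

    model = build_model()                                # hypothetical autoencoder-based model
    optimizer = torch.optim.Adam(model.parameters(), lr=0.002)

    for t in range(num_iterations):                      # iteration count is not specified in the paper
        beta1 = (gamma ** t) * beta1_init                # beta_1 = gamma^t * beta_1, shrinking with L_r and L_a
        for batch in data_loader:                        # mini-batches of size 128
            loss = training_loss(model, batch, beta1, beta2)  # hypothetical combined objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

The values of σ², ξ, and k are reported as per-dataset choices from the listed sets, so a reproduction would sweep or select them per dataset rather than fixing the single values shown above.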