Gromov-Wasserstein Factorization Models for Graph Clustering
Authors: Hongteng Xu
AAAI 2020, pp. 6478-6485 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our model obtains encouraging results on clustering graphs. |
| Researcher Affiliation | Collaboration | Hongteng Xu (1,2): (1) Infinia ML Inc., (2) Department of ECE, Duke University |
| Pseudocode | Yes | Algorithm 1: GWD^(M)_PPA(C_s, C_t); Algorithm 2: GWD^(M)_BADMM(C_s, C_t); Algorithm 3: GWB^(L)(U_{1:K}, λ, B^(0), μ_b); Algorithm 4: Learning GWF models. (A GWD solver sketch follows the table.) |
| Open Source Code | Yes | The code is at https://github.com/HongtengXu/Relational-Factorization-Model. |
| Open Datasets | Yes | We consider four commonly-used graph datasets (Kersting et al. 2016) in our experiments: the AIDS dataset (Riesen and Bunke 2008), the PROTEIN dataset and its full version PROTEINF (Borgwardt et al. 2005), and the IMDB-B dataset (Yanardag and Vishwanathan 2015). |
| Dataset Splits | Yes | We test different methods based on 10-fold cross-validation. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide hardware details such as GPU model, CPU type, or memory used for the experiments; it only names 'Adam', which is an optimizer, not hardware. |
| Software Dependencies | No | The paper mentions "Adam" as an optimizer, but it does not list software dependencies (e.g., Python, PyTorch, TensorFlow) with version numbers. |
| Experiment Setup | Yes | When implementing our GWF model, we set its hyperparameters as follows: the number of atoms is K = 30; the number of layers in the GWD module is M = 50; the weight of regularizer γ is 0.01 for the PPA-based GWD module and 1 for the BADMM-based GWD module; the number of layers in the GWB module is L = 2. [...] We use Adam (Kingma and Ba 2014) to train our GWF model. The learning rate of our algorithm is 0.05, and the number of epochs is 10. (A training-loop sketch follows the table.) |
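
To make the quoted pseudocode concrete, below is a minimal NumPy sketch of a proximal-point solver for the Gromov-Wasserstein discrepancy between two graphs, in the spirit of Algorithm 1 (GWD_PPA). The function and parameter names (`gwd_ppa`, `outer`, `inner`) are illustrative, not the authors' API; uniform node distributions and the squared loss are assumed.

```python
import numpy as np

def gwd_ppa(Cs, Ct, gamma=0.1, outer=50, inner=5):
    """Proximal-point sketch of the Gromov-Wasserstein discrepancy between
    graphs given their adjacency/distance matrices Cs (n x n) and Ct (m x m).
    Assumes uniform node distributions and the squared loss; in the paper's
    setting the PPA regularizer weight is gamma = 0.01 and outer = M = 50."""
    n, m = Cs.shape[0], Ct.shape[0]
    mu, nu = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    T = np.outer(mu, nu)  # initial coupling between the two node sets
    # Constant part of the GW cost for the squared loss:
    # const_ij = sum_k Cs_ik^2 mu_k + sum_l Ct_jl^2 nu_l
    const = ((Cs ** 2) @ mu)[:, None] + ((Ct ** 2) @ nu)[None, :]
    for _ in range(outer):
        grad = const - 2.0 * Cs @ T @ Ct.T   # gradient of the GW objective at T
        # Proximal step: minimize <grad, T'> + gamma * KL(T' || T) over couplings,
        # solved by Sinkhorn scaling of the kernel T * exp(-grad / gamma).
        G = np.exp(-grad / gamma) * T
        a = np.ones(n)
        for _ in range(inner):
            b = nu / (G.T @ a + 1e-16)
            a = mu / (G @ b + 1e-16)
        T = a[:, None] * G * b[None, :]
    gw = np.sum((const - 2.0 * Cs @ T @ Ct.T) * T)  # final GW discrepancy
    return gw, T
```

Algorithm 2 follows the same outer loop but replaces the proximal KL step with Bregman ADMM updates, and Algorithms 3-4 build on such a solver to compute Gromov-Wasserstein barycenters of the K atoms and to learn the atoms themselves.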
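The 10-fold protocol quoted in the Dataset Splits row can be reproduced with a standard splitter. The snippet below is a generic illustration, not the authors' evaluation code; `load_graphs`, `train_gwf`, and `evaluate` are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

graphs = load_graphs()  # hypothetical loader returning a list of graphs
kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(graphs):
    model = train_gwf([graphs[i] for i in train_idx])        # hypothetical
    scores.append(evaluate(model, [graphs[i] for i in test_idx]))
print(f"10-fold mean: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```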
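Finally, the quoted hyperparameters map onto a training loop roughly as follows. The `GWFModel` class and the data loader are placeholders standing in for the released implementation; only the numeric settings (K = 30, M = 50, L = 2, γ, Adam with learning rate 0.05, 10 epochs) come from the paper.

```python
import torch

# Settings quoted from the paper; everything else here is a placeholder.
K, M, L = 30, 50, 2   # number of atoms, GWD layers, GWB layers
gamma = 0.01          # regularizer weight (0.01 for PPA, 1 for BADMM)

model = GWFModel(num_atoms=K, gwd_layers=M, gwb_layers=L, gamma=gamma)  # hypothetical
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(10):         # the number of epochs is 10
    for batch in loader:        # `loader` yields batches of graphs (assumed)
        loss = model(batch)     # GW-barycenter-based reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```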