Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching

Authors: Hongteng Xu, Dixin Luo, Lawrence Carin

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We compare it with state-of-the-art methods for graph partitioning and matching. All the methods are run on an Intel i7 CPU with 4GB memory. Implementation details and a further set of experimental results are provided in Appendix B." |
| Researcher Affiliation | Collaboration | Hongteng Xu (Infinia ML Inc., Duke University), Dixin Luo (Duke University), Lawrence Carin (Duke University); {hongteng.xu, dixin.luo, lcarin}@duke.edu |
| Pseudocode | Yes | "Algorithms 1 and 2 show the details of our method, where ⊙ and / represent elementwise multiplication and division, respectively." |
| Open Source Code | Yes | "The implementation of our S-GWL method can be found at https://github.com/HongtengXu/s-gwl." |
| Open Datasets | Yes | "The first dataset is the email network from a large European research institution [25]... The second dataset is the interactions among 1,991 villagers in 12 Indian villages [3]... The dataset is available on https://www3.nd.edu/~cone/MAGNA++/... The dataset is available on http://vacommunity.org/VAST+Challenge+2018+MC3..." |
| Dataset Splits | No | The paper describes the datasets used and how some were made noisy, but it does not specify explicit training, validation, or test splits in terms of percentages, counts, or a predefined methodology for reproducibility. |
| Hardware Specification | Yes | "All the methods are run on an Intel i7 CPU with 4GB memory." |
| Software Dependencies | No | The paper notes that "Metis is implemented in the C language while GWL and other methods are based on Python" but does not give version numbers for Python or any other software libraries. |
| Experiment Setup | Yes | "Specifically, we observed in our experiments that the γ in (3) should be set carefully according to observed graphs. Generally, for large-scale graphs we have to use a large γ and solve (3) with many iterations. The a and b in (5) are also significant for the performance of our method. The settings of these hyperparameters and their influences are shown in Appendix B." |
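To make the quoted hyperparameter discussion concrete, here is a minimal NumPy sketch of a proximal-point solver for entropic Gromov-Wasserstein, the family of updates the paper's Algorithms 1 and 2 belong to (note the elementwise multiplication and division in the Sinkhorn step). This is an illustration under the common squared-loss formulation, not the authors' S-GWL implementation; the function name and default values (`gamma`, iteration counts) are assumptions chosen for the example.

```python
import numpy as np

def proximal_gromov_wasserstein(C1, C2, p, q, gamma=0.1,
                                outer_iters=20, sinkhorn_iters=50):
    """Sketch of a proximal-point solver for entropic Gromov-Wasserstein.

    C1, C2 : (n1, n1) and (n2, n2) node-similarity/adjacency matrices.
    p, q   : marginal distributions over the two node sets.
    gamma  : proximal/entropic regularization weight (the paper's gamma;
             larger graphs typically need a larger value and more iterations).
    Returns an (n1, n2) transport coupling T; matchings/partitions are read
    off from its largest entries.
    """
    n1, n2 = len(p), len(q)
    T = np.outer(p, q)  # start from the independent coupling
    # Constant part of the squared-loss GW gradient: f1(C1) p 1^T + 1 (f2(C2) q)^T.
    const = np.outer((C1 ** 2) @ p, np.ones(n2)) \
          + np.outer(np.ones(n1), (C2 ** 2) @ q)
    for _ in range(outer_iters):
        grad = const - 2.0 * C1 @ T @ C2.T  # gradient of the GW objective at T
        # Proximal step: Sinkhorn projections on a kernel built from the
        # current coupling, using elementwise * and / throughout.
        K = T * np.exp(-grad / gamma)
        a = np.ones(n1)
        for _ in range(sinkhorn_iters):
            b = q / (K.T @ a)
            a = p / (K @ b)
        T = a[:, None] * K * b[None, :]
    return T
```

For graph matching, node `i` in the first graph is matched to `argmax` of row `i` of `T`; for partitioning, the second "graph" is replaced by a small disconnected template whose nodes index the clusters.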