Semidefinite Relaxations of the Gromov-Wasserstein Distance
Authors: Junyu Chen, Binh T. Nguyen, Shang Hui Koh, Yong Sheng Soh
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical experiments suggest that the proposed relaxation is strong in that it frequently computes the globally optimal solution. Our Python implementation is available at https://github.com/tbng/gwsdp. ... Section 4: Numerical Experiments with Off-the-shelf Convex Solvers |
| Researcher Affiliation | Academia | Junyu Chen, Binh T. Nguyen, Shang Hui Koh, Yong Sheng Soh. Department of Mathematics, National University of Singapore. chenjunyu@u.nus.edu, binhnt@nus.edu.sg, matsys@nus.edu.sg |
| Pseudocode | Yes | Algorithm 1: Computation of GW-SDP barycenters. Input: dataset {C_k, α_k}_{k=1}^K; weights {λ_k}_{k=1}^K. Initialize C. repeat: for k = 1 to K do π_{sdp,k} ← solve_GW-SDP(C_k, C, α_k, α) end for; update C using (8); until convergence |
| Open Source Code | Yes | Our Python implementation is available at https://github.com/tbng/gwsdp. |
| Open Datasets | Yes | We use a publicly available dataset of triangular meshes (Sumner and Popović, 2004). |
| Dataset Splits | No | The paper does not explicitly state specific train/validation/test dataset splits (e.g., percentages or sample counts) for any of its experiments. |
| Hardware Specification | Yes | Table 1 presents the run-time of the GW-SDP problem in Experiment 1, running on a PC with 8 cores CPU and 32GB of RAM. |
| Software Dependencies | Yes | We solve the GW-SDP instance implemented in CVXPY (Diamond and Boyd, 2016) using the SCS and MOSEK solvers (ApS, 2022; O'Donoghue et al., 2016). ... The MOSEK optimization toolbox for Python manual. Version 10.0. |
| Experiment Setup | No | The paper describes the general setup of different numerical experiments (e.g., matching Gaussian distributions, graph community matching) but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or system-level training settings for the optimization process itself. |