Solving graph compression via optimal transport

Authors: Vikas Garg, Tommi Jaakkola

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted several experiments to demonstrate the merits of our method. Table 1: Description of graph datasets, and comparison of accuracy on test data.
Researcher Affiliation | Academia | Vikas K. Garg, CSAIL, MIT, vgarg@csail.mit.edu; Tommi Jaakkola, CSAIL, MIT, tommi@csail.mit.edu
Pseudocode | Yes | Algorithm 1: Algorithm to compute Euclidean projection on the d-simplex under a diagonal transformation. Algorithm 2: Mirror Prox algorithm to (approximately) find ϵ in relaxation of (9). (A hedged simplex-projection sketch is given after the table.)
Open Source Code | No | No explicit statement or link providing access to the authors' own source code was found.
Open Datasets | Yes | We used several standard graph datasets for our experiments, namely, DHFR [51], BZR-MD [52], MSRC-9, MSRC-21C [53], and Mutagenicity [54].
Dataset Splits | Yes | We partitioned each dataset into multiple train and test sets of varying sizes. Specifically, for each p ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, we divided each dataset randomly into a train set containing a fraction p of the graphs in the dataset, and a test set containing the remaining 1 − p fraction. To mitigate the effects of chance, we formed 5 such independent train-test partitions for each fraction p for each dataset. For each method and each train-test split, we used a separate 5-fold cross-validation procedure to tune the coefficient of error term C over the set {0.1, 1, 10} for training an independent SVM model on the training portion. (A hedged split-and-tune sketch is given after the table.)
Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory, or cluster specifications) used for running experiments were found in the paper.
Software Dependencies | No | The paper mentions software like SVMs and the Weisfeiler-Leman subtree kernel, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | We fixed the value of hyperparameters in Algorithm 2 for all our experiments. Specifically, we set the regularization coefficient λ = 1, and the gradient rates αℓ = 0.1, βℓ = 0.1, γℓ = 0.1 for each ℓ ∈ {0, 1, . . . , T}. We also let ρ0 be the stationary distribution by setting ρ0(v) for each v ∈ V as the ratio of deg(v), i.e. the degree of v, to the sum of degrees of all the vertices. We also fixed T = 25 for our algorithm. For each method and each train-test split, we used a separate 5-fold cross-validation procedure to tune the coefficient of error term C over the set {0.1, 1, 10} for training an independent SVM model on the training portion. (A hedged initialization sketch is given after the table.)
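
The Pseudocode row refers to a Euclidean projection onto the d-simplex. The paper's Algorithm 1 handles this projection under a diagonal transformation; the snippet below is only a minimal sketch of the standard, untransformed simplex projection (the well-known sorting-based method), written in Python with NumPy, and it does not reproduce the diagonal-transformation variant described in the paper.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto the probability simplex
    {x : x >= 0, sum(x) = 1}.

    Sketch of the standard sorting-based projection only; the paper's
    Algorithm 1 additionally handles a diagonal transformation, which is
    not covered here.
    """
    d = y.size
    u = np.sort(y)[::-1]                    # entries sorted in descending order
    cumsum = np.cumsum(u)                   # running sums of the sorted entries
    j = np.arange(1, d + 1)
    feasible = u + (1.0 - cumsum) / j > 0   # prefixes that keep the shift positive
    rho = j[feasible][-1]                   # largest feasible prefix length
    theta = (1.0 - cumsum[rho - 1]) / rho   # shift that makes the result sum to 1
    return np.maximum(y + theta, 0.0)
```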
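
The Dataset Splits row describes random train/test partitions for each fraction p, plus a 5-fold cross-validation over C ∈ {0.1, 1, 10} on the training portion. Below is a minimal sketch of that protocol, assuming scikit-learn and a precomputed graph-kernel matrix K; the matrix name, function name, and the use of SVC are my assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

def evaluate_split(K, y, p, seed):
    """One random partition at train fraction p: tune C by 5-fold CV on the
    training graphs, then report accuracy on the held-out graphs.
    K is an (n x n) precomputed graph-kernel matrix, y the class labels.
    (Hypothetical helper; not from the paper.)"""
    idx = np.arange(len(y))
    train_idx, test_idx = train_test_split(idx, train_size=p, random_state=seed)
    K_train = K[np.ix_(train_idx, train_idx)]   # kernel between training graphs
    K_test = K[np.ix_(test_idx, train_idx)]     # kernel between test and training graphs
    search = GridSearchCV(SVC(kernel="precomputed"), {"C": [0.1, 1, 10]}, cv=5)
    search.fit(K_train, y[train_idx])
    return search.score(K_test, y[test_idx])

# Five independent partitions per fraction p, as described in the paper:
# scores = [evaluate_split(K, y, p, s)
#           for p in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8) for s in range(5)]
```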
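
The Experiment Setup row fixes λ = 1, gradient rates of 0.1, T = 25, and initializes ρ0 with the degree-proportional stationary distribution. A minimal sketch of that initialization for an undirected graph given as a NumPy adjacency matrix follows; the constant and function names are mine.

```python
import numpy as np

# Hyperparameters reported for Algorithm 2 (fixed across all experiments).
LAMBDA = 1.0                 # regularization coefficient
ALPHA = BETA = GAMMA = 0.1   # gradient rates for every iteration l in {0, ..., T}
T = 25                       # number of iterations

def initial_distribution(A):
    """rho_0(v) = deg(v) / sum of all degrees: the stationary distribution of a
    simple random walk on the undirected graph with adjacency matrix A."""
    deg = A.sum(axis=1)
    return deg / deg.sum()
```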