End to end learning and optimization on graphs
Authors: Bryan Wilder, Eric Ewing, Bistra Dilkina, Milind Tambe
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our CLUSTERNET system outperforms both pure end-to-end approaches (that directly predict the optimal solution) and standard approaches that entirely separate learning and optimization. We now show experiments on domains that combine link prediction with optimization. |
| Researcher Affiliation | Academia | Bryan Wilder (Harvard University, bwilder@g.harvard.edu); Eric Ewing (University of Southern California, ericewin@usc.edu); Bistra Dilkina (University of Southern California, dilkina@usc.edu); Milind Tambe (Harvard University, milind_tambe@harvard.edu) |
| Pseudocode | No | The paper describes algorithmic steps and mathematical formulations in its text (e.g., in sections 4.1 and 4.2) and presents a system diagram in Figure 1, but it does not include an explicitly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code for our system is available at https://github.com/bwilder0/clusternet. |
| Open Datasets | Yes | We use several standard graph datasets: cora [40] (a citation network with 2,708 nodes), citeseer [40] (a citation network with 3,327 nodes), protein [14] (a protein interaction network with 3,133 nodes), adol [12] (an adolescent social network with 2,539 vertices), and fb [13, 32] (an online social network with 2,888 nodes). |
| Dataset Splits | Yes | First, a synthetic generator introduced by [48]... We use 20 training graphs, 10 validation, and 30 test. Second, a dataset obtained by splitting the pubmed graph into 20 components using metis [27]. We fix 10 training graphs, 2 validation, and 8 test. |
| Hardware Specification | No | The paper states that runtimes are provided in the appendix and that experiments were run 'on identical hardware', but it does not specify any particular hardware components such as GPU or CPU models in the main text. |
| Software Dependencies | No | The paper mentions using GCNs and node2vec features, but it does not provide specific version numbers for any software libraries, frameworks, or dependencies used in the experiments. |
| Experiment Setup | Yes | We used a graph dataset which is not included in our results to set our method's hyperparameters, which were kept constant across datasets (see appendix for details). We instantiate CLUSTERNET using a 2-layer GCN for node embeddings, followed by a clustering layer. We use K = 5 clusters; K = 10 is very similar and may be found in the appendix. |
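The setup above (a 2-layer GCN producing node embeddings, followed by a differentiable clustering layer) can be sketched in numpy. This is a minimal illustration, not the authors' implementation: the weight matrices `W1`/`W2` are random stand-ins for trained parameters, and the soft k-means routine (softmax responsibilities over negative squared distances, inverse-temperature `beta`) is one common way to make the clustering step differentiable; consult the released code at https://github.com/bwilder0/clusternet for the actual method.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard for GCNs.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_embed(A, X, W1, W2):
    # 2-layer GCN: ReLU after the first layer, linear output embeddings.
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)
    return A_norm @ H @ W2

def soft_kmeans(emb, K, n_iter=20, beta=5.0, seed=0):
    # Differentiable clustering layer (sketch): soft assignments r are a
    # softmax over negative squared distances to K centers; centers are
    # then updated as responsibility-weighted means of the embeddings.
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(len(emb), K, replace=False)]
    for _ in range(n_iter):
        dist2 = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        logits = -beta * dist2
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logits)
        r /= r.sum(axis=1, keepdims=True)            # rows sum to 1
        centers = (r.T @ emb) / r.sum(axis=0)[:, None]
    return r, centers

# Toy example: two disjoint triangles, identity features, K = 2 clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
X = np.eye(6)
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((6, 8)), rng.standard_normal((8, 4))
r, centers = soft_kmeans(gcn_embed(A, X, W1, W2), K=2)
print(r.shape)  # (6, 2): one soft assignment distribution per node
```

Because the assignments are soft (rather than a hard argmax), gradients of a downstream optimization objective can flow back through the clustering step into the GCN weights, which is the core idea behind training the pipeline end to end.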