GraphOpt: Learning Optimization Models of Graph Formation
Authors: Rakshit Trivedi, Jiachen Yang, Hongyuan Zha
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on graphs with varying properties to gauge the efficacy of GraphOpt on the following measures: ... We comprehensively answer all three questions in the positive via experiments demonstrating effective transfer in the domain of citation graphs; competitive link prediction performance against dedicated baselines demonstrating compelling generalization properties; and consistently superior performance on graph construction experiments against strong baselines that learn from a single input graph. |
| Researcher Affiliation | Academia | Georgia Institute of Technology, USA; Institute for Data and Decision Analytics, The Chinese University of Hong Kong, Shenzhen. |
| Pseudocode | Yes | Algorithm 1: GraphOpt Algorithm |
| Open Source Code | No | The paper does not include an explicit statement about releasing its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct link prediction experiments on a variety of graphs from both non-relational and relational domains. Table 7 and 8 in Appendix C.1 provide dataset details. (e.g., Cora-ML, Citeseer, FB15K-237, WN18RR which are widely recognized public datasets). |
| Dataset Splits | No | For all experiments, we follow the protocol in (Zhang & Chen, 2018) by randomly removing 10% of edges to form a held-out test set and randomly sampling the same number of nonexistent links to form negative test samples. Training is then performed on the remaining graph. The paper specifies the test split but does not explicitly detail a separate validation split or its size/proportion. (A sketch of this split protocol appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9' or 'Python 3.8'). |
| Experiment Setup | Yes | Training. All experiments begin by using the observed graph to learn the construction policy and latent reward function via Algorithm 1. A key advantage of using SAC as the base RL algorithm is that it largely eliminates the need for per-task hyperparameter tuning. To encourage creation of new edges during training, we terminate an episode when the number of repeated creations of each existing edge reaches a threshold k, which signifies that the policy has lost the ability to explore further. We provide more details on other training configurations in Appendix C.4. (Appendix C.4 specifies k=3 for non-relational datasets and k=5 for relational ones, a batch size of 512, and a learning rate of 1e-4. A hedged configuration sketch also follows the table.) |
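
The "Dataset Splits" row above quotes the paper's link-prediction protocol: remove 10% of edges as held-out positive test links, sample an equal number of non-edges as negatives, and train on the remaining graph. The sketch below is a minimal illustration of that protocol, assuming a NetworkX graph; the function name, seeding, and sampling loop are our own choices, not the paper's or Zhang & Chen's implementation.

```python
import random
import networkx as nx


def split_for_link_prediction(graph: nx.Graph, test_frac: float = 0.10, seed: int = 0):
    """Hold out `test_frac` of edges as positive test links, sample an equal
    number of non-edges as negative test links, and return the training graph."""
    rng = random.Random(seed)

    edges = list(graph.edges())
    rng.shuffle(edges)
    n_test = int(test_frac * len(edges))
    test_pos = edges[:n_test]                   # held-out positive test links

    train_graph = graph.copy()
    train_graph.remove_edges_from(test_pos)     # training uses the remaining graph

    # Sample the same number of node pairs that are not edges in the full graph.
    nodes = list(graph.nodes())
    test_neg = set()
    while len(test_neg) < n_test:
        u, v = rng.sample(nodes, 2)
        if not graph.has_edge(u, v) and (v, u) not in test_neg:
            test_neg.add((u, v))

    return train_graph, test_pos, sorted(test_neg)
```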
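The "Experiment Setup" row reports the training details the paper does state: SAC as the base RL algorithm, episode termination once repeated creations of existing edges reach a threshold k, a batch size of 512, and a learning rate of 1e-4, with k=3 for non-relational and k=5 for relational datasets. The sketch below only collects those quoted values; the config structure and the termination helper are illustrative assumptions, and the termination check reflects one reading of "the number of repeated creations of each existing edge reaches a threshold k".

```python
from dataclasses import dataclass


@dataclass
class GraphOptTrainingConfig:
    # Values quoted from Appendix C.4 of the paper; field names are ours.
    batch_size: int = 512
    learning_rate: float = 1e-4
    repeat_threshold_k: int = 3  # 3 for non-relational datasets, 5 for relational ones


def should_terminate_episode(edge_repeat_counts: dict, k: int) -> bool:
    """Assumed reading of the termination rule: stop the episode once every
    existing edge has been re-created at least k times, i.e. the policy has
    effectively lost the ability to explore new edges."""
    return bool(edge_repeat_counts) and all(count >= k for count in edge_repeat_counts.values())
```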