Graph Meta Learning via Local Subgraphs

Authors: Kexin Huang, Marinka Zitnik

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on seven datasets and nine baseline methods show that G-META outperforms existing methods by up to 16.3%. Unlike previous methods, G-META successfully learns in challenging, few-shot learning settings that require generalization to completely new graphs and never-before-seen labels.
Researcher Affiliation | Academia | Kexin Huang (Harvard University, kexinhuang@hsph.harvard.edu); Marinka Zitnik (Harvard University, marinka@hms.harvard.edu)
Pseudocode | Yes | The overview is in Algorithm 1 (Appendix E).
Open Source Code | Yes | Code and datasets are available at https://github.com/mims-harvard/G-Meta.
Open Datasets | Yes | Code and datasets are available at https://github.com/mims-harvard/G-Meta.
Dataset Splits | Yes | Hyperparameters are tuned via Dval. For disjoint-label setups, we sample 5 labels for meta-testing, 5 for meta-validation, and use the rest for meta-training. For multiple-graph shared-label setups, 10% of all graphs are held out for testing and another 10% for validation.
Hardware Specification | No | The paper does not describe the specific hardware (e.g., CPU or GPU models, memory, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper mentions various software components and methods (e.g., GNNs, MAML, SGC, GIN; PyTorch is implied by a footnote), but it does not specify version numbers for any of these dependencies, which would be necessary for a reproducible setup.
Experiment Setup | Yes | For disjoint-label setups, we sample 5 labels for meta-testing, 5 for meta-validation, and use the rest for meta-training. In each task, 2-way 1-shot learning with 5 gradient update steps in meta-training and 10 gradient update steps in meta-testing is used for synthetic datasets; 3-way 3-shot learning with 10 gradient update steps in meta-training and 20 gradient update steps in meta-testing is used for real-world disjoint-label settings. For multiple-graph shared-label setups, 10% of all graphs are held out for testing and another 10% for validation; the remaining graphs are used for training. For Fold-PPI, we use the average of ten 2-way protein function tasks. Hyperparameter selection and a recommended set of values are in Appendix G.
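The few-shot protocol described above (e.g., 3-way 3-shot tasks over a disjoint meta-training label set) can be sketched as an episode sampler. This is a minimal illustration, not code from the paper; the function name, the query-set size, and the data layout (a list of node-label pairs) are assumptions.

```python
import random
from collections import defaultdict

def sample_episode(node_labels, n_way=3, k_shot=3, n_query=1, rng=None):
    """Sample one N-way K-shot task from labelled nodes.

    node_labels: list of (node_id, label) pairs drawn from the
    meta-training label set. Returns a support set of n_way * k_shot
    examples and a query set of n_way * n_query examples.
    """
    rng = rng or random.Random(0)
    by_label = defaultdict(list)
    for node, label in node_labels:
        by_label[label].append(node)
    # Keep only labels with enough nodes for both support and query sets.
    eligible = [l for l, nodes in by_label.items()
                if len(nodes) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        picked = rng.sample(by_label[c], k_shot + n_query)
        support += [(n, c) for n in picked[:k_shot]]
        query += [(n, c) for n in picked[k_shot:]]
    return support, query
```

In a MAML-style loop, the support set would drive the inner-loop gradient updates (5 or 10 steps in meta-training, per the setup above) and the query set would supply the outer-loop meta-objective.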