Deep Ensembles for Graphs with Higher-order Dependencies

Authors: Steven Krieg, William Burgis, Patrick Soga, Nitesh Chawla

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We experimentally evaluate DGE against eight state-of-the-art baselines on six real-world data sets with known higher-order dependencies, and show that, even with similar parameter budgets, DGE consistently outperforms baselines on semisupervised (node classification) and supervised (link prediction) tasks. |
| Researcher Affiliation | Academia | Steven J. Krieg, William C. Burgis, Patrick M. Soga, & Nitesh V. Chawla; Lucy Family Institute for Data and Society, University of Notre Dame, Notre Dame, IN 46556; {skrieg,wburgis,psoga,nchawla}@nd.edu |
| Pseudocode | No | The paper describes its methods textually but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and 3 data sets are available at https://github.com/sjkrieg/dge. |
| Open Datasets | Yes | Code and 3 data sets are available at https://github.com/sjkrieg/dge. ... clickstreams of users playing the Wikispeedia game (Wiki) (West et al., 2009) |
| Dataset Splits | Yes | Node classification results (mean micro F1 for 5-fold cross validation) under various parameter budgets. |
| Hardware Specification | No | The paper does not specify hardware details such as GPU or CPU models, or the memory used for experiments. |
| Software Dependencies | Yes | We used Python 3.7.3 and TensorFlow 2.4.1 for all experiments, and utilized StellarGraph 1.2.1 (Data61, 2018) for the implementation of DGE. |
| Experiment Setup | Yes | For DGE, unless noted otherwise, we fixed ℓ = 16 and used the mean-pooling variant of GraphSAGE as the base GNN... We manually tuned each model (details in Appendix C). |
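As the table notes, DGE is a deep-ensemble method built on GraphSAGE base models. A minimal, hypothetical sketch of the generic deep-ensemble prediction step (not the authors' implementation, which uses StellarGraph; the function name and toy probabilities here are illustrative assumptions) is to average each member's softmax outputs per node and take the argmax:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-member class probabilities and return a label per node."""
    mean_probs = np.mean(np.stack(prob_list, axis=0), axis=0)
    return mean_probs.argmax(axis=1)

# Softmax outputs from three hypothetical ensemble members,
# for 2 nodes and 3 classes (rows sum to 1).
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
m2 = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]])
m3 = np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]])

print(ensemble_predict([m1, m2, m3]))  # -> [0 1]
```

Each member would be trained independently (different random initializations); only the prediction-averaging step is shown here.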