Fisher Information Embedding for Node and Graph Learning
Authors: Dexiong Chen, Paolo Pellizzoni, Karsten Borgwardt
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on several node classification benchmarks, we demonstrate that our proposed method outperforms existing attention-based graph models like GATs. |
| Researcher Affiliation | Academia | 1Department of Biosystems Science and Engineering, ETH Zürich, Switzerland 2SIB Swiss Institute of Bioinformatics, Switzerland. |
| Pseudocode | No | The paper describes the EM algorithm steps and mathematical formulations but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is available at https://github.com/BorgwardtLab/fisher_information_embedding. |
| Open Datasets | Yes | We assess the performance of our method with six widely used benchmark datasets for node classification, including Cora, Citeseer, Pubmed (Sen et al., 2008) as semi-supervised transductive learning datasets and Reddit (Hamilton et al., 2017), ogbn-arxiv (Hu et al., 2020), ogbn-products (Hu et al., 2020) as medium- or large-scale supervised learning datasets. |
| Dataset Splits | Yes | All results are computed from 10 runs using different random seeds with the optimal hyperparameters selected on the validation set. |
| Hardware Specification | Yes | All experiments were performed on a shared GPU and CPU cluster equipped with GTX1080 and TITAN RTX. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer', the 'LightGBM classifier', and 'FLAML', but does not specify their version numbers. |
| Experiment Setup | Yes | Full details on the datasets, experimental setup and implementation details can be found in the Appendix. The hyperparameters for training FIE models on different datasets are summarized in Table 3 and Table 4, respectively for unsupervised and supervised modes of FIE. For supervised learning tasks, a dropout with rate equal to 0.5 is used for training supervised embeddings of FIE. |
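The reporting protocol quoted above (results averaged over 10 runs with different random seeds, using hyperparameters selected on the validation set) can be sketched as follows. This is a minimal illustration, not the authors' code; `run_fn` is a hypothetical callable standing in for one full train/evaluate cycle that returns a test score for a given seed.

```python
import statistics

def evaluate_over_seeds(run_fn, n_runs=10):
    """Aggregate a test metric over repeated runs with different random
    seeds, mirroring the paper's protocol of reporting results from 10
    runs with validation-selected hyperparameters.

    run_fn: hypothetical callable mapping a seed to a test score.
    Returns the mean and sample standard deviation across runs.
    """
    scores = [run_fn(seed) for seed in range(n_runs)]
    return statistics.mean(scores), statistics.stdev(scores)
```

Reporting mean and standard deviation over seeds in this way separates genuine model differences from run-to-run training noise.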