Rethinking Independent Cross-Entropy Loss For Graph-Structured Data
Authors: Rui Miao, Kaixiong Zhou, Yili Wang, Ninghao Liu, Ying Wang, Xin Wang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our joint-cluster learning framework on 12 public datasets over 7 backbone models. The code is available at: https://github.com/MR9812/Joint-Cluster-Supervised-Learning |
| Researcher Affiliation | Academia | 1School of Artificial Intelligence, Jilin University, China. 2Institute for Medical Engineering Science, Massachusetts Institute of Technology, USA. 3School of Computing, University of Georgia, USA. 4College of Computer Science and Technology, Jilin University, China. Correspondence to: Xin Wang <xinwang@jlu.edu.cn>. |
| Pseudocode | Yes | A. Algorithm. The detailed description of our proposed joint-cluster learning framework. Algorithm 1 Joint-Cluster Learning Framework |
| Open Source Code | Yes | The code is available at: https://github.com/MR9812/Joint-Cluster-Supervised-Learning |
| Open Datasets | Yes | We use the benchmark datasets Cora, CiteSeer, PubMed (Sen et al., 2008), DBLP (Bojchevski & Günnemann, 2017), and Facebook (Rozemberczki et al., 2021a) in the class-balanced setting... ogbn-arxiv (Hu et al., 2020). (A hedged loading sketch for these benchmarks follows the table.) |
| Dataset Splits | Yes | For DBLP and Facebook, we use 20 labeled nodes per class as the training set, 30 nodes per class for validation, and the rest for testing. ... For LastFMAsia, we randomly split 25%/25%/50% of nodes for training, validation, and testing. For ogbn-arxiv, we follow the standard split proposed by (Hu et al., 2020). (A sketch of the per-class split appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | Appendix G mentions "PyTorch Geometric" and links to its GitHub examples, but it does not specify version numbers for PyTorch, PyTorch Geometric, or any other software libraries used. |
| Experiment Setup | Yes | Following the experimental settings of the original papers, for GAT, we choose the model parameters by utilizing an early stopping strategy with a patience of 100 epochs on the classification loss. For other GNN models, we utilize the model parameters which perform best on the validation set for testing. The remaining hyper-parameters, including learning rate, dropout, and weight decay, are tuned for different models. (A sketch of the early-stopping loop appears after the table.) |
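
For context on dataset availability, here is a minimal sketch of how these public benchmarks are commonly loaded with PyTorch Geometric (the library the paper references in Appendix G) and OGB. The `Planetoid`, `CitationFull`, `FacebookPagePage`, `LastFMAsia`, and `PygNodePropPredDataset` classes are standard library entry points; the paper does not show its own loading code, so the root paths below are illustrative.

```python
# Sketch: loading the cited public benchmarks via standard PyTorch Geometric / OGB
# entry points. Root directories are illustrative, not from the paper.
from torch_geometric.datasets import Planetoid, CitationFull, FacebookPagePage, LastFMAsia
from ogb.nodeproppred import PygNodePropPredDataset

cora = Planetoid(root="data/Planetoid", name="Cora")        # also "CiteSeer", "PubMed"
dblp = CitationFull(root="data/CitationFull", name="DBLP")  # Bojchevski & Günnemann (2017) graph
facebook = FacebookPagePage(root="data/Facebook")           # Rozemberczki et al. (2021a)
lastfm = LastFMAsia(root="data/LastFMAsia")
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")  # ships the standard split
```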
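The stated per-class protocol for DBLP and Facebook (20 training and 30 validation nodes per class, remainder for testing) can be reproduced with straightforward index bookkeeping. This is a hypothetical re-implementation of the described protocol, not the authors' released code; the function name `per_class_split` and the seed are illustrative.

```python
import torch

def per_class_split(y, num_train=20, num_val=30, seed=0):
    """Randomly assign `num_train` train and `num_val` val nodes per class;
    all remaining nodes form the test set (the protocol stated for DBLP/Facebook)."""
    g = torch.Generator().manual_seed(seed)
    train_mask = torch.zeros(y.size(0), dtype=torch.bool)
    val_mask = torch.zeros(y.size(0), dtype=torch.bool)
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=g)]  # shuffle within the class
        train_mask[idx[:num_train]] = True
        val_mask[idx[num_train:num_train + num_val]] = True
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask
```

With the loaders above, the labels can be passed directly, e.g. `train_mask, val_mask, test_mask = per_class_split(dblp[0].y)`.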
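The GAT setup selects model parameters via early stopping with a patience of 100 epochs on the classification loss. Below is a generic sketch of such a loop, assuming user-supplied `train_step` and `val_loss` callables (both hypothetical placeholders); it is not taken from the authors' repository.

```python
import copy

def train_with_early_stopping(model, train_step, val_loss, patience=100, max_epochs=1000):
    """Stop training after `patience` epochs without improvement in the
    validation classification loss, then restore the best checkpoint.
    `train_step` and `val_loss` are placeholder callables, not from the paper."""
    best_loss, best_state, wait = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_step(model)       # one optimization epoch
        loss = val_loss(model)  # validation classification loss
        if loss < best_loss:
            best_loss, wait = loss, 0
            best_state = copy.deepcopy(model.state_dict())  # checkpoint the best weights
        else:
            wait += 1
            if wait >= patience:  # stop after `patience` epochs without improvement
                break
    if best_state is not None:
        model.load_state_dict(best_state)  # use the best checkpoint for testing
    return model
```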