Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
EGonc: Energy-based Open-Set Node Classification with substitute Unknowns
Authors: Qin Zhang, Zelin Shi, Shirui Pan, Junyang Chen, Huisi Wu, Xiaojun Chen
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experimental evaluations of EGonc also demonstrate its superiority. |
| Researcher Affiliation | Academia | Qin Zhang¹, Zelin Shi¹, Shirui Pan², Junyang Chen¹, Huisi Wu¹, Xiaojun Chen¹ (¹Shenzhen University, ²Griffith University) |
| Pseudocode | Yes | The algorithm of EGonc is illustrated in Algorithm 1. |
| Open Source Code | Yes | Code is available at https://github.com/hiromisyo/EGonc. |
| Open Datasets | Yes | Experiments to evaluate the performance for open-set node classification were mainly performed on five benchmark graph datasets [54, 72], namely Cora, Citeseer, DBLP, PubMed, and Ogbn_arxiv [24, 45], which are widely used citation network datasets. |
| Dataset Splits | Yes | 70% of the known class nodes were sampled for training, 10% for validation and 20% for testing. |
| Hardware Specification | Yes | All the experiments were conducted on a workstation equipped with an Intel(R) Xeon(R) Gold 6226R CPU and an Nvidia A100. |
| Software Dependencies | No | EGonc is implemented with PyTorch and the networks are optimized using stochastic gradient descent with a learning rate of 1e−3. Specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes | The GCN is configured with two hidden GCN layers in the dimension of 512 and 128, followed by an additional multilayer perceptron layer of size 64. EGonc is implemented with PyTorch and the networks are optimized using stochastic gradient descent with a learning rate of 1e−3. The balance parameters λ1, λ2 and λ3 are chosen by a grid search in the interval from 10⁻² to 10² with a multiplicative step of 10¹. |
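The paper's own splitting code is not included in this report; a minimal sketch of the 70% / 10% / 20% node split described in the Dataset Splits row (the function name, seed, and node count are illustrative, not from the paper) could look like:

```python
import random

def split_known_nodes(num_nodes, seed=0):
    """Shuffle known-class node indices and split them
    into 70% train, 10% validation, 20% test."""
    idx = list(range(num_nodes))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(0.7 * num_nodes)
    n_val = int(0.1 * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_known_nodes(1000)
```

With 1000 known-class nodes this yields 700 training, 100 validation, and 200 test indices with no overlap.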
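The grid search reported in the Experiment Setup row (λ1, λ2, λ3 each swept from 10⁻² to 10² with a multiplicative step of 10¹) enumerates 5³ = 125 candidate combinations. A sketch of that enumeration, with variable names assumed:

```python
import itertools

# Five grid points per balance parameter: 0.01, 0.1, 1.0, 10.0, 100.0
lambda_grid = [10.0 ** k for k in range(-2, 3)]

# All (λ1, λ2, λ3) combinations to evaluate: 5 ** 3 = 125
candidates = list(itertools.product(lambda_grid, repeat=3))
```

Each candidate triple would then be used to weight the loss terms, with the best combination selected on the validation split.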