Structural Entropy Based Graph Structure Learning for Node Classification
Authors: Liang Duan, Xiang Chen, Wenjie Liu, Daliang Liu, Kun Yue, Angsheng Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on a variety of datasets and the results show the superiority of our proposed method. |
| Researcher Affiliation | Academia | 1Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming, China 2School of Information Science and Engineering, Yunnan University, Kunming, China 3School of Computer Science and Engineering, Beihang University, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Encoding Tree Construction |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | We choose eight open benchmark datasets for experiments, including (1) blog graph Polblogs (Pedregosa et al. 2011), (2) website networks from WebKB, Texas and Wisconsin (Bandyopadhyay et al. 2005), (3) citation networks, Citeseer (Kipf and Welling 2017), Wiki-CS (Mernyei and Cangea 2020) and MS Academic (Klicpera, Bojchevski, and Günnemann 2019), and (4) non-graph datasets, Breast Cancer (Cancer) and Digits (Pedregosa et al. 2011). |
| Dataset Splits | Yes | We construct a kNN graph as an initial adjacency matrix for each non-graph dataset, and adopt the original splits on training, validation and test sets. |
| Hardware Specification | Yes | All experiments are conducted on a machine with Intel 13900KF CPU, 128GB RAM and RTX4090 GPU, running Windows 11. |
| Software Dependencies | No | The paper states "We implement our method in PyTorch," but does not provide specific version numbers for PyTorch or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions using "the same dimensionality of node embeddings and optimizer for all methods (except MGEDE)" and setting "other parameters to the values recommended in the original papers," but it does not provide concrete hyperparameter values or detailed training configurations within the main text. |
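The kNN-graph initialization quoted under Dataset Splits can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the choice of k, the Euclidean metric, and the symmetrization step are all assumptions.

```python
import numpy as np

def knn_adjacency(X, k):
    """Build a symmetric kNN adjacency matrix from feature vectors X.

    Hypothetical helper illustrating one common way to turn a
    non-graph dataset into an initial adjacency matrix.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-loops
    A = np.zeros((n, n))
    # Connect each node to its k nearest neighbors.
    idx = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1.0
    # Symmetrize: keep an edge if either endpoint selected the other.
    return np.maximum(A, A.T)

# Toy features: two well-separated 2-D clusters of three points each.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
A = knn_adjacency(X, k=2)
print(A.shape)   # (6, 6)
print(A[0, 3])   # 0.0 -- the distant clusters stay unconnected
```

For the paper's actual datasets (e.g. Breast Cancer or Digits) the same construction would be applied to the raw feature matrix, typically via an optimized routine such as scikit-learn's `kneighbors_graph` rather than the dense loop above.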