Long-range Meta-path Search on Large-scale Heterogeneous Graphs
Authors: Chao Li, Zijie Guo, Qiuting He, Kun He
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across diverse heterogeneous datasets validate LMSPS's capability in discovering effective long-range meta-paths, surpassing state-of-the-art methods. |
| Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, Huazhong University of Science and Technology; (2) China Mobile Information Technology Co., Ltd.; (3) School of Computer Science, Fudan University |
| Pseudocode | Yes | Algorithm 1 The search algorithm of LMSPS (a generic search-loop sketch appears after the table) |
| Open Source Code | Yes | Our code is available at https://github.com/JHL-HUST/LMSPS. |
| Open Datasets | Yes | We evaluate LMSPS on several representative heterogeneous graph datasets, including DBLP, IMDB, ACM, and Freebase from HGB benchmark [34], and the large-scale dataset OGBN-MAG from OGB challenge [20]. (See the loading sketch after the table.) |
| Dataset Splits | Yes | target type nodes are divided into 24% for training, 6% for validation, and 70% for testing. ... For the OGBN-MAG dataset, we use the official data partition, where papers published before 2018, in 2018, and since 2019 are nodes for training, validation, and testing, respectively. |
| Hardware Specification | Yes | We use Pytorch [38] to run all experiments on one Tesla V100 GPU with 16GB GPU memory. |
| Software Dependencies | No | The paper mentions "Pytorch [38]" but does not specify a version number for PyTorch or any other software dependencies, such as specific Python libraries or CUDA versions. |
| Experiment Setup | Yes | We set the number of selected meta-paths M = 30 for all datasets. The final search space V = 60. The maximum hop is 6 for ogbn-mag, DBLP, 5 for IMDB, ACM, and 3 for Freebase. ... For searching in the super-net, we train for 200 epochs. ... τ linearly decays with the number of epochs from 8 to 4. The learning rate is 0.001 for all search stages and HGB training stage, and 0.003 for OGBN-MAG training stage. The weight decay is always 0. (The quoted values are gathered into a config sketch and a search-loop illustration after the table.) |
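
The OGBN-MAG dataset and its official split quoted above can be reproduced with the public `ogb` package. The snippet below is a minimal sketch, assuming `ogb` and PyTorch Geometric are installed; it is not taken from the LMSPS repository.

```python
# Minimal sketch: load OGBN-MAG and its official time-based split with the
# public ogb package (assumes ogb and PyTorch Geometric are installed).
from ogb.nodeproppred import PygNodePropPredDataset

dataset = PygNodePropPredDataset(name="ogbn-mag", root="dataset/")
data = dataset[0]  # heterogeneous graph: paper/author/field/institution nodes

# Official partition: papers before 2018 train, in 2018 validation, 2019+ test.
split_idx = dataset.get_idx_split()
train_idx = split_idx["train"]["paper"]
valid_idx = split_idx["valid"]["paper"]
test_idx = split_idx["test"]["paper"]
print(len(train_idx), len(valid_idx), len(test_idx))
```

The HGB datasets (DBLP, IMDB, ACM, Freebase) ship with the benchmark's fixed 24%/6%/70% split of target-type nodes, so no manual partitioning is needed there.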
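For convenience, the hyperparameters quoted in the Experiment Setup row can be collected in one place. The constant names below are hypothetical, chosen only to mirror the paper's symbols (M, V, τ):

```python
# Hyperparameters quoted from the paper; all names here are hypothetical.
NUM_SELECTED_META_PATHS = 30                     # M, same for all datasets
FINAL_SEARCH_SPACE = 60                          # V
MAX_HOP = {"OGBN-MAG": 6, "DBLP": 6,
           "IMDB": 5, "ACM": 5, "Freebase": 3}   # maximum meta-path hop
SEARCH_EPOCHS = 200                              # super-net search epochs
TAU_START, TAU_END = 8.0, 4.0                    # tau decays linearly over epochs
LEARNING_RATE = {"search": 1e-3, "hgb_train": 1e-3,
                 "ogbn_mag_train": 3e-3}
WEIGHT_DECAY = 0.0
```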
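The temperature schedule above suggests a Gumbel-softmax-style relaxation during the super-net search. The loop below is a generic sketch of differentiable meta-path selection under that assumption; it is not the authors' Algorithm 1, and the placeholder loss and all variable names are invented for illustration:

```python
import torch
import torch.nn.functional as F

V, EPOCHS, TAU_START, TAU_END = 60, 200, 8.0, 4.0  # values quoted above

path_logits = torch.zeros(V, requires_grad=True)  # one logit per candidate meta-path
optimizer = torch.optim.Adam([path_logits], lr=1e-3, weight_decay=0.0)

for epoch in range(EPOCHS):
    # Linear temperature decay from 8 to 4 across the search epochs.
    tau = TAU_START - (TAU_START - TAU_END) * epoch / (EPOCHS - 1)
    # Soft, differentiable sample over the candidate meta-paths.
    weights = F.gumbel_softmax(path_logits, tau=tau, hard=False)
    # A real implementation would weight meta-path features by `weights`
    # and compute the task loss; this placeholder keeps the sketch runnable.
    loss = (weights ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Keep the M = 30 strongest meta-paths after the search.
selected = torch.topk(path_logits.detach(), k=30).indices
```

Under the straight-through variant (`hard=True`), the forward pass would use a discrete sample while gradients still flow through the soft weights.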