HGDL: Heterogeneous Graph Label Distribution Learning
Authors: Yufei Jin, Heng Lian, Yi He, Xingquan Zhu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both theoretical and empirical studies substantiate the effectiveness of our HGDL approach. Our code and datasets are available at https://github.com/Listener-Watcher/HGDL. |
| Researcher Affiliation | Academia | Dept. of Elec. Eng. & Computer Sci., Florida Atlantic University, Boca Raton, FL 33431, USA; Dept. of Data Science, William & Mary, Williamsburg, VA 23185, USA |
| Pseudocode | Yes | Algorithm 1 HGDL Algorithm |
| Open Source Code | Yes | Our code and datasets are available at https://github.com/Listener-Watcher/HGDL. |
| Open Datasets | Yes | We prepare five datasets with ground-truth node label distributions using existing heterogeneous graphs, including DBLP [33], ACM [33], YELP, DRUG [34], and URBAN [1]. |
| Dataset Splits | Yes | The train:validation:test splits for the DRUG, ACM, DBLP, YELP, and URBAN datasets are 8:1:1, 7:1:2, 4:1:5, 8:1:1, and 5:3:2, respectively. (A sketch of how such ratios translate into index splits follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It only vaguely refers to "single-GPU machines" in the checklist. |
| Software Dependencies | No | The paper mentions the "RDKit library [36]" and "MeSH-tree parsing tool [37]" but does not specify their version numbers. No other software dependencies are mentioned with specific version numbers. |
| Experiment Setup | Yes | Hyperparameters of all the baselines include the hidden dimension for embedding learning, the hidden dimension for semantic fusion learning (if a semantic fusion component exists), and the learning rate. In practice, a learning rate of 0.005 is fixed for all baselines unless a specific method cannot converge at that rate or converges much faster with a larger one. An early-stopping patience signal is set and a maximum number of epochs is fixed. For the HGDLED method, the feature topology originally learned by HGDL is replaced with a 0-1 matrix sampled from a Bernoulli distribution with a specified drop rate. The drop rate for the HGDLED baseline in Table 2 of the main manuscript is 0.1; Fig. 5 reports results for different drop rates. HGDL has two additional hyperparameters: the negative slope of the LeakyReLU activation and γ for the regularization loss. (A sketch of the Bernoulli mask sampling follows the table.) |
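The Dataset Splits row above reports per-dataset train:validation:test ratios. A minimal sketch of turning those ratios into node-index splits, assuming random shuffling with a fixed seed (the helper name, the seeding scheme, and the use of PyTorch are assumptions, not taken from the paper or its released code):

```python
import torch

# train : validation : test ratios as reported in the table above
SPLITS = {
    "DRUG": (8, 1, 1),
    "ACM": (7, 1, 2),
    "DBLP": (4, 1, 5),
    "YELP": (8, 1, 1),
    "URBAN": (5, 3, 2),
}

def split_indices(num_nodes: int, dataset: str, seed: int = 0):
    """Shuffle node indices and cut them by the reported ratios."""
    tr, va, te = SPLITS[dataset]
    total = tr + va + te
    gen = torch.Generator().manual_seed(seed)  # assumed seeding, not from the paper
    perm = torch.randperm(num_nodes, generator=gen)
    n_tr = num_nodes * tr // total
    n_va = num_nodes * va // total
    return perm[:n_tr], perm[n_tr:n_tr + n_va], perm[n_tr + n_va:]
```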
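The Experiment Setup row describes the HGDLED ablation, which swaps HGDL's learned feature topology for a random 0-1 matrix. A minimal sketch of that sampling step, assuming a dense PyTorch matrix and the reported drop rate of 0.1 (the function name and tensor shape are illustrative, not from the released code):

```python
import torch

def bernoulli_topology(num_nodes: int, drop_rate: float = 0.1) -> torch.Tensor:
    """Sample a random 0-1 feature topology.

    Each entry is kept (1) with probability 1 - drop_rate and dropped (0)
    with probability drop_rate, mirroring the HGDLED ablation described
    in the Experiment Setup row.
    """
    keep_prob = torch.full((num_nodes, num_nodes), 1.0 - drop_rate)
    return torch.bernoulli(keep_prob)

# Example: the Table 2 setting uses drop_rate = 0.1.
mask = bernoulli_topology(100, drop_rate=0.1)
```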