Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
HGOT: Self-supervised Heterogeneous Graph Neural Network with Optimal Transport
Authors: Yanbei Liu, Chongxu Wang, Zhitao Xiao, Lei Geng, Yanwei Pang, Xiao Wang
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four real-world datasets demonstrate that our proposed HGOT model can achieve state-of-the-art performance on various downstream tasks. |
| Researcher Affiliation | Academia | 1School of Life Sciences, Tiangong University, Tianjin, China 2School of Electrical and Information Engineering, Tianjin University, Tianjin, China 3School of Software, Beihang University, Beijing, China. Correspondence to: Zhitao Xiao <EMAIL>, Xiao Wang <EMAIL>. |
| Pseudocode | No | The paper describes the method using mathematical formulations and descriptive text, but no explicit pseudocode or algorithm block is present. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | Four public datasets (DBLP, ACM, IMDB and Yelp) are used to demonstrate the effectiveness of the proposed method. Detailed information on the datasets is provided in the attached page. DBLP: https://dblp.uni-trier.de ACM: http://dl.acm.org/ IMDB: http://komarix.org/ac/ds/ Yelp: https://www.yelp.com/dataset |
| Dataset Splits | No | The paper uses public datasets but does not explicitly provide information on how these datasets were split into training, validation, or test sets (e.g., percentages or specific file names). |
| Hardware Specification | Yes | CPU: 12th Gen Intel(R) Core(TM) i9-12900H (20 CPUs), 2.5 GHz; GPU: NVIDIA GeForce RTX 3070 Ti Laptop GPU |
| Software Dependencies | No | The paper only mentions the operating system (Microsoft Windows 11 64-bit) but does not provide specific version numbers for software libraries or frameworks used for implementation. |
| Experiment Setup | Yes | We search the learning rate from 1e-4 to 5e-3 and tune the early-stopping patience from 5 to 20. The Adam optimizer (Kingma & Ba, 2015) is adopted for gradient descent. For all methods, we set the embedding dimension to 64 and report the mean and standard deviation of 10 runs with different random seeds. |
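The experiment-setup row above can be sketched as a small search loop: sweep the learning rate within the stated [1e-4, 5e-3] range and the early-stopping patience within [5, 20], then average 10 seeded runs. This is a minimal, hedged illustration of that protocol, not the paper's code; the specific grids and the `train_and_eval` stand-in (a placeholder for actually training HGOT and scoring a downstream task) are assumptions.

```python
import random
import statistics

LEARNING_RATES = [1e-4, 5e-4, 1e-3, 5e-3]  # assumed grid within the stated range
PATIENCES = [5, 10, 20]                    # assumed grid within the stated range
EMBED_DIM = 64                             # fixed for all methods, per the paper
NUM_SEEDS = 10                             # mean/std reported over 10 seeds

def train_and_eval(lr, patience, seed):
    """Hypothetical stand-in for one training run with early stopping.

    Returns the best validation score seen before patience is exhausted.
    """
    rng = random.Random(seed)
    best, wait = 0.0, 0
    for epoch in range(200):
        score = rng.random()  # placeholder for one epoch of training + validation
        if score > best:
            best, wait = score, 0
        else:
            wait += 1
        if wait >= patience:  # stop after `patience` epochs without improvement
            break
    return best

def search():
    """Grid search: mean and std over NUM_SEEDS runs per configuration."""
    results = {}
    for lr in LEARNING_RATES:
        for patience in PATIENCES:
            scores = [train_and_eval(lr, patience, s) for s in range(NUM_SEEDS)]
            results[(lr, patience)] = (statistics.mean(scores),
                                       statistics.stdev(scores))
    return results

results = search()
best_cfg = max(results, key=lambda cfg: results[cfg][0])
```

The selected configuration would then be the one with the highest mean validation score across seeds, with the standard deviation reported alongside it.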