Heterogeneous Graph Masked Autoencoders

Authors: Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, Nitesh V. Chawla

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that HGMAE outperforms both contrastive and generative state-of-the-art baselines on several tasks across multiple datasets. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, University of Notre Dame, USA; (2) Lucy Family Institute for Data and Society, University of Notre Dame, USA; (3) Department of Computer Science, Brandeis University, USA |
| Pseudocode | No | The paper describes its methods and strategies in natural language and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/meettyj/HGMAE. |
| Open Datasets | Yes | We employ four real datasets to evaluate the proposed model, including DBLP (Fu et al. 2020), Freebase (Li et al. 2020), ACM (Zhao et al. 2020), and AMiner (Hu, Fang, and Shi 2019). |
| Dataset Splits | Yes | Specifically, we use 20, 40, 60 labeled nodes per class as the training set and 1000 nodes each for the validation and test sets. (A split-construction sketch follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using HAN as the default encoder and decoder but does not give version numbers for any software dependencies, such as programming languages, libraries, or frameworks. |
Experiment Setup Yes We search the learning rate from 1e-4 to 5e-3, tune the patience for early stopping from 5 to 20, and test the leave unchanged and replaced rates from 0 to 0.5 with step 0.1. For dynamic mask rate, we set MINpa to 0.5, MAXpa to 0.8 and equals 0.005.