Where to Mask: Structure-Guided Masking for Graph Masked Autoencoders
Authors: Chuang Liu, Yuyao Wang, Yibing Zhan, Xueqi Ma, Dapeng Tao, Jia Wu, Wenbin Hu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, extensive experiments consistently demonstrate that our StructMAE method outperforms existing state-of-the-art GMAE models in both unsupervised and transfer learning tasks. |
| Researcher Affiliation | Collaboration | School of Computer Science, Wuhan University, Wuhan, China; JD Explore Academy, JD.com, China; School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia; School of Computer Science, Yunnan University, Kunming, China; School of Computing, Macquarie University, Sydney, Australia |
| Pseudocode | No | The paper describes the methodology in prose and mathematical formulas, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/LiuChuang0059/StructMAE. |
| Open Datasets | Yes | We employ seven real-world datasets, including MUTAG, IMDB-B, IMDB-M, PROTEINS, COLLAB, REDDIT-B, and NCI1, involving diverse domains and sizes... trained on a dataset comprising two million unlabeled molecules obtained from the ZINC15 [Sterling and Irwin, 2015] dataset... Subsequently, the model is fine-tuned on eight classification benchmark datasets featured in the MoleculeNet dataset [Wu et al., 2018]. |
| Dataset Splits | Yes | The performance is assessed by measuring the mean accuracy obtained from a 10-fold cross-validation, and this evaluation is repeated five times to ensure robustness. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or types used for running the experiments. |
| Software Dependencies | No | The paper mentions using a '5-layer GIN model' but does not provide version numbers for its software stack (e.g., Python, PyTorch, or the graph learning library implementing GIN). |
| Experiment Setup | No | The paper mentions using a LIBSVM classifier and a 5-layer GIN model with a single-layer GIN decoder, and refers to 'default settings used in prior research', but does not provide specific hyperparameters like learning rate, batch size, or number of epochs in the main text. |
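The evaluation protocol reported above (mean accuracy from 10-fold cross-validation, repeated five times for robustness) can be sketched as follows. This is an illustrative stand-in, not the authors' code: the toy labels and the majority-class "classifier" are placeholders for the real embeddings and LIBSVM classifier described in the paper.

```python
# Sketch of repeated k-fold evaluation: mean accuracy over 10 folds,
# averaged across 5 independently shuffled repetitions.
import random
from statistics import mean

def repeated_kfold_accuracy(labels, fit, k=10, repeats=5, seed=0):
    """Mean accuracy over `repeats` runs of k-fold cross-validation.

    `fit` takes a list of training labels and returns a function that
    predicts a label for one test index (stands in for a real classifier).
    """
    rng = random.Random(seed)
    n = len(labels)
    run_scores = []
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)                      # fresh shuffle per repetition
        folds = [idx[i::k] for i in range(k)] # k roughly equal test folds
        fold_accs = []
        for fold in folds:
            test_set = set(fold)
            train = [labels[i] for i in idx if i not in test_set]
            clf = fit(train)
            correct = sum(clf(i) == labels[i] for i in fold)
            fold_accs.append(correct / len(fold))
        run_scores.append(mean(fold_accs))    # mean accuracy of this run
    return mean(run_scores)                   # averaged over all repeats

# Toy usage: a majority-class "classifier" on synthetic 60/40 labels.
labels = [0] * 60 + [1] * 40
majority = lambda train: (lambda _i: max(set(train), key=train.count))
print(round(repeated_kfold_accuracy(labels, majority), 2))
```

Because the paper does not release the main-text hyperparameters, any faithful reproduction would plug its pretrained-embedding pipeline and LIBSVM classifier into a loop of this shape.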