Hypergraph-enhanced Dual Semi-supervised Graph Classification
Authors: Wei Ju, Zhengyang Mao, Siyu Yi, Yifang Qin, Yiyang Gu, Zhiping Xiao, Yifan Wang, Xiao Luo, Ming Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world graph datasets verify the effectiveness of the proposed method against existing state-of-the-art methods. |
| Researcher Affiliation | Academia | (1) School of Computer Science, National Key Laboratory for Multimedia Information Processing, Peking University-Anker Embodied AI Lab, Peking University, China; (2) School of Statistics and Data Science, Nankai University, China; (3) Department of Computer Science, University of California, Los Angeles, USA; (4) School of Information Technology & Management, University of International Business and Economics, China. |
| Pseudocode | Yes | Algorithm 1 Optimization Framework of the HEAL |
| Open Source Code | No | There is no explicit statement about releasing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We assess our HEAL on six publicly available datasets, comprising two bioinformatics datasets PROTEINS (Neumann et al., 2016) and DD (Dobson & Doig, 2003); three datasets derived from social networks, specifically IMDB-B, IMDB-M, and REDDIT-M-5k (Yanardag & Vishwanathan, 2015); and one dataset from scientific collaborations, COLLAB (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | Yes | We employ the same data split with Dual Graph (Luo et al., 2022), where the labeled training set, unlabeled training set, validation set, and test set are proportioned in a 2:5:1:2 ratio. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) are mentioned for running the experiments. |
| Software Dependencies | No | For the implementation of HEAL, we employ the GIN (Xu et al., 2019) to configure the GNN-based encoder. No specific version numbers for software dependencies are provided. |
| Experiment Setup | Yes | For the implementation of HEAL, we empirically set the embedding dimension to 32, the batch size to 64, and the training epochs to 300. For our hypergraph structure learning module, we empirically set the number of hyperedge k to 32. Moreover, we set the weight balance hyper-parameter β for Lcon to 0.01. The model HEAL is optimized using the Adam optimizer with an initial learning rate of 0.01, and the weight decay is set to 0.0005. |
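The 2:5:1:2 split reported above (labeled train : unlabeled train : validation : test, following Dual Graph) can be reproduced with simple proportional arithmetic. The sketch below is illustrative only — the dataset size, shuffling, and function name are assumptions, not taken from the paper:

```python
import random

def split_indices(n, ratio=(2, 5, 1, 2), seed=0):
    """Partition n sample indices into labeled-train / unlabeled-train /
    validation / test sets according to an integer ratio (here 2:5:1:2)."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    total = sum(ratio)
    # Cumulative cut points; the last split absorbs any rounding remainder.
    cuts, acc = [], 0
    for r in ratio[:-1]:
        acc += r
        cuts.append(n * acc // total)
    parts, prev = [], 0
    for c in cuts + [n]:
        parts.append(idx[prev:c])
        prev = c
    return parts  # [labeled, unlabeled, val, test]

labeled, unlabeled, val, test = split_indices(1000)
print(len(labeled), len(unlabeled), len(val), len(test))  # 200 500 100 200
```

With 1000 graphs, the 2:5:1:2 ratio yields 200 labeled, 500 unlabeled, 100 validation, and 200 test samples.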
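The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration for reference. The dict structure and key names below are illustrative assumptions; only the values come from the report:

```python
# Hyperparameters reported for HEAL (key names are illustrative assumptions).
HEAL_CONFIG = {
    "encoder": "GIN",        # GNN-based encoder (Xu et al., 2019)
    "embedding_dim": 32,
    "batch_size": 64,
    "epochs": 300,
    "num_hyperedges": 32,    # k in the hypergraph structure learning module
    "beta": 0.01,            # weight balance hyper-parameter for L_con
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "weight_decay": 0.0005,
}
```

In a PyTorch implementation, the optimizer line would presumably resemble `torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0005)`, though the paper does not specify the framework.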