Hypergraph Joint Representation Learning for Hypervertices and Hyperedges via Cross Expansion

Authors: Yuguang Yan, Yuanlin Chen, Shibo Wang, Hanrui Wu, Ruichu Cai

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on both hypervertex classification and hyperedge classification tasks to demonstrate the effectiveness of our proposed method.
Researcher Affiliation | Academia | (1) School of Computer Science, Guangdong University of Technology, Guangzhou, China; (2) College of Information Science and Technology, Jinan University, Guangzhou, China; (3) Guangdong Provincial Key Laboratory of Public Finance and Taxation with Big Data Application, Guangzhou, China
Pseudocode | No | The paper describes the proposed method using equations and natural language, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We follow (Dong, Sawin, and Bengio 2020) to consider four benchmark datasets, including the co-citation datasets Citeseer (Bhattacharya and Getoor 2007) and Pubmed (Namata et al. 2012), and the co-authorship datasets Cora (Sen et al. 2008) and DBLP (Rossi and Ahmed 2015).
Dataset Splits | No | The paper states 'For fair comparisons, we strictly follow the experimental setting in (Wu, Yan, and Ng 2022)' and mentions early stopping, which implies a validation set. However, it does not explicitly specify how the training, validation, and test splits were generated (e.g., percentages or exact counts).
Hardware Specification | Yes | This experiment was conducted on a Linux server with an NVIDIA RTX 4090 (24GB) graphics card.
Software Dependencies | No | The paper mentions 'All the experiments are conducted on the PyTorch platform' but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup | Yes | We tune the hyperparameters λ_v and λ_e in the range {0.001, 0.01, 0.1, 1, 10} on one trial, and apply the selected values for the remaining trials. For our proposed method, we employ a two-layer graph convolutional network, and the hidden-layer dimension is set to 512. We train the model for 200 epochs with the Adam optimizer and apply early stopping with a window of 100. The learning rate and weight decay factor are selected from {0.0001, 0.001, 0.01, 0.1}. Leaky ReLU with a negative slope of 0.2 is adopted as the activation function.
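
Although no official code is released, the reported setup pins down most of the training loop. Below is a minimal PyTorch sketch of that configuration, not the authors' implementation: the dense normalized adjacency a_hat (derived from whatever expansion of the hypergraph is used), the GCNLayer module, and the mask tensors are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: a_hat @ (X @ W + b), with a_hat a
    pre-normalized dense adjacency built from the hypergraph expansion
    (an assumption; the paper does not release this detail as code)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return a_hat @ self.linear(x)


class TwoLayerGCN(nn.Module):
    """Two-layer GCN with hidden dimension 512 and LeakyReLU(0.2),
    matching the architecture reported in the paper."""

    def __init__(self, in_dim: int, num_classes: int, hidden_dim: int = 512):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, num_classes)
        self.act = nn.LeakyReLU(negative_slope=0.2)  # slope 0.2, as reported

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return self.gc2(a_hat, self.act(self.gc1(a_hat, x)))


def train(model, a_hat, x, y, train_mask, val_mask,
          lr=0.01, weight_decay=0.001, epochs=200, patience=100):
    """Adam training for 200 epochs with early stopping on validation
    loss (window of 100). lr and weight_decay would be grid-searched
    over {0.0001, 0.001, 0.01, 0.1} per the reported setup."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    best_val, best_state, wait = float("inf"), None, 0
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(a_hat, x)[train_mask], y[train_mask])
        loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model(a_hat, x)[val_mask], y[val_mask]).item()
        if val_loss < best_val:
            best_val = val_loss
            best_state = copy.deepcopy(model.state_dict())  # keep best weights
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # early-stopping window of 100
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```

A hyperparameter search in this setting would simply loop the call to train over the learning-rate and weight-decay grid and keep the configuration with the best validation loss.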