Hypergraph Neural Architecture Search

Authors: Wei Lin, Xu Peng, Zhengtao Yu, Taisong Jin

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results for node classification on the benchmark Cora, Citeseer, and Pubmed citation networks and on hypergraph datasets show that HyperNAS outperforms existing HGNN models and graph NAS methods.
Researcher Affiliation | Academia | Wei Lin (1,2), Xu Peng (1,2), Zhengtao Yu (3,4), Taisong Jin (1,2)*. 1: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, China. 2: School of Informatics, Xiamen University, 361005, China. 3: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China. 4: Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, 650500, China.
Pseudocode | Yes | Algorithm 1: Hypergraph neural architecture search
Open Source Code | No | No explicit statement or link to open-source code for the methodology is provided.
Open Datasets | Yes | Experimental results for node classification on the benchmark Cora, Citeseer, and Pubmed citation networks and on hypergraph datasets show that HyperNAS outperforms existing HGNN models and graph NAS methods. Citation Network dataset (Sen et al. 2008).
Dataset Splits (Train/Validation/Test) | Yes | For the Cora citation dataset, we follow (Jiang et al. 2019) as the experimental setup of the proposed method. We choose the standard split (Yang, Cohen, and Salakhudinov 2016) and randomly select different proportions (2%, 5.2%, 10%, 20%, 30%, and 44%) of data as training sets to evaluate the performance of the compared methods. For the Citeseer and Pubmed datasets, we follow the experimental setup in (Welling and Kipf 2017), where 3.6% of the Citeseer dataset is used for training and 0.3% of the Pubmed dataset. For the Coauthor and Amazon datasets, we randomly select 30 nodes from each class to build the training and validation sets, and then use the remaining nodes as the test set. (A hedged sketch of the per-class split appears below the table.)
Hardware Specification | Yes | All experiments are performed on a single NVIDIA 3090.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) are mentioned.
Experiment Setup | Yes | Dropout layers with a dropout rate of 0.5 are applied to avoid over-fitting. We use the Adam optimizer to optimize our cross-entropy loss function with a learning rate of 0.01. (A hedged sketch of this training setup appears below the table.)
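
A minimal sketch of the per-class split described in the Dataset Splits row, assuming NumPy. The row does not state how the 30 selected nodes per class are divided between training and validation, so the assumption here is 30 nodes per class for each set; the function name per_class_split and the seed argument are illustrative, not from the paper.

    import numpy as np

    def per_class_split(labels, n_per_class=30, seed=0):
        """Pick n_per_class nodes per class for training and another n_per_class
        for validation; all remaining nodes form the test set (assumed reading
        of the Coauthor/Amazon split; the paper does not spell out the rule)."""
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        train_idx, val_idx = [], []
        for c in np.unique(labels):
            nodes = rng.permutation(np.flatnonzero(labels == c))
            train_idx.extend(nodes[:n_per_class])
            val_idx.extend(nodes[n_per_class:2 * n_per_class])
        held_out = np.ones(len(labels), dtype=bool)
        held_out[train_idx] = False
        held_out[val_idx] = False
        return np.asarray(train_idx), np.asarray(val_idx), np.flatnonzero(held_out)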
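A minimal sketch of the reported training configuration in the Experiment Setup row (dropout 0.5, Adam with learning rate 0.01, cross-entropy loss), assuming PyTorch. The two-layer MLP stands in for the searched architecture, which is not released; the input, hidden, and output sizes (Cora's 1433 features, 64 hidden units, 7 classes) are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Placeholder model: the searched HyperNAS architecture is not public,
    # so a small MLP stands in. Hidden size 64 is an assumption.
    model = nn.Sequential(
        nn.Linear(1433, 64),   # Cora: 1433-dim bag-of-words node features
        nn.ReLU(),
        nn.Dropout(p=0.5),     # dropout rate 0.5, as stated in the setup
        nn.Linear(64, 7),      # Cora: 7 classes
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # Adam, lr = 0.01
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss

    def train_step(features, labels, train_mask):
        """One optimisation step computed on the training nodes only."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(features)[train_mask], labels[train_mask])
        loss.backward()
        optimizer.step()
        return loss.item()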