Dynamic Hypergraph Neural Networks

Authors: Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, Yue Gao

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We have evaluated our method on standard datasets, the Cora citation network and Microblog dataset. Our method outperforms state-of-the-art methods. More experiments are conducted to demonstrate the effectiveness and robustness of our method to diverse data distributions." |
| Researcher Affiliation | Academia | "1 Beijing National Research Center for Information Science and Technology (BNRist); 2 KLISS, School of Software, Tsinghua University, Beijing, China; 3 School of Information Science and Engineering, Xiamen University, Xiamen, China" |
| Pseudocode | Yes | "Algorithm 1 Hypergraph Construction ... Algorithm 2 Hypergraph Convolution" (a hedged sketch of the convolution step is given after the table) |
| Open Source Code | No | The paper does not provide any links to open-source code or state that code will be made available. |
| Open Datasets | Yes | "We have evaluated our method on standard datasets, the Cora citation network and Microblog dataset. ... Cora dataset is a benchmark dataset of citation network. ... The Microblog dataset contains 5,550 tweets crawled from the Sina Microblog platform (https://www.weibo.com) during Feb. 2014 to Apr. 2014." |
| Dataset Splits | Yes | "Experimental setup. We have conducted experiments on different splits of the Cora dataset including standard split described in [Yang et al., 2016] ... The proportion for training set is selected as 2%, 5.2%, 10%, 20%, 30% and 44%, respectively. ... We followed the experimental setup in [Ji et al.], where 4,650, 400, 500 tweets were randomly selected as training, validation and test set, respectively." (a split sketch is given after the table) |
| Hardware Specification | Yes | "Experiments were conducted on an Nvidia GeForce GTX 1080 Ti GPU with 11 GB of memory and 10.6 TFLOPS of computing capacity." |
| Software Dependencies | No | The paper mentions using the 'Chinese auto-segmentation system ICTCLAS [Zhang et al., 2003]' and 'SentiBank [Borth et al., 2013]', but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | "We used a 2-layer dynamic hypergraph neural network with a GCN-style input layer for feature dimension reduction. We used 400 cluster centers in the k-means clustering method and chose 64 as the receptive field size. We added two dropout layers with a dropout rate of 0.5 before the two convolutional layers. ... Dimensions of each modality feature were reduced to 32 before hypergraph convolution. We constructed three hyperedge sets for the three modalities respectively and merged these sets into one multimodal hyperedge set. For each modality, we use 400 cluster centers in the k-means clustering method, and the number of vertices contained in each cluster is 8. We select the 2 nearest clusters from the k-means clusters and one k-NN cluster as the adjacent hyperedge set of each vertex. We use the same activation and dropout settings as in Section 4.1." (a construction sketch is given after the table) |
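
For the hypergraph construction step (Algorithm 1, configured as in the Experiment Setup row), a minimal sketch follows. It assumes per-modality feature matrices of shape `(num_vertices, dim)`; the function name `build_hyperedges`, the scikit-learn calls, and the rule for truncating each k-means cluster to 8 vertices are illustrative assumptions, since the authors released no code.

```python
# Sketch of k-means + k-NN hyperedge construction, per the quoted setup:
# 400 cluster centers, 8 vertices per cluster, and per vertex the 2 nearest
# clusters plus one k-NN cluster as its adjacent hyperedge set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def build_hyperedges(X, n_clusters=400, cluster_size=8, n_adjacent=2, knn_k=8):
    """Return a list of hyperedges (vertex-index arrays) for one modality."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    centers = km.cluster_centers_

    # Truncate each k-means cluster to a fixed size by keeping the points
    # closest to its center (assumption: the paper fixes 8 vertices per
    # cluster but does not spell out the truncation rule).
    clusters = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if len(members) > cluster_size:
            d = np.linalg.norm(X[members] - centers[c], axis=1)
            members = members[np.argsort(d)[:cluster_size]]
        clusters.append(members)

    # One k-NN hyperedge per vertex.
    _, nn_idx = NearestNeighbors(n_neighbors=knn_k).fit(X).kneighbors(X)

    # For each vertex, find its 2 nearest k-means clusters.
    _, nearest_c = NearestNeighbors(n_neighbors=n_adjacent).fit(centers).kneighbors(X)

    hyperedges = []
    for v in range(X.shape[0]):
        hyperedges.append(nn_idx[v])        # the vertex's k-NN hyperedge
        for c in nearest_c[v]:
            hyperedges.append(clusters[c])  # adjacent cluster hyperedges
    return hyperedges

# Merging the per-modality sets into one multimodal hyperedge set;
# X_visual, X_text, X_emotion are hypothetical stand-ins for the paper's
# three modality feature matrices.
# multimodal_edges = sum((build_hyperedges(X)
#                         for X in (X_visual, X_text, X_emotion)), [])
```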
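
Algorithm 2 pairs a vertex convolution (a learned transform matrix mixes the vertices inside each hyperedge) with an edge convolution (attention-weighted aggregation of hyperedge features at the center vertex). The PyTorch sketch below is one minimal reading of that two-stage structure; the `HypergraphConv` module, its layer shapes, and the mean pooling after the vertex stage are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the two-stage hypergraph convolution named in Algorithm 2:
# vertex convolution inside each hyperedge, then edge convolution across the
# hyperedges adjacent to one center vertex.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphConv(nn.Module):
    def __init__(self, dim_in, dim_out, edge_size):
        super().__init__()
        # Vertex conv: predict an (edge_size x edge_size) transform matrix
        # from the stacked vertex features of one hyperedge (assumption).
        self.trans = nn.Linear(edge_size * dim_in, edge_size * edge_size)
        self.vertex_fc = nn.Linear(dim_in, dim_out)
        # Edge conv: score each hyperedge, softmax, weighted sum.
        self.score = nn.Linear(dim_out, 1)

    def forward(self, edge_feats):
        # edge_feats: (n_edges, edge_size, dim_in), the features of the
        # vertices in each hyperedge adjacent to one center vertex.
        n_edges, k, d = edge_feats.shape
        T = self.trans(edge_feats.reshape(n_edges, -1))
        T = T.reshape(n_edges, k, k).softmax(dim=-1)
        # Vertex conv: mix vertices inside each hyperedge, then pool.
        mixed = torch.bmm(T, edge_feats)              # (n_edges, k, dim_in)
        edge_vec = self.vertex_fc(mixed).mean(dim=1)  # (n_edges, dim_out)
        # Edge conv: attention-weighted aggregation over hyperedges.
        w = F.softmax(self.score(edge_vec), dim=0)    # (n_edges, 1)
        return (w * edge_vec).sum(dim=0)              # (dim_out,)

# Usage sketch: 3 adjacent hyperedges of 8 vertices each, 32-d features
# (the 8 and 32 come from the Experiment Setup row).
conv = HypergraphConv(dim_in=32, dim_out=32, edge_size=8)
center_feature = conv(torch.randn(3, 8, 32))  # -> shape (32,)
```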
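
The split protocol in the Dataset Splits row reduces to random index partitions. In the sketch below, the 4,650/400/500 Microblog split and the Cora training proportions come from the quoted text, while `random_split`, the seed, and the fixed Cora validation/test sizes are assumptions.

```python
# Sketch of the random split protocol quoted in the Dataset Splits row.
import numpy as np

def random_split(n, n_train, n_val, n_test, seed=0):
    """Randomly partition range(n) into train/val/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:n_train + n_val + n_test])

# Microblog: 4,650 / 400 / 500 of the 5,550 tweets.
train_idx, val_idx, test_idx = random_split(5550, 4650, 400, 500)

# Cora: sweep the training proportion over the values listed in the table.
# Cora has 2,708 nodes; the 500/1000 val/test sizes here are assumptions.
for p in (0.02, 0.052, 0.10, 0.20, 0.30, 0.44):
    tr, va, te = random_split(2708, int(round(p * 2708)), 500, 1000)
```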