Hyper-SAGNN: a self-attention based graph neural network for hypergraphs
Authors: Ruochi Zhang, Yuesong Zou, Jian Ma
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. |
| Researcher Affiliation | Academia | Ruochi Zhang School of Computer Science Carnegie Mellon University Yuesong Zou School of Computer Science Carnegie Mellon University IIIS, Tsinghua University Jian Ma School of Computer Science Carnegie Mellon University jianma@cs.cmu.edu |
| Pseudocode | No | Figure 2: Structure of the neural network used in Hyper-SAGNN. The input (x1, x2, ..., xk), representing the features for nodes 1 to k, passes through two branches of the network resulting in static embeddings (s1, s2, ..., sk) and dynamic embeddings (d1, d2, ..., dk), respectively. The layer for generating dynamic embeddings is the multi-head attention layer. An example of its mechanism on node 1 is shown in the figure as well. Then the pseudo-Euclidean distance of each pair of static and dynamic embeddings is calculated by a one-layered position-wise feed-forward network to produce probability scores (p1, p2, ..., pk). These scores are further averaged to represent whether this group of nodes forms a hyperedge. (Explanation: The paper contains diagrams explaining the model structure, but no pseudocode or algorithm blocks are provided.) |
| Open Source Code | No | We downloaded the source code of DHNE from its GitHub repository. (Explanation: The paper mentions using the source code of a *different* method (DHNE) but does not provide or explicitly state the availability of its own source code.) |
| Open Datasets | Yes | GPS (Zheng et al., 2010): GPS network. [...] MovieLens (Harper & Konstan, 2015): Social network. [...] drug: Medicine network from FAERS. [...] wordnet (Bordes et al., 2013): Semantic network from WordNet 3.0. [...] Ramani et al., 2017; Nagano et al., 2017 |
| Dataset Splits | Yes | We randomly split the hyperedge set into training and testing set by a ratio of 4:1. [...] The training is terminated when it reaches the maximum training epoch number (100) or the performance on the validation set no longer improves. |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU/CPU models or memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and refers to software like DHNE, node2vec, and skip-gram models, but it does not specify any programming languages, libraries, or frameworks with their version numbers. |
| Experiment Setup | Yes | For our Hyper-SAGNN, we set the representation size to 64, which is the same as DHNE. The number of heads in the multi-head attention layer is set to 8. [...] To train the model, we used the Adam optimizer with learning rate 1e-3. Each batch contains 96 positive hyperedges with 480 negative samples. The training is terminated when it reaches the maximum training epoch number (100) or the performance on the validation set no longer improves. |
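The scoring mechanism quoted above (static embeddings, tuple-dependent dynamic embeddings from self-attention, and averaged per-node probabilities computed from their squared differences) can be sketched in NumPy. This is a minimal, untrained illustration, not the authors' implementation: weight names are ours, a single attention head stands in for the paper's 8 heads, dimensions are shrunk from the paper's 64, and details such as attention masking within the tuple are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_in, d_model = 16, 8  # illustrative sizes; the paper uses 64-dim embeddings

# Randomly initialized weights (hypothetical names, untrained)
W_static = rng.normal(size=(d_in, d_model))  # static branch (feed-forward)
W_q = rng.normal(size=(d_in, d_model))       # dynamic branch: one attention head
W_k = rng.normal(size=(d_in, d_model))
W_v = rng.normal(size=(d_in, d_model))
w_out = rng.normal(size=(d_model,))          # one-layered position-wise scoring net
b_out = 0.0

def hyperedge_score(X):
    """Score a candidate hyperedge given node features X of shape (k, d_in)."""
    S = np.tanh(X @ W_static)                         # static embeddings s_i (tuple-independent)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(d_model), axis=-1)  # each node attends over the tuple
    D = np.tanh(A @ V)                                # dynamic embeddings d_i (tuple-dependent)
    p = sigmoid(((D - S) ** 2) @ w_out + b_out)       # per-node probabilities p_i
    return p.mean()                                   # averaged score in (0, 1)

X = rng.normal(size=(3, d_in))  # a candidate triple of nodes
print(hyperedge_score(X))
```

In training, scores like this would be pushed toward 1 for the 96 positive hyperedges and toward 0 for the 480 negative samples in each batch, using Adam with learning rate 1e-3 as quoted above.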