SGAT: Simplicial Graph Attention Network
Authors: See Hian Lee, Feng Ji, Wee Peng Tay
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the efficacy of our approach with node classification tasks on heterogeneous graph datasets and further show SGAT's ability in extracting structural information by employing random node features. Numerical experiments indicate that SGAT performs better than other current state-of-the-art heterogeneous graph learning methods. |
| Researcher Affiliation | Academia | See Hian Lee, Feng Ji and Wee Peng Tay, Nanyang Technological University, Singapore. seehian001@e.ntu.edu.sg, {jifeng, wptay}@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1 (Construction of k-simplices). Input: the adjacency list of the heterogeneous graph Adj list, node features X, number of shared non-target neighbours ϵ, number of hops away η, maximal k-order considered K, the maximum simplex order to construct λ. Output: the set of all k-simplices, All KSimplices. (A hedged implementation sketch of this interface is given after the table.) |
| Open Source Code | No | The paper does not provide any concrete access information (link or explicit statement of release) to the source code for the described methodology. |
| Open Datasets | Yes | The heterogeneous datasets utilized are two citation network datasets, DBLP (https://dblp.uni-trier.de/) and ACM, and a movie dataset, IMDB (https://www.imdb.com/interfaces/). |
| Dataset Splits | No | The paper refers to a 'Standard split' in Table 1 but does not provide specific percentages or sample counts for training, validation, and test sets, nor does it cite a source for this standard split within the experimental setup description. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' but does not specify versions for any key software components or libraries required to replicate the experiments. |
| Experiment Setup | Yes | For all models, the hidden units are set to 64, the Adam optimizer was used, and its hyperparameters, such as learning rate and weight decay, are chosen to yield the best performance. For SGAT, we set K, the dimension of the simplicial complexes, to 2 and the number of layers to 2 for all the datasets. Moreover, when η ≥ 2, the dimension of the attention vector q_{k,η} (cf. (8)) is set to 128. Besides the parameters mentioned above, ϵ^k_η, η, and λ are tuned for each dataset. Specifically, for ACM, we choose η = 1, ϵ^1_1 = 1, and λ = 20. For DBLP, η = 2, ϵ^1_1 = 3, ϵ^1_2 = 4, and λ = 10. For IMDB, η = 1, ϵ^1_1 = 1, and λ = 10. (A configuration sketch collecting these values follows the table.) |
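
The pseudocode row above quotes only the interface of Algorithm 1, not its body. The following is a minimal sketch of one plausible reading of that interface, assuming that two target nodes are connected whenever they share at least ϵ non-target neighbours within η hops, and that k-simplices are then read off as (k+1)-cliques of the resulting graph. The function name `build_k_simplices`, the `target_nodes` argument, the use of `networkx`, and the interpretation of λ as a per-order cap on the number of simplices are all assumptions, not details taken from the paper.

```python
from itertools import combinations

import networkx as nx


def build_k_simplices(adj_list, target_nodes, epsilon, eta, K, lam):
    """Sketch of the Algorithm 1 interface: return {k: list of k-simplices}."""
    # Build an undirected graph from the heterogeneous adjacency list.
    G = nx.Graph()
    G.add_nodes_from(target_nodes)
    for u, nbrs in adj_list.items():
        G.add_edges_from((u, v) for v in nbrs)

    # Non-target nodes reachable from each target node within eta hops.
    reach = {}
    for u in target_nodes:
        within = nx.single_source_shortest_path_length(G, u, cutoff=eta)
        reach[u] = {v for v in within if v not in target_nodes and v != u}

    # 1-simplices: target pairs sharing at least epsilon non-target neighbours.
    H = nx.Graph()
    H.add_nodes_from(target_nodes)
    for u, v in combinations(target_nodes, 2):
        if len(reach[u] & reach[v]) >= epsilon:
            H.add_edge(u, v)

    # Higher-order simplices taken as (k+1)-cliques of H; lam is read here as a
    # cap on how many simplices are kept per order (an assumption).
    all_k_simplices = {}
    for k in range(1, K + 1):
        cliques = [frozenset(c) for c in nx.enumerate_all_cliques(H) if len(c) == k + 1]
        all_k_simplices[k] = cliques[:lam]
    return all_k_simplices
```

Under this reading, the ACM setting reported in the table (η = 1, ϵ^1_1 = 1) would already form a 1-simplex between any two target nodes that share a single non-target neighbour one hop away.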
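The experiment-setup row can likewise be collected into a single configuration sketch. The dictionaries below only restate the hyperparameters quoted above; the key names (e.g. `hidden_units`, `attn_dim_eta_ge_2`) and the (k, η) indexing of ϵ are illustrative assumptions, not identifiers from any released code.

```python
# Settings reported as common to all datasets.
COMMON = {
    "hidden_units": 64,
    "optimizer": "Adam",       # learning rate and weight decay tuned per dataset
    "K": 2,                    # dimension of the simplicial complexes
    "num_layers": 2,
    "attn_dim_eta_ge_2": 128,  # dimension of q_{k,eta} when eta >= 2
}

# Per-dataset values of eta, epsilon^k_eta (keyed here by (k, eta)), and lambda.
PER_DATASET = {
    "ACM":  {"eta": 1, "epsilon": {(1, 1): 1},            "lambda": 20},
    "DBLP": {"eta": 2, "epsilon": {(1, 1): 3, (1, 2): 4}, "lambda": 10},
    "IMDB": {"eta": 1, "epsilon": {(1, 1): 1},            "lambda": 10},
}
```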