Learning Conjoint Attentions for Graph Neural Nets

Authors: Tiantian He, Yew-Soon Ong, Lu Bai

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | CATs utilizing the proposed Conjoint Attention strategies have been extensively tested on well-established benchmark datasets and comprehensively compared with state-of-the-art baselines. The notable performance obtained demonstrates the effectiveness of the proposed Conjoint Attentions.
Researcher Affiliation | Academia | Tiantian He (1,2), Yew-Soon Ong (1,2), Lu Bai (1,2); (1) Agency for Science, Technology and Research (A*STAR); (2) DSAIR, Nanyang Technological University; {He_Tiantian,Bai_Lu}@ihpc.a-star.edu.sg, Ong_Yew_Soon@hq.a-star.edu.sg, {tiantian.he,bailu,asysong}@ntu.edu.sg
Pseudocode | No | The paper describes its methods mathematically and textually but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | Five widely-used network datasets, which are Cora, Cite, Pubmed [21, 28], Coauthor CS [29], and OGB-Arxiv [15], are used in our experiments.
Dataset Splits | Yes | For the training paradigms of both learning tasks, we closely follow the experimental scenarios established in the related works [15, 17, 33, 43]. For the testing phase of the different approaches, we use the test splits that are publicly available for classification tasks, and all nodes for clustering tasks. (A hedged dataset-loading sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | In the training stage, we construct the two-layer network structure (i.e., with one hidden layer) for all the baselines and the different versions of CATs. On each set of test data, all approaches are run ten times to obtain statistically stable performance. Other details of the experimental settings are left to the appendix. (A hedged training-and-evaluation sketch follows the table.)
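
Since no code or loaders are released, reproducing the data side means re-assembling it from public sources. Below is a minimal sketch of how the five benchmarks and their public splits could be obtained, assuming PyTorch Geometric and the OGB package; the paper does not state which framework or loader functions were actually used, so every call here is an assumption.

    # Hedged sketch: the paper does not name a data-loading framework.
    # PyTorch Geometric and the OGB package are assumed here.
    from torch_geometric.datasets import Planetoid, Coauthor
    from ogb.nodeproppred import PygNodePropPredDataset

    # Citation benchmarks ship with public train/val/test masks that are
    # commonly reused for node classification.
    cora = Planetoid(root="data/Planetoid", name="Cora")[0]
    citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")[0]
    pubmed = Planetoid(root="data/Planetoid", name="PubMed")[0]

    # Coauthor CS has no canonical split; the paper defers split details
    # to the cited prior work and its appendix.
    coauthor_cs = Coauthor(root="data/Coauthor", name="CS")[0]

    # OGB-Arxiv provides an official split via get_idx_split().
    arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")
    split_idx = arxiv.get_idx_split()  # keys: "train", "valid", "test"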
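
The two-layer structure and the ten-run protocol can be mocked up in the same spirit. In the sketch below, GATConv stands in for the paper's Conjoint Attention layer (which is not public), and the hidden size, head count, dropout, learning rate, and epoch budget are illustrative guesses, since those details are deferred to the paper's appendix.

    # Hedged sketch: GATConv is a placeholder for the unreleased Conjoint
    # Attention layer; all hyperparameters below are assumptions.
    import torch
    import torch.nn.functional as F
    from torch_geometric.datasets import Planetoid
    from torch_geometric.nn import GATConv

    class TwoLayerAttentionNet(torch.nn.Module):
        # Two-layer structure (one hidden layer), matching the depth the
        # paper reports using for all baselines and CAT variants.
        def __init__(self, in_dim, hidden_dim, num_classes, heads=8):
            super().__init__()
            self.conv1 = GATConv(in_dim, hidden_dim, heads=heads, dropout=0.6)
            self.conv2 = GATConv(hidden_dim * heads, num_classes, heads=1, dropout=0.6)

        def forward(self, x, edge_index):
            x = F.elu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    def run_once(data, seed, epochs=200):
        # One full train/evaluate cycle on the public split.
        torch.manual_seed(seed)
        model = TwoLayerAttentionNet(data.num_features, 8, int(data.y.max()) + 1)
        opt = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
        for _ in range(epochs):
            model.train()
            opt.zero_grad()
            out = model(data.x, data.edge_index)
            F.cross_entropy(out[data.train_mask], data.y[data.train_mask]).backward()
            opt.step()
        model.eval()
        pred = model(data.x, data.edge_index).argmax(dim=-1)
        return (pred[data.test_mask] == data.y[data.test_mask]).float().mean().item()

    # "All approaches are run ten times": report mean and std over seeds.
    data = Planetoid(root="data/Planetoid", name="Cora")[0]
    accs = torch.tensor([run_once(data, seed) for seed in range(10)])
    print(f"test accuracy: {accs.mean():.4f} +/- {accs.std():.4f}")

Even with this scaffold, matching the reported numbers would still require the appendix hyperparameters and the actual Conjoint Attention layer.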