SAIL: Self-Augmented Graph Contrastive Learning

Authors: Lu Yu, Shichao Pei, Lizhong Ding, Jun Zhou, Longfei Li, Chuxu Zhang, Xiangliang Zhang

AAAI 2022, pp. 8927-8935 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Specifically, we derive a theoretical analysis and provide an empirical demonstration about the non-steady performance of GNNs over different graph datasets... We demonstrate the competitive performance of SAIL on a variety of graph applications."
Researcher Affiliation | Collaboration | 1 King Abdullah University of Science and Technology, Saudi Arabia; 2 Ant Group, Hangzhou, China; 3 University of Notre Dame, USA; 4 Brandeis University, USA; 5 Inception Institute of Artificial Intelligence, UAE
Pseudocode | Yes | Algorithm 1: SAIL (a generic contrastive-loss sketch, not Algorithm 1 itself, appears after this table)
Open Source Code | No | The paper contains no statement or link indicating that source code for the described method is publicly available.
Open Datasets | Yes | "We compare the proposed method with various state-of-the-art methods on six datasets including three citation networks (Cora, Citeseer, Pubmed) (Kipf and Welling 2017), product co-purchase networks (Computers, Photo), and one co-author network subjected to a subfield Computer Science (CS). The product and co-author graphs are benchmark datasets from pytorch geometric (Fey and Lenssen 2019)." (See the loading sketch after this table.)
Dataset Splits | No | The paper evaluates on six benchmark datasets (Cora, Citeseer, Pubmed, Computers, Photo, CS) but does not say how they were split into training, validation, and test sets; no percentages or sample counts are given. (A hypothetical split sketch follows the table.)
Hardware Specification | No | The paper gives no details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, or type of computing cluster).
Software Dependencies | No | The paper mentions 'pytorch geometric' but does not specify its version or any other software dependencies with version numbers needed for replication. (See the environment check after this table.)
Experiment Setup | No | The main text does not report the experimental setup, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or other training configurations.
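For readers who want a concrete picture of the objective family the paper belongs to, below is a minimal node-level contrastive (InfoNCE) loss in PyTorch. This is a generic sketch only: the paper's actual procedure is its Algorithm 1 (SAIL), and nothing here should be read as the authors' method; the temperature `tau` and the two-view setup are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Generic node-level InfoNCE loss, NOT the paper's Algorithm 1.

    z1, z2: [N, d] embeddings of the same N nodes under two views;
    the matching row in the other view is the positive, all other
    rows are negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # [N, N] cosine similarities
    labels = torch.arange(z1.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: two noisy views of the same embeddings.
z1 = torch.randn(8, 16)
z2 = z1 + 0.1 * torch.randn(8, 16)
print(info_nce(z1, z2))
```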
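The six benchmarks named in the paper are all distributed with PyTorch Geometric. Below is a minimal loading sketch; the paper cites Fey and Lenssen (2019) but pins no version, so the class names follow the current `torch_geometric` API, and the `root` download directory is a hypothetical choice.

```python
# Fetch the six benchmark graphs named in the paper via PyTorch Geometric.
from torch_geometric.datasets import Planetoid, Amazon, Coauthor

root = "data"  # hypothetical download directory

datasets = {
    # Three citation networks (Kipf and Welling 2017)
    "Cora":     Planetoid(root=f"{root}/Planetoid", name="Cora"),
    "Citeseer": Planetoid(root=f"{root}/Planetoid", name="CiteSeer"),
    "Pubmed":   Planetoid(root=f"{root}/Planetoid", name="PubMed"),
    # Two product co-purchase networks
    "Computers": Amazon(root=f"{root}/Amazon", name="Computers"),
    "Photo":     Amazon(root=f"{root}/Amazon", name="Photo"),
    # One co-author network (Computer Science subfield)
    "CS": Coauthor(root=f"{root}/Coauthor", name="CS"),
}

for name, ds in datasets.items():
    data = ds[0]  # each benchmark is a single graph
    print(name, data.num_nodes, data.num_edges, ds.num_classes)
```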
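Since the paper reports no split protocol, the sketch below is a hypothetical fallback, not the authors' setup. The Planetoid graphs ship with a canonical public split (`data.train_mask` / `val_mask` / `test_mask`), while the Amazon and Coauthor graphs carry no masks; the random split and the 10/10/80 ratios here are assumptions for illustration only.

```python
# Hypothetical random node split; ratios are placeholders, not the
# paper's protocol (which is unreported).
import torch

def random_node_split(data, train_ratio=0.1, val_ratio=0.1, seed=0):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(data.num_nodes, generator=g)
    n_train = int(train_ratio * data.num_nodes)
    n_val = int(val_ratio * data.num_nodes)

    def mask(idx):
        m = torch.zeros(data.num_nodes, dtype=torch.bool)
        m[idx] = True
        return m

    data.train_mask = mask(perm[:n_train])
    data.val_mask = mask(perm[n_train:n_train + n_val])
    data.test_mask = mask(perm[n_train + n_val:])
    return data
```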
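Because the paper names PyTorch Geometric without pinning versions, anyone attempting replication has to record their own environment. A minimal check, assuming `torch` and `torch_geometric` are installed:

```python
# Record the environment actually used, since the paper pins no versions.
import torch
import torch_geometric

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("CUDA available:", torch.cuda.is_available())
```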