Graph-Wise Common Latent Factor Extraction for Unsupervised Graph Representation Learning

Authors: Thilini Cooray, Ngai-Man Cheung
Pages: 6420-6428

Venue: AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments and analysis, we demonstrate that, while extracting common latent factors is beneficial for graph-level tasks to alleviate distractions caused by local variations of individual nodes or local neighbourhoods, it also benefits node-level tasks by enabling long-range node dependencies, especially for disassortative graphs."
Researcher Affiliation | Academia | Thilini Cooray, Ngai-Man Cheung; Singapore University of Technology and Design (SUTD); thilini_cooray@mymail.sutd.edu.sg, ngaiman_cheung@sutd.edu.sg
Pseudocode | No | The paper describes the ACCUM mechanism with equations and figures but does not present it as a formal pseudocode or algorithm block.
Open Source Code | Yes | "We propose deep GCFX: a novel autoencoder-based approach with iterative query-based reasoning and feature masking capability to extract common latent factors." Footnote: "Our source code: https://github.com/thilinicooray/deep_GCFX"
Open Datasets | Yes | "To evaluate the discriminative ability of extracted common latent factors z_c on downstream tasks, we select graph classification. We report results for deep GCFX when only z_c is used as the graph embedding. deep GCFX++ combines both common and local factors with a gating mechanism as αz_c + (1 − α) Σ_{j=1}^{|V|} z_l^{(j)}, where α denotes the contribution from graph-wise common factors. We compare deep GCFX with existing state-of-the-art methods and report results in Table 1. Compared to existing work on skip-gram and contrastive learning, deep GCFX achieves comparable or better results for four datasets, and our deep GCFX++ (when the biggest contribution comes from common latent factors) achieves state-of-the-art results for 5 datasets and is very competitive with GraphCL (You et al. 2020), which uses data augmentations, for REDDIT-MULTI-5K, showing the effectiveness of GCFX over contrastive learning, whose model performance relies on the selection of negative samples. We achieve very competitive results with inter-graph similarity methods by only using the current graph for embedding learning, compared to their pair-wise comparisons. These results show the effectiveness of utilizing graph-wise common factors as graph embeddings." (A minimal sketch of this gating combination appears after the table.)
Dataset Splits | Yes | "Table 1: Mean 10-fold cross-validation accuracy on graph classification. Results in bold indicate the best accuracy for inter-graph-similarity-based and non-inter-graph-similarity-based methods separately. Underlined results show the second-best performances." "We strictly follow the experiment and evaluation setup and datasets as in (Sun et al. 2020; Hassani and Khasahmadi 2020) for deep GCFX and the GVAE baseline. Results of other methods are taken from their papers." (See the evaluation sketch after the table.)
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running experiments.
Software Dependencies | No | The paper mentions using a GNN and implicitly refers to frameworks like GVAE, but it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | No | The paper describes the model architecture and optimization objective, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings.
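
For concreteness, here is a minimal sketch of the gating combination quoted in the Open Datasets row, which forms the deep GCFX++ graph embedding as αz_c + (1 − α) Σ_{j=1}^{|V|} z_l^{(j)}. The tensor shapes, function name, and choice of PyTorch are assumptions for illustration; the paper does not publish this exact code.

```python
import torch

def combine_factors(z_c: torch.Tensor, z_l: torch.Tensor, alpha: float) -> torch.Tensor:
    """Gate graph-wise common factors with summed node-local factors.

    z_c:   (d,)   graph-wise common latent factor for one graph
    z_l:   (n, d) node-local latent factors, one row per node
    alpha: contribution of the common factors, in [0, 1]

    Returns a deep-GCFX++-style graph embedding:
    alpha * z_c + (1 - alpha) * sum_j z_l[j]  (shapes are assumptions).
    """
    return alpha * z_c + (1.0 - alpha) * z_l.sum(dim=0)

# Toy usage with random factors (dimensions are illustrative only).
z_c = torch.randn(64)         # common latent factor
z_l = torch.randn(10, 64)     # local factors for a 10-node graph
graph_embedding = combine_factors(z_c, z_l, alpha=0.7)
print(graph_embedding.shape)  # torch.Size([64])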
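
The Dataset Splits row cites mean 10-fold cross-validation accuracy following the protocol of Sun et al. (2020) and Hassani and Khasahmadi (2020). Below is a hedged sketch of that style of evaluation on frozen graph embeddings; the SVM classifier, the C grid, and the random seed are assumptions, not details confirmed by the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def ten_fold_accuracy(embeddings: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Mean/std accuracy of an SVM over 10 stratified folds of frozen embeddings."""
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in folds.split(embeddings, labels):
        # Inner grid search over the SVM regularization strength (grid is assumed).
        clf = GridSearchCV(SVC(), {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
        clf.fit(embeddings[train_idx], labels[train_idx])
        accuracies.append(clf.score(embeddings[test_idx], labels[test_idx]))
    return float(np.mean(accuracies)), float(np.std(accuracies))

# Toy usage with random data standing in for learned graph embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64)).astype(np.float32)
y = rng.integers(0, 2, size=100)
mean_acc, std_acc = ten_fold_accuracy(X, y)
print(f"10-fold accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")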