Generalized Laplacian Eigenmaps

Authors: Hao Zhu, Piotr Koniusz

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show on popular benchmarks/backbones that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines." From Section 6 (Experiments): "We evaluate GLEN (its relaxation) on transductive and inductive node classification tasks and node clustering. GLEN is compared to popular unsupervised, contrastive, and (semi-)supervised approaches. Except for the classifier, unsupervised models do not use labels. To train a graph encoder in an unsupervised manner, GCN [17] minimizes a reconstruction error which only considers the similarity matrix and ignores the dissimilarity information."
Researcher Affiliation | Collaboration | Hao Zhu, Piotr Koniusz* (Data61/CSIRO, Australian National University); allenhaozhu@gmail.com, piotr.koniusz@data61.csiro.au
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | "*The corresponding author. Code: https://github.com/allenhaozhu/GLEN."
Open Datasets | Yes | "Datasets. GLEN is evaluated on four citation networks: Cora, Citeseer, Pubmed, and Cora Full [17, 4] in the transductive setting. We also employ the large-scale Ogbn-arxiv from OGB [14]."
Dataset Splits | Yes | "Metrics. As fixed data splits [45] often benefit transductive models that overfit, we average results over 50 random splits for each dataset. We evaluate performance for 5 and 20 samples per class. Nonetheless, we also evaluate our model on the standard splits."
Hardware Specification | Yes | "For graphs with over 100 thousand nodes and 10 million edges (Reddit), GLEN runs fast on an NVIDIA 1080 GPU."
Software Dependencies | No | The paper names the methods it builds on (e.g., GCN, SGC, S2GC) but does not provide version numbers for any software components or libraries (e.g., Python or PyTorch versions).
Experiment Setup | No | "We set hyperparameters based on the settings described in prior papers." Implementation details are deferred to Appendix E and are not specified in the main text.
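For context on the objective the paper's title refers to: classical Laplacian eigenmaps embeds graph nodes via the smallest non-trivial eigenvectors of the graph Laplacian. The sketch below shows that classical baseline only, not the GLEN objective itself; the function name and the toy barbell graph are illustrative assumptions.

```python
# Minimal sketch of classical Laplacian eigenmaps (the baseline GLEN
# generalizes), NOT the GLEN objective. Embeds nodes using the smallest
# non-trivial eigenvectors of the normalized Laplacian
# L = I - D^{-1/2} A D^{-1/2}.
import numpy as np

def laplacian_eigenmaps(A, dim=2):
    """A: symmetric adjacency matrix (n x n); returns an n x dim embedding."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]        # skip the trivial 0-eigenvalue eigenvector

# Toy graph: two triangles joined by one bridge edge. The first embedding
# coordinate (the Fiedler vector) separates the two triangles.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
Y = laplacian_eigenmaps(A, dim=2)
```

Nodes within the same triangle receive first coordinates of the same sign, while the two triangles receive opposite signs, which is the community structure a spectral embedding is expected to recover.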
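The split protocol quoted under "Dataset Splits" (averaging over 50 random splits, with 5 or 20 labeled samples per class) can be sketched as follows. The function name, the seed handling, and the toy labeling are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of the evaluation protocol: 50 random splits, sampling a
# fixed number of labeled nodes per class for training and holding out the
# rest. Names and details here are assumptions, not the authors' code.
import numpy as np

def random_split(labels, per_class=20, seed=0):
    """Return (train, rest): `per_class` node indices per class, and the rest."""
    rng = np.random.default_rng(seed)
    train = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train.extend(rng.choice(idx, size=per_class, replace=False))
    train = np.array(sorted(train))
    rest = np.setdiff1d(np.arange(len(labels)), train)
    return train, rest

labels = np.repeat([0, 1, 2], 100)   # toy 3-class node labeling
for seed in range(50):               # average results over 50 random splits
    train, rest = random_split(labels, per_class=20, seed=seed)
    # ... fit the encoder / classifier on `train`, evaluate on `rest` ...
```

Resampling the labeled set per seed and reporting the mean avoids the overfitting-to-a-fixed-split effect the paper cites as motivation.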