STERLING: Synergistic Representation Learning on Bipartite Graphs

Authors: Baoyu Jing, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu, Hanghang Tong

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical evaluation on various benchmark datasets and tasks demonstrates the effectiveness of STERLING for extracting node embeddings.
Researcher Affiliation | Collaboration | 1 University of Illinois at Urbana-Champaign; 2 Northwestern University; 3 Korea Advanced Institute of Science & Technology; 4 MIT-IBM Watson AI Lab, IBM Research; 5 Arizona State University
Pseudocode | No | The paper describes the model architecture and training process using mathematical equations and textual descriptions, but it does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code.
Open Datasets | Yes | ML-100K and Wiki are processed by (Cao et al. 2021), where Wiki has two splits (50%/40%) for training. IMDB, Cornell and Citeseer are document-keyword bipartite graphs (Xu et al. 2019).
Dataset Splits | Yes | Wiki has two splits (50%/40%) for training. (A hedged split sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for conducting the experiments.
Software Dependencies | No | The paper mentions general software components such as neural networks and encoders, but it does not provide specific version numbers for any libraries, frameworks, or programming languages used (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup | Yes | We set N_K = N_L for co-clusters. We perform grid search over several hyper-parameters, such as N_knn, N_K, the number of layers, and the embedding size d. We set δ as the absolute activation. Please refer to the Appendix for more details. (A hedged grid-search sketch follows the table.)
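
The Wiki splits quoted above suggest two dataset variants that reserve 50% and 40% of edges for training. The paper does not describe its preprocessing, so the following is only a minimal sketch of how such edge splits might be produced; the function name split_edges, the toy graph, and the fixed seed are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def split_edges(edges: np.ndarray, train_ratio: float, seed: int = 0):
    """Shuffle bipartite edges (u, v) and cut off a training slice.

    Hypothetical helper: the paper does not specify how the 50%/40%
    Wiki training splits were generated.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(edges))
    n_train = int(train_ratio * len(edges))
    return edges[perm[:n_train]], edges[perm[n_train:]]

# Example: all 20 edges of a complete 4x5 bipartite graph (toy data).
edges = np.array([(u, v) for u in range(4) for v in range(5)])
train_50, held_out_50 = split_edges(edges, train_ratio=0.5)
train_40, held_out_40 = split_edges(edges, train_ratio=0.4)
print(len(train_50), len(train_40))  # 10 8
```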
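
The experiment-setup row describes a grid search with N_K tied to N_L and δ fixed to the absolute activation, i.e., δ(x) = |x|. Below is a minimal grid-search sketch under assumed candidate values (the paper defers its actual search ranges to the Appendix); train_and_eval is a hypothetical stub standing in for training STERLING and scoring the embeddings, not the authors' code.

```python
import itertools

def train_and_eval(cfg: dict) -> float:
    """Hypothetical stub: train the model under cfg and return a
    validation score. Replace with a real training/evaluation loop."""
    return 0.0

# Candidate values are placeholders, not the paper's actual ranges.
grid = {
    "n_knn": [5, 10, 20],    # N_knn: neighbours used for the k-NN graph
    "n_k": [8, 16, 32],      # N_K: number of co-clusters
    "num_layers": [1, 2],    # number of encoder layers
    "embed_dim": [64, 128],  # embedding size d
}

best_cfg, best_score = None, float("-inf")
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid, values))
    cfg["n_l"] = cfg["n_k"]  # the paper sets N_K = N_L for co-clusters
    score = train_and_eval(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
print(best_cfg)
```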