Generative Graph Dictionary Learning

Authors: Zhichen Zeng, Ruike Zhu, Yinglong Xia, Hanqing Zeng, Hanghang Tong

ICML 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments demonstrate the effectiveness of the obtained node and graph embeddings, and our algorithm achieves significant improvements over the state-of-the-art methods." Also: "Extensive experiments show that FRAME achieves significant improvement on graph-level and node-level tasks, outperforming the state-of-the-art by 8.0% on graph classification, 0.5% on graph clustering, and 2.5% on node clustering, respectively." |
| Researcher Affiliation | Collaboration | Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Meta, CA, USA. |
| Pseudocode | Yes | Algorithm 1 (FRAME). |
| Open Source Code | Yes | The code is implemented by the authors from the University of Illinois and is available at https://github.com/zhichenz98/FraMe-ICML23. |
| Open Datasets | Yes | "All the real-world datasets we use are from (Morris et al., 2020) and available online" at https://chrsmrrs.github.io/datasets/; the paper lists datasets such as "ENZYMES (Borgwardt et al., 2005)". |
| Dataset Splits | Yes | "For graph classification, we apply 10-fold cross-validation on the benchmark datasets." |
| Hardware Specification | Yes | "All experiments are conducted on the Linux platform with an Intel Xeon Gold 6240R CPU and an NVIDIA Tesla V100 SXM2 GPU." |
| Software Dependencies | No | The paper names the software libraries it uses (POT toolbox, GraKeL library, Karate Club library) but does not provide their version numbers. |
| Experiment Setup | No | The paper refers to hyperparameters (α, q, T, L) in Algorithm 1 and discusses the effect of σ in Section 4.4, but does not give specific numerical values for these or other training configurations (e.g., learning rate, batch size, optimizer settings) in the main text. |
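The Dataset Splits row reports that the paper uses 10-fold cross-validation for graph classification but, as with the other setup details, the exact splitting code is not published. A minimal stdlib-only sketch of what such a protocol typically looks like is below; the function name, seed, and the example dataset size of 600 graphs are illustrative assumptions, not taken from the paper.

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index lists for standard k-fold cross-validation.

    Illustrative sketch only -- the paper does not publish its splitting
    code, so names and the seed here are hypothetical.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test

# Usage: 10 folds over a hypothetical dataset of 600 graphs.
splits = list(k_fold_indices(600, k=10))
```

Each of the 10 iterations holds out one fold (60 graphs here) for testing and trains on the remaining nine; accuracy is then averaged over the folds.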