Sign Rank Limitations for Inner Product Graph Decoders

Authors: Su Hyeong Lee, Qingqi Zhang, Risi Kondor

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5 presents the experiments, and the conclusion and possible extensions are given in Section 6. For visual clarity, we first present several results using pedagogical graph structures.
Researcher Affiliation | Academia | ¹Department of Statistics, University of Chicago; ²Department of Computer Science, University of Chicago. Correspondence to: Su Hyeong Lee <sulee@uchicago.edu>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks clearly labeled as "Pseudocode" or "Algorithm".
Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit code-release statement) for the source code of the methodology described.
Open Datasets | Yes | All molecules with more than 27, 36, 40, and 60 nodes from the QM-9, ZINC, TU-Enzymes, and TU-Proteins datasets (Wu et al., 2018; Gómez-Bombarelli et al., 2018; Morris et al., 2020) are treated via GAE and DGAE, totaling 35, 118, 162, and 176 graphs, respectively. Treatment of the Cora and CiteSeer graphs (Yang et al., 2016), with 2708 and 3327 nodes respectively, is given in Appendix E and similarly reinforces the strength of the augmented architectures. (A dataset-filtering sketch is given after the table.)
Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits, percentages, or explicit sample counts for data partitioning.
Hardware Specification | Yes | Training is done for 30000 epochs on an NVIDIA GeForce RTX 3060 GPU with learning rate 10⁻⁴ and weak regularization λ = 10⁻⁷.
Software Dependencies | No | The paper does not list specific software dependencies with their version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1').
Experiment Setup | Yes | The training loss is given by L(A, Z) = BCE(A, σ(Re(ZZᵀ))) + λ||A − Re(ZZᵀ)||_F. Training is done for 30000 epochs on an NVIDIA GeForce RTX 3060 GPU with learning rate 10⁻⁴ and weak regularization λ = 10⁻⁷. The hyperparameters used were h₁ = 2h₂, λ = 10⁻⁷, and γ = 10⁻³; training was done for 20000 epochs for Cora and 10000 epochs for CiteSeer. (A minimal training-loop sketch under these settings follows the table.)
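
The "Open Datasets" row describes keeping only graphs whose node count exceeds a per-dataset threshold. The sketch below shows one way to reproduce that filtering; the PyTorch Geometric loaders, root paths, and dataset variants are assumptions rather than the authors' actual pipeline — only the thresholds (27, 36, 40, 60) and the reported graph counts (35, 118, 162, 176) come from the quoted text.

```python
# Sketch of the node-count filtering described in the "Open Datasets" row.
# Assumptions: PyTorch Geometric dataset loaders, local root paths, and the
# default dataset variants; only the thresholds and the reported target
# counts (35, 118, 162, 176 graphs) come from the quoted text.
from torch_geometric.datasets import QM9, ZINC, TUDataset

def keep_large_graphs(dataset, min_nodes):
    """Keep only graphs with strictly more than `min_nodes` nodes."""
    return [graph for graph in dataset if graph.num_nodes > min_nodes]

datasets = {
    "QM-9":        (QM9(root="data/QM9"), 27),
    "ZINC":        (ZINC(root="data/ZINC"), 36),
    "TU-Enzymes":  (TUDataset(root="data/TU", name="ENZYMES"), 40),
    "TU-Proteins": (TUDataset(root="data/TU", name="PROTEINS"), 60),
}

for name, (dataset, threshold) in datasets.items():
    kept = keep_large_graphs(dataset, threshold)
    print(f"{name}: {len(kept)} graphs with more than {threshold} nodes")
```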
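
The "Experiment Setup" row quotes the loss L(A, Z) = BCE(A, σ(Re(ZZᵀ))) + λ||A − Re(ZZᵀ)||_F together with the epoch count, learning rate, and λ. Below is a minimal PyTorch sketch of that objective under stated assumptions: free node embeddings stand in for the paper's encoder, Adam is an assumed optimizer choice, and the subtraction inside the Frobenius term is a reconstruction of the garbled formula, not a confirmed detail.

```python
# Minimal sketch of the quoted training objective
#     L(A, Z) = BCE(A, sigma(Re(Z Z^T))) + lambda * ||A - Re(Z Z^T)||_F
# with the reported settings (30000 epochs, lr = 1e-4, lambda = 1e-7).
# Assumptions: free node embeddings in place of the paper's encoder, Adam as
# the optimizer, and the subtraction form of the regularizer.
import torch
import torch.nn.functional as F

def train_inner_product_decoder(A, embed_dim=8, epochs=30_000, lr=1e-4, lam=1e-7):
    n = A.shape[0]
    # Complex embedding Z = Z_re + i * Z_im, stored as two real matrices.
    Z_re = torch.randn(n, embed_dim, requires_grad=True)
    Z_im = torch.randn(n, embed_dim, requires_grad=True)
    opt = torch.optim.Adam([Z_re, Z_im], lr=lr)

    for _ in range(epochs):
        opt.zero_grad()
        # Re(Z Z^T) = Z_re Z_re^T - Z_im Z_im^T for the non-conjugate transpose.
        logits = Z_re @ Z_re.T - Z_im @ Z_im.T
        bce = F.binary_cross_entropy_with_logits(logits, A)  # BCE(A, sigma(.))
        frob = torch.linalg.norm(A - logits, ord="fro")       # ||A - Re(ZZ^T)||_F
        loss = bce + lam * frob
        loss.backward()
        opt.step()
    return Z_re.detach(), Z_im.detach()

# Toy usage: adjacency matrix of a 4-node path graph, far fewer epochs for a demo.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
Z_re, Z_im = train_inner_product_decoder(A, embed_dim=2, epochs=1_000)
```

The real/imaginary split is just the identity Re(ZZᵀ) = Z_re Z_reᵀ − Z_im Z_imᵀ, which keeps the sketch in real-valued tensors while matching the Re(ZZᵀ) decoder quoted above.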