Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking

Authors: Aleksandar Bojchevski, Stephan Günnemann

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks."
Researcher Affiliation | Academia | "Aleksandar Bojchevski, Stephan Günnemann, Technical University of Munich, Germany, {a.bojchevski,guennemann}@in.tum.de"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes the method using mathematical formulations and textual descriptions. (A sketch of the core formulation is given after this table.)
Open Source Code | Yes | "We provide all datasets, the source code of G2G, and further supplementary material (https://www.kdd.in.tum.de/g2g)."
Open Datasets | Yes | "We use several attributed graph datasets. Cora (McCallum et al., 2000)... We provide all datasets, the source code of G2G, and further supplementary material (https://www.kdd.in.tum.de/g2g)."
Dataset Splits | Yes | "To evaluate the performance we hide a set of edges/non-edges from the original graph and train on the resulting graph. Similarly to Kipf & Welling (2016b) and Wang et al. (2016) we create a validation/test set that contains 5%/10% randomly selected edges respectively and equal number of randomly selected non-edges. We used the validation set for hyper-parameter tuning and early stopping and the test set only to report the performance." (See the split sketch after this table.)
Hardware Specification | Yes | "In fact, for graphs beyond 15K nodes we had to revert to slow training on the CPU since the data did not fit on the GPU memory (12GB)."
Software Dependencies | No | The paper mentions using Adam for optimization and rectifier units/exponential linear units as activation functions. However, it does not specify software versions for programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | "The parameters are optimized using Adam (Kingma & Ba, 2014) with a fixed learning rate of 0.001." ... "As a sensible default we recommend an encoder with a single hidden layer of size s1 = 512." ... "small number of epochs T needed for convergence (T ≤ 2000 for all shown experiments, see e.g. Fig. 3(b))." (See the training-setup sketch after this table.)
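The following sketches expand on three rows above. First, since the Pseudocode row notes that the method is specified only through mathematical formulations, here is a minimal NumPy sketch of the core G2G quantities as the paper defines them: each node is embedded as a Gaussian with diagonal covariance, the dissimilarity ("energy") between nodes i and j is the KL divergence E_ij = D_KL(N_j || N_i), and training minimizes a square-exponential ranking loss over triplets (i, j, k) in which j is closer to i in the graph than k. The function names here are ours, not the authors'.

```python
import numpy as np

def kl_energy(mu_i, sigma2_i, mu_j, sigma2_j):
    """KL divergence D_KL(N_j || N_i) between diagonal Gaussians,
    used as the asymmetric dissimilarity 'energy' E_ij in G2G."""
    L = mu_i.shape[-1]  # embedding dimension
    return 0.5 * (
        np.sum(sigma2_j / sigma2_i, axis=-1)                      # tr(S_i^-1 S_j)
        + np.sum((mu_i - mu_j) ** 2 / sigma2_i, axis=-1)          # Mahalanobis term
        - L
        - np.sum(np.log(sigma2_j) - np.log(sigma2_i), axis=-1)    # log-det ratio
    )

def square_exponential_loss(e_pos, e_neg):
    """Ranking loss over triplets (i, j, k): push E_ij toward 0 for the
    closer node j and E_ik toward infinity for the farther node k."""
    return np.mean(e_pos ** 2 + np.exp(-e_neg))
```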
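Second, a sketch of the split protocol quoted in the Dataset Splits row: hide 5%/10% of edges for validation/test together with equally many sampled non-edges, and train on the remaining graph. It assumes an undirected graph given as a SciPy sparse adjacency matrix; the helper name and the rejection-sampling loop for non-edges are our own choices.

```python
import numpy as np
import scipy.sparse as sp

def train_val_test_split(A, p_val=0.05, p_test=0.10, seed=0):
    """Hide p_val/p_test of the edges (plus equally many sampled
    non-edges) for validation/test; train on the remaining graph."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    A = sp.triu(A, k=1).tocoo()            # undirected: keep upper triangle
    edges = np.column_stack([A.row, A.col])
    rng.shuffle(edges)
    n_val, n_test = int(p_val * len(edges)), int(p_test * len(edges))
    val_e, test_e, train_e = np.split(edges, [n_val, n_val + n_test])

    # Sample an equal number of non-edges (node pairs absent from the graph).
    edge_set = {tuple(e) for e in edges}
    non_edges = []
    while len(non_edges) < n_val + n_test:
        i, j = rng.integers(0, N, size=2)
        if i < j and (i, j) not in edge_set:
            non_edges.append((i, j))
    non_edges = np.array(non_edges)
    val_ne, test_ne = non_edges[:n_val], non_edges[n_val:]

    data = np.ones(len(train_e))
    A_train = sp.coo_matrix((data, (train_e[:, 0], train_e[:, 1])), shape=(N, N))
    A_train = (A_train + A_train.T).tocsr()  # symmetrize the training graph
    return A_train, (val_e, val_ne), (test_e, test_ne)
```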
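Third, a sketch matching the Experiment Setup row: a single-hidden-layer encoder of size 512 producing the Gaussian parameters, optimized with Adam at a fixed learning rate of 0.001 for at most 2000 epochs. PyTorch is an assumption (the paper does not name its framework, per the Software Dependencies row), as is the ELU(x)+1 transform used to keep variances positive; the class name and dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Single hidden layer of size 512 (the paper's recommended default),
    mapping node attributes to the mean and variance of a Gaussian."""
    def __init__(self, n_attrs, emb_dim, hidden=512):
        super().__init__()
        self.hidden = nn.Linear(n_attrs, hidden)
        self.mu = nn.Linear(hidden, emb_dim)
        self.sigma = nn.Linear(hidden, emb_dim)

    def forward(self, x):
        h = F.relu(self.hidden(x))
        mu = self.mu(h)
        # ELU(x) + 1 keeps variances strictly positive (our assumption;
        # the paper only states that exponential linear units are used).
        sigma2 = F.elu(self.sigma(h)) + 1.0
        return mu, sigma2

encoder = GaussianEncoder(n_attrs=1433, emb_dim=64)  # e.g. Cora's attribute size
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)  # fixed lr 0.001
# for epoch in range(2000):  # T <= 2000 epochs reported for all experiments
#     ...compute triplet energies and the square-exponential loss, then step.
```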