Distributed, Egocentric Representations of Graphs for Detecting Critical Structures

Authors: Ruo-Chun Tzeng, Shan-Hung Wu

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments and the results show that Ego-CNNs (1) can lead to comparable task performance as the state-of-the-art graph embedding models, (2) works nicely with CNN visualization techniques to illustrate the detected structures, and (3) is efficient and can incorporate with scale-free priors, which commonly occurs in social network datasets, to further improve the training efficiency." "In this section, we conduct experiments using real-world datasets to verify (i) Ego-CNNs can lead to comparable task performance as compared to existing graph embedding approaches; (ii) the visualization technique discussed in Section 3.2 can output meaningful critical structures; and (iii) the scale-free regularizer introduced in Section 3.3 can detect the repeating patterns in a scale-free network."
Researcher Affiliation | Collaboration | Ruo-Chun Tzeng (Microsoft Inc.); Shan-Hung Wu (CS Department, National Tsing Hua University, Taiwan).
Pseudocode | No | The paper describes its model and methods verbally and with equations, but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/rutzeng/EgoCNN."
Open Datasets | Yes | "We benchmark on both bioinformatic and social-network datasets pre-processed by (Kersting et al., 2016). In the bioinformatic datasets, graphs are provided with node/edge labels and/or attributes, while in the social network datasets, only pure graph structures are given. We consider the task of graph classification. See DGK (Yanardag & Vishwanathan, 2015) for more details about the task and benchmark datasets." URL: http://graphkernels.cs.tu-dortmund.de.
Dataset Splits | Yes | "We follow DGK to set up the experiments and report the average test accuracy using the 10-fold cross validation (CV)."
Hardware Specification | Yes | "All experiments run on a computer with 48-core Intel(R) Xeon(R) E5-2690 CPU, 64 GB RAM, and NVidia Geforce GTX 1070 GPU."
Software Dependencies | No | The paper states "We use Tensorflow to implement our methods" but does not provide version numbers for TensorFlow or any other key software components.
Experiment Setup | Yes | "The architecture is composed of 1 node embedding layer (Patchy-San with 128 filters and K = 10), 5 Ego-Convolution layers (each with D = 128 filters and K = 16), and 2 Dense layers (with 128 neurons for the first Dense layer) as the task model before the output. We apply Dropout (with drop rate 0.5) and Batch Normalization to the input and Ego-Convolution layers and train the network using the Adam algorithm with learning rate 0.0001."
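The evaluation protocol quoted under "Dataset Splits" is plain 10-fold cross validation with the average test accuracy reported. A minimal sketch of that protocol, using synthetic data and a trivial nearest-centroid classifier as a stand-in for the Ego-CNN (the data, fold seed, and classifier here are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))      # stand-in features (one row per graph)
y = (X[:, 0] > 0).astype(int)      # stand-in binary graph labels

# Shuffle once, then split the indices into 10 folds.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 10)

accs = []
for k in range(10):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    # Trivial nearest-centroid "model" as a placeholder for training an Ego-CNN.
    c0 = X[train_idx][y[train_idx] == 0].mean(axis=0)
    c1 = X[train_idx][y[train_idx] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test_idx] - c1, axis=1)
            < np.linalg.norm(X[test_idx] - c0, axis=1)).astype(int)
    accs.append((pred == y[test_idx]).mean())

# Report the average test accuracy over the 10 folds, as in the paper.
mean_acc = float(np.mean(accs))
print(f"10-fold CV accuracy: {mean_acc:.3f}")
```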
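The "Experiment Setup" row names Ego-Convolution layers with K = 16 neighbors and D = 128 filters. A rough NumPy sketch of what one such layer computes, under the assumption (from the paper's description) that each node's new embedding is produced by applying D filters to the stacked embeddings of the node and its K neighbors; neighbor selection and ordering details are simplified, and the toy sizes below are much smaller than the paper's:

```python
import numpy as np

def ego_convolution(H, neighbors, W, b):
    """One illustrative Ego-Convolution step.
    H: (N, D_in) previous-layer node embeddings
    neighbors: (N, K) neighbor indices per node
    W: (D_out, (K+1)*D_in) filters; b: (D_out,) bias."""
    N, _ = H.shape
    out = np.empty((N, W.shape[0]))
    for i in range(N):
        # Stack the node's own embedding with its K neighbors' embeddings,
        # then apply all D_out filters followed by a ReLU.
        patch = np.concatenate([H[i:i + 1], H[neighbors[i]]], axis=0).ravel()
        out[i] = np.maximum(W @ patch + b, 0.0)
    return out

rng = np.random.default_rng(0)
N, K, D = 6, 3, 4                  # toy sizes (the paper uses K = 16, D = 128)
H = rng.normal(size=(N, D))
neighbors = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False)
                      for i in range(N)])
W = rng.normal(size=(D, (K + 1) * D)) * 0.1
b = np.zeros(D)

H_next = ego_convolution(H, neighbors, W, b)
print(H_next.shape)                # one D-dimensional embedding per node
```

Stacking five such layers (with Dropout and Batch Normalization between them, as the row states) grows each node's receptive field to larger and larger ego networks.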