StructPool: Structured Graph Pooling via Conditional Random Fields

Authors: Hao Yuan, Shuiwang Ji

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on multiple datasets demonstrate the effectiveness of our proposed STRUCTPOOL." (Section 4, Experimental Studies)
Researcher Affiliation | Academia | Hao Yuan and Shuiwang Ji, Department of Computer Science & Engineering, Texas A&M University, College Station, TX 77843, USA (hao.yuan@tamu.edu, sji@tamu.edu)
Pseudocode | Yes | Algorithm 1: STRUCTPOOL
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | "We evaluate our proposed STRUCTPOOL on eight benchmark datasets, including five bioinformatics protein datasets: ENZYMES, PTC, MUTAG, PROTEINS (Borgwardt et al., 2005), D&D (Dobson & Doig, 2003), and three social network datasets: COLLAB (Yanardag & Vishwanathan, 2015b), IMDB-B, IMDB-M (Yanardag & Vishwanathan, 2015a)."
Dataset Splits | Yes | "For our STRUCTPOOL, we perform 10-fold cross validations and report the average accuracy for each dataset. The 10-fold splitting is the same as DGCNN."
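The 10-fold protocol quoted above can be illustrated with a minimal sketch. This is an assumption for illustration only: it partitions graph indices round-robin into ten folds, whereas the paper reuses DGCNN's exact fold assignments, which are not reproduced here.

```python
def ten_fold_splits(n_graphs: int, n_folds: int = 10):
    """Yield (train, test) index lists for k-fold cross-validation.

    Sketch only: a simple round-robin partition of graph indices,
    not the actual DGCNN fold files used in the paper.
    """
    indices = list(range(n_graphs))
    # Fold i takes every n_folds-th index starting at i.
    folds = [indices[i::n_folds] for i in range(n_folds)]
    for i, test in enumerate(folds):
        # Training set is the union of all other folds.
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# Example: 10 disjoint test folds that together cover every graph once.
for train, test in ten_fold_splits(23):
    assert set(train).isdisjoint(test)
```

Accuracy would then be averaged over the ten test folds, as the quoted setup describes.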
Hardware Specification | Yes | "We implement our models using PyTorch (Paszke et al., 2017) and conduct experiments on one GeForce GTX 1080 Ti GPU."
Software Dependencies | No | The models are implemented in PyTorch (Paszke et al., 2017) and trained with the Adam optimizer (Kingma & Ba, 2014), but specific version numbers for software libraries are not provided.
Experiment Setup | Yes | "The model is trained using stochastic gradient descent (SGD) with the Adam optimizer (Kingma & Ba, 2014)." The non-linearities are tanh for the GCNs and relu for the 1D convolution layers; hard clustering assignments are employed in all experiments; Table 3 reports how the mean-field iteration number m affects prediction accuracy; and the number of clusters k is selected following DGCNN (Zhang et al., 2018), using a pooling rate r ∈ (0, 1) to control k.
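The pooling-rate convention quoted above (a rate r ∈ (0, 1) controlling the cluster count k) can be sketched as follows. This is a minimal illustration, not the authors' code: the ceiling rounding rule and the one-cluster floor are assumptions, since the exact mapping from r to k is not spelled out in the extracted text.

```python
import math

def num_clusters(num_nodes: int, pooling_rate: float) -> int:
    """Map a graph with num_nodes nodes to k clusters via a pooling rate r.

    Assumed rule for illustration: k = ceil(r * n), floored at one cluster.
    """
    if not 0.0 < pooling_rate < 1.0:
        raise ValueError("pooling rate r must lie in (0, 1)")
    return max(1, math.ceil(pooling_rate * num_nodes))

# Example: with r = 0.5, a 25-node graph is pooled to 13 clusters.
print(num_clusters(25, 0.5))  # → 13
```

A per-graph k (rather than a fixed k for the whole dataset) matches the quoted setup, since the benchmark graphs vary widely in size.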