Substructure Assembling Network for Graph Classification
Authors: Xiaohan Zhao, Bo Zong, Ziyu Guan, Kai Zhang, Wei Zhao
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present an experimental study on classification accuracy and interpretability of SAN. In the evaluation, we focus on public benchmark datasets for graph classification methods. |
| Researcher Affiliation | Collaboration | Xiaohan Zhao, Snap Inc., homeisland03@gmail.com; Bo Zong, NEC Laboratories America, bzong@nec-labs.com; Ziyu Guan, Northwest University of China, ziyuguan@nwu.edu.cn; Kai Zhang, Temple University, zhang.kai@temple.edu; Wei Zhao, Xidian University, ywzhao@mail.xidian.edu.cn |
| Pseudocode | No | The paper describes the computational steps and provides diagrams (e.g., Figure 3), but it does not include formal pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | MUTAG (Debnath et al. 1991) includes a set of graphs, each of which represents nitro compounds, and their labels indicate if they have mutagenic effect on bacteria; NCI1 and NCI109 (Wale, Watson, and Karypis 2008) are graph representations of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines, respectively; ENZYMES (Borgwardt et al. 2005) contains graph representations of tertiary structure of 6 classes of enzymes; D&D (Dobson and Doig 2003) includes structures of enzymes and non-enzymes proteins, where nodes are amino acids, and edges indicate spatial closeness between nodes. |
| Dataset Splits | Yes | Following (Shervashidze et al. 2011), we perform 10-fold cross validation and present the average accuracy. (A sketch of this protocol appears below the table.) |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | We implement SAN and its variants in Tensorflow (Abadi et al. 2016), and use Adam optimizer with its default setting (Kingma and Ba 2015). No specific version numbers for Tensorflow or other libraries are provided. |
| Experiment Setup | Yes | The dimensionalities of hidden space and output feature space are two meta-parameters for each SAU... For each dataset, the specific SAN structure is varied by its input complexity, such as the number of nodes (edges) and the number of node (edge) attributes per graph. By cross-validation, we take the following SAN configurations in the evaluation. MUTAG. An SAN of sau(64, 32, 64) sau(64, 32, 64) p(64) d(0.5) fc(2) is taken for this dataset... Each SAU layer is pre-trained for 2,000 epochs, followed by end-to-end fine-tuning on the whole SAN for 10,000 epochs. (A sketch of this configuration and training schedule appears below the table.) |
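
The "Dataset Splits" row above reports only that evaluation follows Shervashidze et al. (2011), using 10-fold cross validation with averaged accuracy. Below is a minimal Python sketch of that protocol. It assumes graphs and labels are held in plain lists; `train_san` and `evaluate_san` are hypothetical placeholders for the paper's unreleased training and evaluation code, and the stratified, shuffled fold construction is our assumption, since the paper does not specify how folds are built.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold


def cross_validate(graphs, labels, train_san, evaluate_san, n_splits=10, seed=0):
    """10-fold cross validation with averaged accuracy.

    `train_san` and `evaluate_san` are hypothetical callables standing in for
    the unreleased SAN training/evaluation code; stratification is an assumption.
    """
    labels = np.asarray(labels)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in folds.split(np.zeros(len(labels)), labels):
        model = train_san([graphs[i] for i in train_idx], labels[train_idx])
        accuracies.append(
            evaluate_san(model, [graphs[i] for i in test_idx], labels[test_idx])
        )
    # The paper reports the average accuracy over the 10 folds.
    return float(np.mean(accuracies)), float(np.std(accuracies))
```

A call such as `mean_acc, std_acc = cross_validate(graphs, labels, train_san, evaluate_san)` mirrors the reporting convention; only the mean accuracy is quoted in the paper.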
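
The "Experiment Setup" and "Software Dependencies" rows pin down the MUTAG configuration (sau(64, 32, 64) sau(64, 32, 64) p(64) d(0.5) fc(2)), the two-stage training schedule (2,000 pre-training epochs per SAU layer, then 10,000 end-to-end fine-tuning epochs), and the optimizer (Adam with its default setting in TensorFlow). The sketch below records that setup as a plain configuration object; the SAU layers themselves are not reconstructed here because no source code is released, so `SANTrainingSetup` and its field names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List

import tensorflow as tf


@dataclass
class SANTrainingSetup:
    """Illustrative record of the MUTAG setup quoted above; names are ours, not the authors'."""

    # Layer shorthand copied verbatim from the paper: two SAU layers, a pooling
    # layer p(64), dropout with rate 0.5, and a 2-way fully connected output.
    layers: List[str] = field(default_factory=lambda: [
        "sau(64, 32, 64)", "sau(64, 32, 64)", "p(64)", "d(0.5)", "fc(2)",
    ])
    pretrain_epochs_per_sau: int = 2_000   # layer-wise pre-training per SAU layer
    finetune_epochs: int = 10_000          # end-to-end fine-tuning of the whole SAN

    def make_optimizer(self) -> tf.keras.optimizers.Adam:
        # "Adam optimizer with its default setting" (Kingma and Ba 2015);
        # in TF 2.x the default learning rate is 0.001.
        return tf.keras.optimizers.Adam()
```

Because no library versions are reported, the TensorFlow 2.x Keras API used above is itself an assumption; the original implementation likely relied on the TF 1.x `tf.train.AdamOptimizer` interface instead.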