Graph Information Bottleneck for Subgraph Recognition
Authors: Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, Ran He
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the information-theoretic IB-subgraph enjoys superior graph properties. |
| Researcher Affiliation | Collaboration | NLPR&CRIPAC, Institute of Automation, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Tencent AI Lab, China; Center for Excellence in Brain Science and Intelligence Technology, CAS, China |
| Pseudocode | Yes | Section A.3 (ALGORITHM) presents Algorithm 1, "Optimizing the graph information bottleneck" (a hedged sketch of this bi-level loop follows the table). |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate different methods on the MUTAG (Rupp et al., 2012), PROTEINS (Borgwardt et al., 2005), IMDB-BINARY, and DD (Rossi & Ahmed, 2015) datasets. and We construct the datasets for graph interpretation on four molecule properties based on ZINC dataset. Footnote 2 also states We follow the protocol in https://github.com/rusty1s/pytorch_geometric/tree/master/benchmark/kernel (a loading sketch follows the table). |
| Dataset Splits | Yes | We use 85% of these molecules for training, 5% for validating, and 10% for testing. and We use 70% of these graphs for training, 5% for validating, and 25% for testing (a split sketch follows the table). |
| Hardware Specification | No | The paper details model configurations like '2-layer GNN with 16 hidden-size' but does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions general software concepts like 'GNNs' but does not specify any software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x', 'TensorFlow 2.x') that would be needed for replication. |
| Experiment Setup | Yes | For fair comparisons, all the backbones for different methods consist of the same 2-layer GNN with a hidden size of 16 (a backbone sketch follows the table). The hyperparameter α of Lcon is set to 5 on four datasets, and Algorithm 1 defines the inner step T, outer step N, and learning rates η1, η2. |
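
The footnote's benchmark protocol corresponds to PyTorch Geometric's `TUDataset` loader. The sketch below is our illustration of loading the four classification benchmarks, not the authors' code; the root path `data/TU` is an arbitrary choice.

```python
from torch_geometric.datasets import TUDataset

# Load the four classification benchmarks named in the paper via the
# TUDataset interface used by the pytorch_geometric benchmark/kernel protocol.
for name in ['MUTAG', 'PROTEINS', 'IMDB-BINARY', 'DD']:
    dataset = TUDataset(root='data/TU', name=name)
    print(name, len(dataset), dataset.num_classes)
```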
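The reported 85/5/10 split can be reproduced with a plain random partition. This is a minimal sketch assuming any indexable PyTorch/PyG dataset; the seed is our assumption, since the paper does not report one.

```python
import torch
from torch.utils.data import random_split

def split_85_5_10(dataset, seed=0):
    # 85% train, 5% validation, remainder (~10%) test, as reported for the
    # ZINC-based graph-interpretation datasets. Seed is a hypothetical choice.
    n = len(dataset)
    n_train = int(0.85 * n)
    n_val = int(0.05 * n)
    n_test = n - n_train - n_val
    gen = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_val, n_test], generator=gen)
```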
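The stated backbone configuration (a 2-layer GNN with hidden size 16) could look like the following. The choice of `GCNConv` and mean pooling is an assumption; the paper fixes only the depth and width, and this sketch does not reproduce the full GIB architecture.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class Backbone(torch.nn.Module):
    """2-layer GNN with hidden size 16, matching the reported configuration.

    GCNConv and mean pooling are illustrative assumptions."""
    def __init__(self, in_dim, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.lin(global_mean_pool(x, batch))
```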
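Algorithm 1's bi-level scheme (an inner step T fitting the mutual-information estimator, an outer step N updating the rest, learning rates η1 and η2, and the connectivity weight α = 5) might be organized as below. `subgraph_generator`, `mi_estimator`, and `classifier` are hypothetical modules, and the loss composition is our reading of the objective, not released code.

```python
import torch

def train_gib(subgraph_generator, mi_estimator, classifier, loader,
              T=20, N=100, eta1=1e-3, eta2=1e-3, alpha=5.0):
    # Inner optimizer (rate eta1) fits the MI estimator; outer optimizer
    # (rate eta2) updates the subgraph generator and classifier.
    inner_opt = torch.optim.Adam(mi_estimator.parameters(), lr=eta1)
    outer_opt = torch.optim.Adam(
        list(subgraph_generator.parameters()) + list(classifier.parameters()),
        lr=eta2)
    for _ in range(N):  # outer steps
        for batch in loader:
            # Inner steps: tighten the MI bound on the current subgraphs.
            for _ in range(T):
                g_sub = subgraph_generator(batch).detach()
                inner_opt.zero_grad()
                mi_loss = -mi_estimator(batch, g_sub)  # maximize the bound
                mi_loss.backward()
                inner_opt.step()
            # Outer step: classification loss, MI penalty, and the
            # connectivity term Lcon weighted by alpha = 5 (all hypothetical
            # module interfaces).
            g_sub = subgraph_generator(batch)
            loss = (classifier.loss(g_sub, batch.y)
                    + mi_estimator(batch, g_sub)
                    + alpha * subgraph_generator.connectivity_loss())
            outer_opt.zero_grad()
            loss.backward()
            outer_opt.step()
```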