EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost
Authors: Jaeseung Heo, Seungbeom Lee, Sungsoo Ahn, Dongwoo Kim
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluations across several benchmark datasets demonstrate that our approach outperforms existing augmentation techniques in many tasks. In this section, we first show the effect of EPIC in graph classification tasks over 11 datasets. |
| Researcher Affiliation | Academia | Jaeseung Heo¹, Seungbeom Lee¹, Sungsoo Ahn¹·², Dongwoo Kim¹·² — ¹Graduate School of Artificial Intelligence, POSTECH, South Korea; ²Department of Computer Science & Engineering, POSTECH, South Korea |
| Pseudocode | Yes | Algorithm 1 Applying node operation |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories. |
| Open Datasets | Yes | We used eight classification datasets: NCI1, BZR, COX2, Mutagenicity, IMDB-BINARY, IMDB-MULTI, PROTEINS, ENZYMES from TUDataset [Morris et al., 2020] and three classification datasets: BBBP, BACE, HIV from MoleculeNet [Wu et al., 2018]. |
| Dataset Splits | No | The paper mentions using a 'validation set' and following 'Open Graph Benchmark setting' for Molecule Net, but it does not explicitly state specific train/validation/test split percentages or sample counts for any of the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer', 'GIN', and 'GCN' as backbone models, but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We use Adam optimizer [Kingma and Ba, 2014] with a learning rate decay of 0.1 every 25 epochs. We train the cost function for 100 epochs on TUDataset. While we use the Sinkhorn-Knopp approximation with k = 10 in Equation 5 for training, the Hungarian algorithm is used for inference to obtain an optimal assignment given costs. |
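The Experiment Setup row quotes the paper's two-regime assignment scheme: a differentiable Sinkhorn-Knopp approximation with k = 10 iterations during training, and the exact Hungarian algorithm at inference. A minimal NumPy/SciPy sketch of that split is below; the cost matrix and the temperature `tau` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(cost, k=10, tau=1.0):
    """Soft (doubly-stochastic) assignment via k Sinkhorn-Knopp iterations.

    Differentiable, so it can sit inside a training loop; tau controls
    how sharply the soft assignment concentrates on the cheapest matches.
    """
    P = np.exp(-cost / tau)
    for _ in range(k):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
        P /= P.sum(axis=0, keepdims=True)  # normalize columns to sum to 1
    return P

# Hypothetical node-to-node edit cost matrix between two small graphs.
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.9, 0.3]])

# Training-time: soft assignment from k Sinkhorn iterations.
soft = sinkhorn(cost, k=10)

# Inference-time: exact optimal assignment (Hungarian algorithm).
rows, cols = linear_sum_assignment(cost)
```

`linear_sum_assignment` returns the row/column index pairs of the minimum-cost perfect matching, while the Sinkhorn output is a dense doubly-stochastic matrix whose per-row argmax typically agrees with the hard assignment when the cost gaps are clear, as here.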