Graph Contrastive Learning with Reinforcement Augmentation
Authors: Ziyang Liu, Chaokun Wang, Cheng Wu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to evaluate GA2C on unsupervised learning, transfer learning, and semi-supervised learning. The experimental results demonstrate the performance superiority of GA2C over the state-of-the-art GCL models. |
| Researcher Affiliation | Academia | School of Software, BNRist, Tsinghua University, Beijing, China |
| Pseudocode | Yes | Algorithm 1 Training process of GA2C |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is openly available. |
| Open Datasets | Yes | The experimental datasets are from TU datasets [Morris et al., 2020] and Open Graph Benchmark (OGB) datasets [Hu et al., 2020a]. They cover four types of graphs including biochemical molecules (NCI1, PROTEINS, MUTAG, and DD), social graphs (REDDIT-B, REDDIT-M5K, IMDB-B, COLLAB, and GITHUB), physiology (ToxCast and BBBP), and biophysics (MUV and BACE). The statistics of these datasets are shown in Table 2. |
| Dataset Splits | Yes | In transfer learning, we pre-train GA2C on the ZINC-2M dataset [Hu et al., 2020b] for 10 epochs and set the hidden sizes of the encoder network (3-layer GIN) and projection network (2-layer MLP) as 300 and 64, respectively. Then we fine-tune GA2C on a specific OGB dataset for 100 epochs. The split ratio of the training, validation, and testing sets is 8:1:1 (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU models, GPU models, or memory specifications used for running the experiments. It only refers to general components like 'encoder network' and 'projection network' without explicit hardware information. |
| Software Dependencies | No | The paper mentions using a 'graph isomorphism network (GIN)' and a '2-layer MLP', but does not specify the software libraries (e.g., PyTorch, TensorFlow) or their version numbers used for implementation, nor other software dependencies with version numbers. |
| Experiment Setup | Yes | For REDDIT-B and REDDIT-M5K, we use a 5-layer graph isomorphism network (GIN) [Xu et al., 2019] with a hidden size of 128 as our encoder network and a 2-layer MLP with a hidden size of 128 as our projection network (training 150 epochs in total); for the other datasets, we use a 3-layer GIN with a hidden size of 32 as our encoder network and a 2-layer MLP with a hidden size of 128 as our projection network (training 60 epochs in total). The Actor and Critic submodels are implemented with the same network architecture as the encoder network. The learning rates for the Actor, Critic, and encoder network are consistently set to 1e-3 (see the architecture sketch after the table). |
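
The Dataset Splits row above reports an 8:1:1 train/validation/test split. Below is a minimal sketch of how such a split could be produced, assuming PyTorch Geometric and one of the TU datasets named in the paper; the uniform random permutation, the dataset choice, and the variable names are illustrative assumptions, not the authors' code.

```python
import torch
from torch_geometric.datasets import TUDataset

# Any of the paper's graph-classification datasets would do;
# NCI1 (a TU dataset) is chosen here purely for illustration.
dataset = TUDataset(root="data/TUDataset", name="NCI1")

# 8:1:1 train/validation/test split, the ratio reported for fine-tuning.
# Drawing the indices uniformly at random is an assumption; the paper does
# not state how the split is sampled.
perm = torch.randperm(len(dataset))
n_train = int(0.8 * len(dataset))
n_val = int(0.1 * len(dataset))
train_set = dataset[perm[:n_train]]
val_set = dataset[perm[n_train:n_train + n_val]]
test_set = dataset[perm[n_train + n_val:]]
```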
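The Experiment Setup row describes, for most datasets, a 3-layer GIN encoder with a hidden size of 32, a 2-layer MLP projection network with a hidden size of 128, and a learning rate of 1e-3. The sketch below instantiates that configuration, assuming PyTorch and PyTorch Geometric; the pooling operator, activation choices, optimizer (Adam), and class names are assumptions not stated in the table.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, global_add_pool

class GINEncoder(nn.Module):
    """3-layer GIN encoder with hidden size 32 (per the setup quoted above)."""
    def __init__(self, in_dim, hidden=32, num_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(num_layers):
            mlp = nn.Sequential(
                nn.Linear(in_dim if i == 0 else hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, hidden),
            )
            self.convs.append(GINConv(mlp))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        # Sum-pool node embeddings into a graph-level embedding
        # (the pooling choice is an assumption).
        return global_add_pool(x, batch)

class ProjectionHead(nn.Module):
    """2-layer MLP projection network with hidden size 128 (per the setup)."""
    def __init__(self, in_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, z):
        return self.net(z)

# `dataset` is the TUDataset from the split sketch above.
encoder = GINEncoder(in_dim=dataset.num_features)
head = ProjectionHead()

# Learning rate 1e-3 as reported; the use of Adam is an assumption.
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)
```

Per the same row, the Actor and Critic submodels reuse the encoder's architecture, each trained with its own 1e-3 learning rate; they could be built by instantiating `GINEncoder` twice more under the same assumptions.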