Analyzing Data-Centric Properties for Graph Contrastive Learning
Authors: Puja Trivedi, Ekdeep S Lubana, Mark Heimann, Danai Koutra, Jayaraman Thiagarajan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL. |
| Researcher Affiliation | Collaboration | Puja Trivedi, University of Michigan (pujat@umich.edu); Ekdeep Singh Lubana, University of Michigan and CBS, Harvard University (eslubana@umich.edu); Mark Heimann, Lawrence Livermore National Labs (heimann2@llnl.gov); Danai Koutra, University of Michigan (dkoutra@umich.edu); Jayaraman J. Thiagarajan, Lawrence Livermore National Labs (jjayaram@llnl.gov) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that its source code is open or publicly available. |
| Open Datasets | No | We evaluate seven graph SSL methods on seven popular benchmark datasets. (...) The paper mentions using 'standard benchmarks' but does not provide specific links, DOIs, or direct citations for accessing these datasets, nor does it specify public access for the custom synthetic dataset. |
| Dataset Splits | No | After training, all models are evaluated using the linear probe protocol [1] at varying style ratios. (...) The paper does not explicitly provide quantitative details about train/validation/test splits (e.g., percentages or sample counts) for reproducibility; a minimal probe sketch under assumed splits appears after this table. |
| Hardware Specification | No | The paper does not provide specific hardware specifications (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions general software concepts like 'GIN encoder' and 'Adam optimizer' but does not specify any software libraries or dependencies with version numbers. |
| Experiment Setup | Yes | A 5-layer GIN encoder is used, and models are trained for 60 epochs using Adam (with a learning rate of 0.01); see the training sketch after this table. |
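
To make the reported setup concrete, here is a minimal sketch in PyTorch Geometric of the only details the setup row confirms: a 5-layer GIN encoder trained for 60 epochs with Adam at a learning rate of 0.01. The feature widths, the edge-dropping augmentation, the NT-Xent loss and its temperature, and the toy graphs are all illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data, Batch
from torch_geometric.nn import GIN, global_add_pool

# 5-layer GIN encoder and Adam with lr = 0.01, as reported;
# the input/hidden widths (16/64) are illustrative assumptions.
encoder = GIN(in_channels=16, hidden_channels=64, num_layers=5)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.01)

def embed(batch):
    # Pool node embeddings into one representation per graph.
    h = encoder(batch.x, batch.edge_index)
    return global_add_pool(h, batch.batch)

def drop_edges(data, p=0.2):
    # Toy edge-dropping augmentation (an assumption; the paper studies
    # several augmentation strategies rather than prescribing this one).
    keep = torch.rand(data.edge_index.size(1)) > p
    return Data(x=data.x, edge_index=data.edge_index[:, keep])

def nt_xent(z1, z2, tau=0.5):
    # Normalized-temperature cross-entropy over paired views;
    # the temperature value is an assumption.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Toy graphs standing in for the benchmark datasets.
graphs = [Data(x=torch.randn(6, 16),
               edge_index=torch.randint(0, 6, (2, 10)))
          for _ in range(8)]

for epoch in range(60):  # 60 epochs, as reported
    batch1 = Batch.from_data_list([drop_edges(g) for g in graphs])
    batch2 = Batch.from_data_list([drop_edges(g) for g in graphs])
    loss = nt_xent(embed(batch1), embed(batch2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```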
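
The linear probe protocol referenced in the Dataset Splits row can be illustrated as follows: freeze the trained encoder, then fit a linear classifier on its graph embeddings. Because the paper does not report split sizes, the logistic-regression probe, the 80/20 split, and the synthetic embeddings below are assumptions standing in for the unreported details.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 64))    # stand-in for frozen-encoder embeddings
y = rng.integers(0, 2, size=200)  # stand-in graph labels

# An 80/20 split is an assumption; the paper does not report its splits.
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print(f"probe accuracy: {probe.score(Z_te, y_te):.3f}")
```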