Beyond GNNs: An Efficient Architecture for Graph Problems
Authors: Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi (pp. 6019-6027)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically show the effectiveness of our proposed architecture for a variety of graph problems and real world classification problems. |
| Researcher Affiliation | Industry | Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi (Google Research); pranjalawasthi@google.com, abhidas@google.com, sgollapu@google.com |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | No explicit statement about open-source code availability or repository links was found. |
| Open Datasets | Yes | We generated synthetic random graphs between 500 and 1000 nodes (n)... We experiment with the following real world datasets (Yanardag and Vishwanathan 2015) that have been used in recent works for evaluating various GNN architectures (Xu et al. 2019a): 1) IMDB-BINARY and 2) IMDB-MULTI datasets: These are movie collaboration... 3) COLLAB... 4) PROTEINS... 5) PTC, 6) NCI1 and 7) MUTAG: These are various datasets of chemical compounds... |
| Dataset Splits | No | The paper mentions generating 30,000 training examples for synthetic data and refers to training on real-world datasets but does not provide specific details on how the data was split into training, validation, and test sets (e.g., percentages or counts). |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running experiments were provided. |
| Software Dependencies | No | The paper mentions using the GIN model implementation but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For both baseline and GNN+, we used node degree as the input node features for MINCUT and MST. For Shortest Paths, Effective Resistance and Affinity, we set input node features to be Booleans indicating whether the node is a source/destination node or not... during hyperparameter tuning we allow the GNNmp architecture to explore depth up to 9, whereas the GNN+ architecture is tuned with depth restricted to at most 3. |
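The input-feature construction quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the adjacency-list representation and the helper names `degree_features` and `endpoint_features` are assumptions made here for clarity.

```python
# Sketch of the two input-feature schemes described in the paper's setup,
# assuming a graph given as an adjacency list {node: [neighbors]}.

def degree_features(adj):
    """Node degree as a 1-d feature, as used for MINCUT and MST."""
    return {v: [len(nbrs)] for v, nbrs in adj.items()}

def endpoint_features(adj, source, dest):
    """Boolean (is-source, is-destination) indicators, as used for
    Shortest Paths, Effective Resistance and Affinity."""
    return {v: [float(v == source), float(v == dest)] for v in adj}

# Toy 4-node path graph: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(degree_features(adj))          # -> {0: [1], 1: [2], 2: [2], 3: [1]}
print(endpoint_features(adj, 0, 3))  # only nodes 0 and 3 are flagged
```

Either feature dictionary would then be stacked into the node-feature matrix fed to the GNN baseline or the GNN+ model.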