GSSNN: Graph Smoothing Splines Neural Networks
Authors: Shichao Zhu, Lewei Zhou, Shirui Pan, Chuan Zhou, Guiying Yan, Bin Wang
AAAI 2020, pp. 7007-7014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In extensive experiments on biological and social datasets, we demonstrate that our model achieves state-of-the-arts and GSSNN is superior in learning more robust graph representations. |
| Researcher Affiliation | Collaboration | Shichao Zhu (1,3), Lewei Zhou (2,4), Shirui Pan (5), Chuan Zhou (2,3), Guiying Yan (2,4), Bin Wang (6). Affiliations: (1) Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; (2) Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; (3) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; (4) University of Chinese Academy of Sciences, Beijing, China; (5) Faculty of Information Technology, Monash University, Melbourne, Australia; (6) Xiaomi AI Lab, Beijing, China |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper mentions using 'PyTorch Geometric' but does not provide a specific link or explicit statement for the release of their own source code. |
| Open Datasets | Yes | We validate the performance of generated graph representations on classification task over 4 biological datasets and 3 social datasets (Kersting et al. 2016). The statistics of the datasets are summarized in Table 2. |
| Dataset Splits | Yes | In our experiments, we evaluated all the GNN-based methods over the same random seed using 10-fold cross validation. 10 percent of the data was used for testing and the rest were used for training. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'PyTorch Geometric' but does not specify their version numbers. |
| Experiment Setup | Yes | The optimal hyper-parameters are obtained by grid search. The ranges of grid search are summarized in Table 3. Table 3 (the grid search space for the hyperparameters): hidden dimension {16, 32, 64}; weight decay {1e-4, 5e-4}; S3 layers {1, 2, 3}; convolutional layers {2, 3, 4, 5}; ξ number {3, 4, 5}. Sketches of the dataset loading, split protocol, and grid search follow this table. |
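
The benchmark graphs referenced in the Open Datasets row come from the TU collection (Kersting et al. 2016), which PyTorch Geometric (mentioned in the paper) exposes through its `TUDataset` loader. The sketch below shows how such datasets are typically loaded; the specific dataset names are illustrative assumptions, since the excerpt above does not list the exact four biological and three social datasets used.

```python
# Minimal sketch: loading TU-collection graph-classification benchmarks
# (Kersting et al. 2016) with PyTorch Geometric. The dataset names below are
# illustrative assumptions, not the paper's confirmed dataset list.
from torch_geometric.datasets import TUDataset

ASSUMED_NAMES = ["MUTAG", "PROTEINS", "IMDB-BINARY"]

for name in ASSUMED_NAMES:
    dataset = TUDataset(root="data/TU", name=name)
    print(f"{name}: {len(dataset)} graphs, {dataset.num_classes} classes")
```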
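
The split protocol quoted in the Dataset Splits row (10-fold cross-validation over the same random seed, with 10 percent of the graphs held out for testing in each fold) can be sketched as follows. The use of scikit-learn's `StratifiedKFold` and the seed value are assumptions for illustration; the paper does not state its exact splitting code.

```python
# Sketch of the evaluation protocol: 10-fold cross-validation with a fixed
# random seed, so that 10% of the graphs serve as the test set in each fold.
# StratifiedKFold and seed=0 are assumptions, not details from the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def ten_fold_splits(labels, seed=0):
    """Yield (train_idx, test_idx) index arrays for each of the 10 folds."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    placeholder = np.zeros(len(labels))  # splitting depends only on the labels
    yield from skf.split(placeholder, labels)

# Usage with a PyTorch Geometric dataset:
# labels = [int(data.y) for data in dataset]
# for fold, (train_idx, test_idx) in enumerate(ten_fold_splits(labels)):
#     ...  # train on train_idx, report test accuracy on test_idx
```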
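
Similarly, the grid search over the Table 3 ranges in the Experiment Setup row can be enumerated as below. `train_and_evaluate` is a hypothetical placeholder for training GSSNN under one configuration and returning its mean cross-validation accuracy; it is not a function from the paper or from PyTorch Geometric.

```python
# Sketch of grid search over the hyperparameter ranges reported in Table 3.
# `train_and_evaluate` is a hypothetical stand-in for a full GSSNN training
# and evaluation run under one configuration (e.g. mean 10-fold CV accuracy).
from itertools import product

GRID = {
    "hidden_dim":   [16, 32, 64],
    "weight_decay": [1e-4, 5e-4],
    "s3_layers":    [1, 2, 3],
    "conv_layers":  [2, 3, 4, 5],
    "xi_number":    [3, 4, 5],   # the "ξ number" range from Table 3
}

def grid_search(train_and_evaluate):
    best_config, best_acc = None, float("-inf")
    keys = list(GRID)
    for values in product(*(GRID[key] for key in keys)):
        config = dict(zip(keys, values))
        acc = train_and_evaluate(**config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc
```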