Transfer Learning for Deep Learning on Graph-Structured Data
Authors: Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We thoroughly tested our approach with large-scale real-world text data and confirmed the effectiveness of the proposed transfer learning framework for deep learning on graphs. We performed an extensive set of experiments, using both synthetic and real-world data, to show the effectiveness of our approach. |
| Researcher Affiliation | Academia | Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon Electrical and Computer Engineering Seoul National University Seoul 08826, Republic of Korea sryoon@snu.ac.kr |
| Pseudocode | No | The paper describes the proposed method in steps (A-E) and provides a diagram (Figure 2), but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | No explicit statement about making the source code available or a link to a repository is provided. |
| Open Datasets | Yes | We utilized large-scale public NLP data (Zhang, Zhao, and LeCun 2015), which contained multiple corpora of news articles, Internet reviews, or ontology entries (Table 1). |
| Dataset Splits | Yes | We carried out 10-fold cross validation. Table 1 lists the number of training and test samples for each dataset (e.g., Train 120,000 for AG, Test 7,600 for AG). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are mentioned. |
| Software Dependencies | No | We implemented the deep networks with Torch and CUDA using AdaGrad as the optimizer and ReLU as the activation. However, specific version numbers for these software components are not provided. |
| Experiment Setup | Yes | For training, we set the kernel degree K = 60, the learning rate to 0.01, and used the cross-entropy cost function with the AdaGrad optimizer. GC8 means the use of graph convolutional layers with 8 feature maps, and FC500/FC1K means the use of fully connected layers with 500/1000 hidden units. |
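The setup row names two ingredients that can be illustrated compactly: a graph-convolution step (neighborhood feature aggregation) and an AdaGrad update with learning rate 0.01. The sketch below is a minimal pure-Python illustration of both ideas; the function names, the toy path graph, and the sum-aggregation rule are our assumptions, not the paper's exact formulation (the paper's kernels use K = 60 and learned weights).

```python
# Illustrative sketch only: toy graph convolution (sum over neighbors)
# and an AdaGrad-style update with lr = 0.01, as named in the setup row.
# The graph, aggregation rule, and function names are hypothetical.

def graph_conv(adj, features):
    """Aggregate each node's own features with its neighbors' (sum)."""
    n = len(features)
    out = []
    for i in range(n):
        agg = list(features[i])          # start from the node's own features
        for j in range(n):
            if adj[i][j]:                # add each neighbor's features
                agg = [a + f for a, f in zip(agg, features[j])]
        out.append(agg)
    return out

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    """One AdaGrad update: per-parameter step scaled by accumulated grad^2."""
    new_w, new_cache = [], []
    for wi, gi, ci in zip(w, grad, cache):
        ci2 = ci + gi * gi               # accumulate squared gradient
        new_cache.append(ci2)
        new_w.append(wi - lr * gi / (ci2 ** 0.5 + eps))
    return new_w, new_cache

# Toy 3-node path graph: 0 - 1 - 2
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0], [2.0], [3.0]]
print(graph_conv(adj, feats))  # → [[3.0], [6.0], [5.0]]
```

In the toy run, the middle node (degree 2) aggregates both neighbors, which is why its output is largest; in the actual model this aggregation is followed by learned weights and a ReLU before the FC500/FC1K layers.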