Network Structure and Transfer Behaviors Embedding via Deep Prediction Model
Authors: Xin Sun, Zenghui Song, Junyu Dong, Yongbo Yu, Claudia Plant, Christian Böhm
AAAI 2019, pp. 5041–5048 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental studies are conducted on various real-world datasets including social networks and citation networks. The results show that the learned representations can be effectively used as features in a variety of tasks, such as clustering, visualization and classification, and achieve promising performance compared with state-of-the-art models. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Technology, Ocean University of China, Qingdao, China (2) Faculty of Computer Science, University of Vienna, Vienna, Austria (3) Data Science @ University of Vienna, Vienna, Austria (4) Ludwig-Maximilians-Universität München, Munich, Germany |
| Pseudocode | No | The paper includes a diagram (Figure 1) of the proposed framework, but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | BlogCatalog (Tang and Liu 2009) is a social network dataset about blogger authors. ... Cora, CiteSeer and PubMed (Sen et al. 2008) are collections of scientific publications from different databases. ... The 20-Newsgroups (Lang 1995) dataset is a collection of 20,000 newsgroup documents, partitioned into 20 different categories. |
| Dataset Splits | No | In multi-label classification experiments, we randomly sample a portion (from 10% to 90%) of the labeled nodes, and use them as training data. The rest of the nodes are used as test data. ... We randomly sample 10% to 90% of the nodes as the training samples and use the left nodes to test the performance. The paper describes train and test splits, but it does not explicitly define or mention a separate validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions algorithms and models like LSTM, Adam (optimizer), and Backpropagation, but it does not specify any software libraries or dependencies with their version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x). |
| Experiment Setup | Yes | For the compared methods, we set the optimal parameters as suggested in their original papers. For example, for DeepWalk, we set window size as 10, walk length as 40 and walks per vertex as 40. For LINE, the number of negative samples is set as 5 and the total number of samples is 10 billion. For SDNE, we set the number of layers in the model as 3, and the hyper-parameters α and β as 0.1 and 10. For Struc2Vec, we set window size as 10, walk length as 80, walks per vertex as 10. All methods get representations with dimension 128. For our method, we set walk length to be 100. We use different γ (walks per node) for different datasets. For the BlogCatalog and PubMed datasets, we set walks per node as 30. For other datasets, we set walks per node as 100. LSTM learning rate is 0.001. For convenience, we set LSTM timesteps equal to walk length l. |
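The evaluation protocol quoted under "Dataset Splits" (randomly sample 10%–90% of labeled nodes for training, use the rest as test, no validation set) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and seed handling are assumptions.

```python
import random

def split_nodes(nodes, train_ratio, seed=0):
    """Randomly sample a fraction of nodes for training; the rest are test.

    Mirrors the protocol described in the paper: train_ratio is swept
    from 0.1 to 0.9, and no separate validation set is held out.
    """
    rng = random.Random(seed)
    shuffled = list(nodes)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# Sweep the ratios described in the paper (10% to 90%).
nodes = list(range(1000))
for ratio in [r / 10 for r in range(1, 10)]:
    train, test = split_nodes(nodes, ratio)
    assert len(train) + len(test) == len(nodes)
```

Because the splits are random rather than fixed, reported numbers would normally be averaged over repeated draws; the paper does not state how many repetitions were used.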
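The hyper-parameters quoted in the "Experiment Setup" row (walk length l = 100, γ walks per node of 30 or 100 depending on the dataset, 128-dimensional embeddings, LSTM learning rate 0.001) imply a standard uniform random-walk sampling step before the LSTM. A minimal sketch of that step, assuming an adjacency-dict graph representation; all names here are illustrative, not from the paper:

```python
import random

# Hyper-parameters as reported in the paper; dict keys are assumptions.
WALK_LENGTH = 100                      # l, also the LSTM timestep count
WALKS_PER_NODE = {"BlogCatalog": 30, "PubMed": 30, "default": 100}  # γ
EMBEDDING_DIM = 128
LSTM_LR = 0.001

def random_walks(adj, walk_length, walks_per_node, seed=0):
    """Generate uniform random walks over a graph given as {node: [neighbors]}.

    Each node serves as the start of `walks_per_node` walks; a walk ends
    early only if it reaches a node with no outgoing neighbors.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks
```

Setting the LSTM timestep count equal to the walk length, as the paper does, lets each walk feed the sequence model without padding or truncation.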