Integrative Network Embedding via Deep Joint Reconstruction
Authors: Di Jin, Meng Ge, Liang Yang, Dongxiao He, Longbiao Wang, Weixiong Zhang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on seven real-world networks demonstrate the superior performance of our method over nine state-of-the-art embedding methods. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Tianjin University, Tianjin, China; (2) School of Computer Science and Engineering, Hebei University of Technology, Tianjin, China; (3) College of Math and Computer Science, Jianghan University, Wuhan, China; (4) Department of Computer Science and Engineering, Washington University, St. Louis, USA |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | No | No statement about releasing source code or a direct link to a code repository was found. |
| Open Datasets | Yes | Seven publicly available datasets with varying sizes and characteristics are used. ... We used the shared LASTFM data in HetRec 2011, which consists of 1,892 users and 17,632 artists from the online music system Last.fm. http://ir.ii.uam.es/hetrec2011 |
| Dataset Splits | Yes | For each network, we used 10-fold cross-validation, and accuracy (AC) [Liu et al., 2012] as the gold metric to evaluate the performance of all methods (see the cross-validation sketch after this table). ... The training data contained 90% of the user-artist relations after December 2010, and the remaining 10% were used as test data. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud instance types) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | Theano was used to construct the stacked autoencoders with a learning rate of 0.1, but no version number for Theano was provided. |
| Experiment Setup | Yes | The final embedding dimension is often set to a power of 2, although any dimension can be used in principle. To ensure fairness, it was uniformly set to 64 for all methods. The parameters of the compared methods were set to their default values. For our method, we set k = 9 to construct k-nearest-neighbor graphs, since results were stable for k between 6 and 9. The Theano deep learning tools were used to construct stacked autoencoders with a learning rate of 0.1 and the sigmoid activation function. Illustrative sketches of the k-nearest-neighbor graph construction and a stacked autoencoder appear below. |
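
The paper reports k = 9 for k-nearest-neighbor graph construction but does not state which implementation was used. A minimal sketch with scikit-learn (an assumption on our part; the attribute matrix and the symmetrization step are also illustrative) might look like:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Hypothetical node-attribute matrix: one row per node (values are illustrative).
rng = np.random.default_rng(0)
X = rng.random((100, 32))

# k = 9 as reported in the paper; 'connectivity' yields a 0/1 adjacency matrix.
A = kneighbors_graph(X, n_neighbors=9, mode="connectivity", include_self=False)

# Symmetrize so the graph is undirected (an assumption; the paper does not say).
A = A.maximum(A.T)
print(A.shape, A.nnz)
```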
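The setup names Theano, a learning rate of 0.1, sigmoid activations, and a final 64-dimensional embedding, but no further architecture details. As an illustration only, a greedy layer-wise stacked autoencoder with those hyperparameters can be sketched in plain NumPy; the tied weights, squared-error loss, and intermediate layer size are all assumptions, not the authors' Theano implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SigmoidAutoencoder:
    """One tied-weight sigmoid autoencoder layer with squared-error loss."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)  # encoder bias
        self.c = np.zeros(n_in)      # decoder bias

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, lr=0.1, epochs=200):
        n = len(X)
        for _ in range(epochs):
            H = self.encode(X)                   # hidden code
            R = sigmoid(H @ self.W.T + self.c)   # reconstruction (tied weights)
            D2 = (R - X) * R * (1.0 - R)         # decoder delta
            D1 = (D2 @ self.W) * H * (1.0 - H)   # encoder delta
            self.W -= lr * (X.T @ D1 + D2.T @ H) / n
            self.b -= lr * D1.sum(axis=0) / n
            self.c -= lr * D2.sum(axis=0) / n
        return self

# Greedy layer-wise stacking down to the 64-dimensional embedding reported
# in the paper (the input and intermediate sizes are assumptions).
X = np.random.default_rng(1).random((100, 256))
codes = X
for n_hidden in (128, 64):
    layer = SigmoidAutoencoder(codes.shape[1], n_hidden).fit(codes, lr=0.1)
    codes = layer.encode(codes)
print(codes.shape)  # (100, 64): the final embedding
```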
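The evaluation protocol quoted above is 10-fold cross-validation with accuracy (AC) as the metric. A minimal evaluation harness over learned embeddings could look like the following; the logistic-regression classifier and stratified splitting are assumptions, since the quoted text does not name the downstream classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Hypothetical 64-d node embeddings and labels (stand-ins for real data).
rng = np.random.default_rng(0)
Z = rng.random((500, 64))
y = rng.integers(0, 3, size=500)

# 10-fold cross-validation with accuracy, matching the quoted protocol.
scores = []
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in folds.split(Z, y):
    clf = LogisticRegression(max_iter=1000).fit(Z[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(Z[test_idx])))
print(f"AC: {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```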