Network-Specific Variational Auto-Encoder for Embedding in Attribute Networks
Authors: Di Jin, Bingyi Li, Pengfei Jiao, Dongxiao He, Weixiong Zhang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on large real-world networks demonstrate a superior performance of the new approach over the state-of-the-art methods. |
| Researcher Affiliation | Academia | Di Jin¹, Bingyi Li¹, Pengfei Jiao²·¹, Dongxiao He¹ and Weixiong Zhang³ — ¹College of Intelligence and Computing, Tianjin University, Tianjin, China; ²Center of Biosafety Research and Strategy, Tianjin University, Tianjin, China; ³Department of Computer Science and Engineering, Washington University in St. Louis, USA |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology. |
| Open Datasets | Yes | We used seven public datasets with varying sizes (Table 1). Among these datasets, Cornell, Texas, Washington and Wisconsin (which are sub-datasets of WebKB) are webpage datasets from four universities. Citeseer is a citation network. UAI2010 contains article information from Wikipedia pages. Pubmed is a scientific publications dataset. |
| Dataset Splits | Yes | We used 10-fold cross-validation to train the classifier. |
| Hardware Specification | No | The paper mentions using 'Tensorflow deep learning tool' but does not provide any specific hardware details such as GPU/CPU models or memory used for experiments. |
| Software Dependencies | No | The paper mentions 'Tensorflow deep learning tool' but does not specify any version numbers for TensorFlow or other software dependencies. |
| Experiment Setup | Yes | For fair comparison, we set the embedding dimension l = 64 for all methods and used the default values for the other parameters of these methods. We also used the Tensorflow deep learning tool with a learning rate of 0.001. |
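The setup and split rows above list the only evaluation details the paper gives: embedding dimension l = 64 for all methods, a learning rate of 0.001, and 10-fold cross-validation for the downstream classifier. A minimal sketch of that 10-fold evaluation protocol follows; the synthetic embeddings, labels, and the nearest-centroid classifier are hypothetical stand-ins for illustration only, not the authors' model, classifier, or data.

```python
import random

random.seed(0)

# Assumed illustration of the paper's evaluation protocol:
# 10-fold cross-validation over node embeddings of dimension l = 64.
l = 64          # embedding dimension used for all methods in the paper
n_nodes = 100   # synthetic network size (hypothetical)
n_folds = 10

# Synthetic embeddings: class-0 nodes cluster near 0, class-1 near 1.
labels = [i % 2 for i in range(n_nodes)]
embeddings = [[lab + random.uniform(-0.3, 0.3) for _ in range(l)]
              for lab in labels]

def nearest_centroid_accuracy(train_idx, test_idx):
    """Fit a trivial nearest-centroid classifier on the training fold
    and return its accuracy on the held-out fold."""
    centroids = {}
    for c in (0, 1):
        members = [embeddings[i] for i in train_idx if labels[i] == c]
        centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    correct = 0
    for i in test_idx:
        dists = {c: sum((a - b) ** 2 for a, b in zip(embeddings[i], cen))
                 for c, cen in centroids.items()}
        if min(dists, key=dists.get) == labels[i]:
            correct += 1
    return correct / len(test_idx)

# 10-fold split: shuffle node indices, then hold out each fold in turn.
indices = list(range(n_nodes))
random.shuffle(indices)
fold_size = n_nodes // n_folds
accuracies = []
for k in range(n_folds):
    test_idx = indices[k * fold_size:(k + 1) * fold_size]
    train_set = set(test_idx)
    train_idx = [i for i in indices if i not in train_set]
    accuracies.append(nearest_centroid_accuracy(train_idx, test_idx))

mean_acc = sum(accuracies) / n_folds
print(f"mean 10-fold accuracy: {mean_acc:.2f}")
```

In the paper's actual pipeline the embeddings would come from the trained NetVAE encoder rather than being sampled at random, and the classifier choice is not specified beyond the 10-fold protocol.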