Deep Attributed Network Embedding
Authors: Hongchang Gao, Heng Huang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets have verified the effectiveness of our proposed approach. ... 4 Experiments |
| Researcher Affiliation | Academia | Hongchang Gao, Heng Huang Department of Electrical and Computer Engineering University of Pittsburgh, USA |
| Pseudocode | No | The paper describes the proposed method in text and mathematical equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | In our experiments, we employ four benchmark datasets: Cora, Citeseer, PubMed, and Wiki. The first three datasets are paper citation networks. ... (datasets available at https://linqs.soe.ucsc.edu/data) |
| Dataset Splits | Yes | we randomly select {10%, 30%, 50%} nodes as the training set and the rest as the testing set respectively. With these randomly chosen training sets, we use five-fold cross-validation to train the classifier and then evaluate the classifier on the testing sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'LeakyReLU [Maas et al., ] as the activation function' but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | For DeepWalk and Node2Vec, we set the window size as 10, the walk length as 80, the number of walks as 10. For GraRep, the maximum transition step is set to 5. For LINE, we concatenate the first-order and second-order result together as the final embedding result. At last, the dimension of the node representation is set as 200. ... the architecture of our approach for four datasets is summarized in Table 2. ... We use LeakyReLU [Maas et al., ] as the activation function. |
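The evaluation protocol quoted under "Dataset Splits" (randomly select {10%, 30%, 50%} of nodes for training, five-fold cross-validation on the training set, evaluation on the remaining nodes) can be sketched as below. This is a minimal sketch: the function names, the random seed, and the round-robin fold assignment are assumptions, not details stated in the paper. The node count 2708 used in the example corresponds to the Cora dataset.

```python
import random

def split_nodes(n_nodes, train_frac, seed=0):
    """Randomly select a fraction of nodes as the training set;
    the remaining nodes form the testing set."""
    rng = random.Random(seed)
    nodes = list(range(n_nodes))
    rng.shuffle(nodes)
    cut = int(round(train_frac * n_nodes))
    return nodes[:cut], nodes[cut:]

def five_fold_indices(train_nodes):
    """Partition the training set into 5 folds for cross-validating
    the classifier (round-robin assignment is an assumption)."""
    return [train_nodes[i::5] for i in range(5)]

# Example: 10% training split on a Cora-sized graph (2708 nodes).
train, test = split_nodes(2708, 0.10)
folds = five_fold_indices(train)
```

This mirrors the paper's protocol shape only; the classifier trained inside the folds is not specified here.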
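The baseline hyperparameters quoted under "Experiment Setup" can be collected into a single configuration sketch. The dictionary layout and the LeakyReLU negative slope of 0.01 are assumptions for illustration; the paper cites Maas et al. for the activation but does not report a slope value.

```python
# Baseline settings as reported in the paper's experiment setup.
BASELINE_PARAMS = {
    "DeepWalk": {"window_size": 10, "walk_length": 80, "num_walks": 10},
    "Node2Vec": {"window_size": 10, "walk_length": 80, "num_walks": 10},
    "GraRep":   {"max_transition_step": 5},
    "LINE":     {"embedding": "concat(first_order, second_order)"},
}
EMBEDDING_DIM = 200  # final node-representation dimension for all methods

def leaky_relu(x, alpha=0.01):
    """LeakyReLU activation used by the proposed approach
    (alpha=0.01 is an assumed default, not stated in the paper)."""
    return x if x >= 0 else alpha * x
```

A reproduction attempt would still need the per-dataset architecture from the paper's Table 2, which is not restated here.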