Adversarial Network Embedding
Authors: Quanyu Dai, Qiang Li, Jian Tang, Dan Wang
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As shown by the empirical results, our method is competitive with or superior to state-of-the-art approaches on benchmark network embedding tasks. |
| Researcher Affiliation | Academia | (1) Department of Computing, The Hong Kong Polytechnic University, Hong Kong; (2) School of Software, FEIT, The University of Technology Sydney, Australia; (3) HEC Montreal, Canada; (4) Montreal Institute for Learning Algorithms, Canada |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The source code will be available online. |
| Open Datasets | Yes | Cora and Citeseer are paper citation networks constructed by (McCallum et al. 2000). Wiki (Sen et al. 2008) is a network... Cit-DBLP is a paper citation network extracted from DBLP dataset (Tang et al. 2008). |
| Dataset Splits | No | The paper states 'We range the training ratio from 10% to 90% for comprehensive evaluation' but does not specify validation splits or ratios explicitly. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments. |
| Software Dependencies | No | All experiments are carried out with support vector classifier in Liblinear package (Fan et al. 2008). The paper mentions Liblinear but does not provide its version number. (A hedged sketch of this evaluation setup appears below the table.) |
| Experiment Setup | Yes | For both DeepWalk and node2vec, the window size s, the walk length l and the number of walks η per node are set to 10, 80 and 10, respectively, for fair comparison. ... Specifically, the generator is a single-layer network with leaky ReLU activations (with a leak of 0.2) and batch normalization ... The number of negative samples K is set to 5... We use RMSProp optimizer with learning rate as 0.001. (A hedged sketch of these settings appears below the table.) |
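
The dataset-splits and software-dependencies rows describe the downstream evaluation: node classification with a Liblinear-backed linear SVM while the training ratio is varied from 10% to 90%. Since the paper's code is not released, the following is only a minimal sketch of that protocol under stated assumptions; the synthetic embeddings, labels, and classifier defaults are placeholders, not the authors' pipeline.

```python
# Sketch of the evaluation protocol: vary the training ratio from 10% to 90%
# and classify node embeddings with a liblinear-based linear SVM.
# Embeddings and labels below are random placeholders (assumption).
import numpy as np
from sklearn.svm import LinearSVC              # wraps the Liblinear solver
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2708, 128))      # Cora-sized toy data (assumption)
labels = rng.integers(0, 7, size=2708)

for train_ratio in [r / 10 for r in range(1, 10)]:   # 10% ... 90%
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, train_size=train_ratio, random_state=0)
    clf = LinearSVC().fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"train ratio {train_ratio:.0%}: "
          f"micro-F1={f1_score(y_test, pred, average='micro'):.3f}, "
          f"macro-F1={f1_score(y_test, pred, average='macro'):.3f}")
```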
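
The experiment-setup row quotes the generator architecture and optimizer settings. The PyTorch sketch below only illustrates those quoted hyperparameters (single-layer generator, leaky ReLU with slope 0.2, batch normalization, RMSProp at learning rate 0.001, K = 5 negative samples); the layer sizes, the ordering of batch norm and activation, and the placeholder loss are assumptions, not the paper's adversarial objective or code.

```python
# Illustrative-only sketch of the reported hyperparameters; not the authors' model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=128, embed_dim=128):
        super().__init__()
        # Single-layer generator with batch normalization and leaky ReLU (leak 0.2),
        # as stated in the paper; layer ordering and dimensions are assumptions.
        self.net = nn.Sequential(
            nn.Linear(noise_dim, embed_dim),
            nn.BatchNorm1d(embed_dim),
            nn.LeakyReLU(0.2),
        )

    def forward(self, z):
        return self.net(z)

gen = Generator()
optimizer = torch.optim.RMSprop(gen.parameters(), lr=0.001)  # RMSProp, lr = 0.001
K = 5  # number of negative samples per positive pair, as reported

# One illustrative update step on random noise; the real training alternates
# generator and discriminator updates, which are omitted here.
z = torch.randn(32, 128)
fake_embeddings = gen(z)
loss = fake_embeddings.pow(2).mean()   # placeholder loss, not the adversarial objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```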