SDGNN: Learning Node Representation for Signed Directed Networks
Authors: Junjie Huang, Huawei Shen, Liang Hou, Xueqi Cheng
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we first review two fundamental sociological theories (i.e., status theory and balance theory) and conduct empirical studies on real-world datasets to analyze the social mechanism in signed directed networks. |
| Researcher Affiliation | Academia | Junjie Huang, Huawei Shen, Liang Hou, Xueqi Cheng (CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China) |
| Pseudocode | No | The paper describes the proposed model and its components mathematically but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions using the authors' released code for baselines (e.g., "We use the authors released code for DeepWalk... SiGAT.") but provides no statement or link for the source code of its own proposed SDGNN model. |
| Open Datasets | Yes | We do experiments on five real-world signed social network datasets (i.e., Bitcoin-Alpha, Bitcoin-otc, Wikirfa, Slashdot and Epinions). The paper provides numbered footnotes with direct URLs for each of these datasets, e.g. http://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html and http://snap.stanford.edu/data/soc-sign-bitcoin-otc.html (a hedged loader sketch follows the table). |
| Dataset Splits | Yes | We randomly select 80% edges as training set and the remaining 20% as the test set. (A minimal sketch of this split follows the table.) |
| Hardware Specification | Yes | All experiments run on a computer with Intel Xeon E5-2640 CPU and 128GB RAM, which installs Linux CentOS 7.1. |
| Software Dependencies | No | The paper states: "Our models are implemented by PyTorch with the Adam optimizer". While it names the framework, it does not pin a version number for PyTorch or any other dependency. |
| Experiment Setup | Yes | Our models are implemented by PyTorch with the Adam optimizer (Learning Rate = 0.001, Weight Decay = 0.001). We use the 2-layer-GAT aggregators to build our model and set λ1 = 1 and λ2 = 1. (A configuration sketch with these settings follows the table.) |
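The paper points to SNAP landing pages but releases no code, so the following is a minimal, hypothetical loader for one of the footnoted datasets. The gzipped-CSV file name and the SOURCE,TARGET,RATING,TIME column layout are assumptions based on how SNAP distributes the Bitcoin-Alpha data, and collapsing ratings to +/-1 signs is standard signed-network preprocessing rather than a step the paper spells out.

```python
import csv
import gzip
import io
import urllib.request

# Assumed download location behind the paper's footnoted landing page
# http://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html
URL = "http://snap.stanford.edu/data/soc-sign-bitcoinalpha.csv.gz"

def load_signed_edges(url=URL):
    """Return (source, target, sign) triples from a SNAP signed-network CSV.

    Rows are assumed to be SOURCE,TARGET,RATING,TIME; ratings are collapsed
    to +1 (positive edge) or -1 (negative edge).
    """
    raw = urllib.request.urlopen(url).read()
    text = gzip.decompress(raw).decode("utf-8")
    edges = []
    for src, dst, rating, _time in csv.reader(io.StringIO(text)):
        edges.append((int(src), int(dst), 1 if int(rating) > 0 else -1))
    return edges
```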
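The 80%/20% random edge split quoted in the Dataset Splits row is straightforward to reproduce. This is a generic sketch, not the authors' procedure beyond the stated ratio; the (source, target, sign) triple format and the fixed seed are illustrative assumptions, since the paper does not say how the random selection was seeded.

```python
import random

def split_edges(edges, train_ratio=0.8, seed=0):
    """Randomly split an edge list into train/test sets at the given ratio."""
    rng = random.Random(seed)   # seed choice is an assumption
    shuffled = list(edges)      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Toy example: 80% of edges for training, the remaining 20% for testing
train_edges, test_edges = split_edges([(0, 1, 1), (1, 2, -1), (2, 0, 1)])
```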
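The Experiment Setup row gives concrete optimizer settings, wired up below in a minimal PyTorch sketch. The one-layer placeholder model is hypothetical (the actual SDGNN stacks 2-layer GAT aggregators), and λ1 and λ2 appear only as named constants here; how they weight the model's loss terms is defined in the paper itself.

```python
import torch

# Hypothetical stand-in encoder; the paper's model uses 2-layer GAT aggregators.
model = torch.nn.Linear(64, 64)

# Optimizer settings quoted from the paper's experiment setup.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.001,            # Learning Rate = 0.001
    weight_decay=0.001,  # Weight Decay = 0.001
)

# Loss-weighting coefficients reported in the paper (lambda_1 = lambda_2 = 1).
lambda_1, lambda_2 = 1.0, 1.0
```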