Discrete Embedding for Latent Networks

Authors: Hong Yang, Ling Chen, Minglong Lei, Lingfeng Niu, Chuan Zhou, Peng Zhang

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on real-world datasets show that the proposed model outperforms the state-of-the-art network embedding methods. We conduct experiments on real-world network data to validate the performance of the DELN model. |
| Researcher Affiliation | Collaboration | 1. Centre for Artificial Intelligence, University of Technology Sydney, Australia; 2. Faculty of Information Technology, Beijing University of Technology, China; 3. School of Economics and Management, University of Chinese Academy of Sciences, China; 4. Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China; 5. Ant Financial Services Group, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1: Discrete Embedding for Latent Networks (DELN). Require: cascades C, features X, dimension d, # of iterations τ1 and τ2, parameters T, α, β. Ensure: discrete representation matrix B. 1: Initialize W, Z, B randomly; 2: W-step: calculate W using Eq. (6); 3: calculate P using Eq. (1); 4: repeat until convergence or τ1 iterations; 5: Z-step: calculate Z using Eq. (8); 6: B-step: repeat until convergence or τ2 iterations; 7: for l = 1, …, d do; 8: update b_l using Eq. (13); 9: end for; 10: return matrix B |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Table 1 summarizes the datasets: Wiki [Yang et al., 2015] is a network of webpages; Citeseer [Lu and Getoor, 2003; Sen et al., 2008] is a scientific network whose nodes represent papers and whose edges are paper citations; Cora [Lu and Getoor, 2003; Sen et al., 2008] is another citation network, focused on machine-learning publications; and BlogCatalog [Huang et al., 2017] is a social network of blog users. |
| Dataset Splits | No | Following the setup in DeepWalk, we randomly sample a portion of nodes for training and the rest for testing. The ratio of training samples ranges from 10% to 90%. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, used for the experiments. |
| Experiment Setup | Yes | For all of the models, we set the embedding dimension as d = 128. The parameters of all baselines are set to their default values. We test DELN with respect to different parameters to validate its robustness, reporting the Micro-F1 and Macro-F1 scores of DELN with d varying from 16 to 256. |
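The pseudocode row above is easier to follow as executable control flow. Below is a minimal Python sketch of Algorithm 1's alternating structure only; the helpers `w_step`, `compute_P`, `z_step`, and `bit_update` are hypothetical placeholders for the paper's closed-form updates in Eqs. (6), (1), (8), and (13), which are not reproduced here.

```python
import random


def w_step(W):
    return W  # placeholder for the closed-form solution of Eq. (6)


def compute_P(W):
    return W  # placeholder for the matrix P of Eq. (1)


def z_step(Z, P):
    return P  # placeholder for the Z update of Eq. (8)


def bit_update(z):
    # sign-style bit update, a stand-in for the bit-wise rule of Eq. (13)
    return 1 if z >= 0 else -1


def deln_skeleton(n, d, tau1=10, tau2=5, seed=0):
    """Control-flow sketch of DELN's alternating optimization (Algorithm 1).

    Only the loop structure follows the pseudocode; all update rules are
    placeholders, not the paper's actual equations.
    """
    rng = random.Random(seed)
    # Step 1: initialize W, Z, B randomly; B is the n x d binary code matrix.
    B = [[rng.choice([-1, 1]) for _ in range(d)] for _ in range(n)]
    W = [[rng.random() for _ in range(d)] for _ in range(n)]
    Z = [[rng.random() for _ in range(d)] for _ in range(n)]

    W = w_step(W)        # steps 2: W-step via Eq. (6)
    P = compute_P(W)     # step 3: P via Eq. (1)

    for _ in range(tau1):                 # step 4: outer loop, at most tau1 iters
        Z = z_step(Z, P)                  # step 5: Z-step via Eq. (8)
        for _ in range(tau2):             # step 6: B-step, at most tau2 iters
            for l in range(d):            # step 7: one bit column at a time
                for i in range(n):
                    B[i][l] = bit_update(Z[i][l])   # step 8: Eq. (13)
    return B                              # step 10: discrete codes
```

The discrete constraint is what forces the bit-wise inner loop: each column of B is updated while the others are held fixed, a pattern common to discrete hashing methods.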
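The dataset-splits row describes a DeepWalk-style protocol: sample a fraction of nodes for training and use the rest for testing, with the training ratio swept from 10% to 90%. A minimal sketch of that sampling (the function name `split_nodes` is ours, not from the paper):

```python
import random


def split_nodes(nodes, train_ratio, seed=0):
    """Randomly sample train_ratio of the nodes for training, rest for testing."""
    rng = random.Random(seed)
    shuffled = nodes[:]          # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    cut = int(round(train_ratio * len(shuffled)))
    return shuffled[:cut], shuffled[cut:]


# Sweep training ratios from 10% to 90%, as in the reported setup.
nodes = list(range(100))
splits = {r / 10: split_nodes(nodes, r / 10) for r in range(1, 10)}
```

Fixing the seed per ratio makes the sweep reproducible; papers typically repeat each ratio over several random seeds and report the average score.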
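The experiment-setup row reports Micro-F1 and Macro-F1 scores. As a reference for what those two metrics compute (not the paper's evaluation code), here is a self-contained multi-class implementation:

```python
from collections import Counter


def f1_scores(y_true, y_pred):
    """Return (micro_f1, macro_f1) for single-label multi-class predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted class p, was actually t
            fn[t] += 1   # true class t was missed
    # Micro-F1 pools counts over all classes before computing F1
    # (for single-label multi-class data it coincides with accuracy).
    tp_all, fp_all, fn_all = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all) if tp_all else 0.0
    # Macro-F1 averages per-class F1 scores, weighting every class equally.
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(per_class)
    return micro, macro
```

Reporting both is standard for node classification because Macro-F1 exposes weak performance on rare classes that Micro-F1, dominated by frequent classes, can hide.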