SepNE: Bringing Separability to Network Embedding

Authors: Ziyao Li, Liang Zhang, Guojie Song (pp. 4261-4268)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluated SepNE with real-world networks of different sizes and topics. With comparable accuracy, our approach significantly outperforms state-of-the-art baselines in running times on large networks. We demonstrate the effectiveness of this approach on several real-world networks with different scales and subjects."
Researcher Affiliation | Academia | Yuanpei College, Peking University, China; Key Laboratory of Machine Perception, Ministry of Education, Peking University, China. {leeeezy, zl515, gjsong}@pku.edu.cn
Pseudocode | Yes | Algorithm 1: General framework of SepNE.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Wiki, Cora, and Citeseer are thousand-level document networks (available at https://linqs.soe.ucsc.edu/data). Flickr and YouTube are million-level social networks (available at http://socialnetworks.mpi-sws.org/datasets.html).
Dataset Splits | No | The paper states that "the training percentage was varied from 1% to 90%" for classification tasks, but it gives no explicit details on how training, validation, and test sets were constructed, nor fixed percentages or a methodology for reproducing the splits.
Hardware Specification | Yes | All efficiency experiments were conducted on a single machine with 128 GB memory and a 32-core 2.13 GHz CPU, using 16 workers.
Software Dependencies | No | The paper mentions the use of existing algorithms and methods such as LINE, DeepWalk, PageRank, Louvain, and SVD, but it does not specify any software libraries or packages with version numbers used for implementation or experimentation.
Experiment Setup | Yes | On document networks, parameters were set to iter = 100, λ = 0.4, η = 0.1, M = A + A², and k = 200; on social networks, iter = 5, λ = 50, η = 1, M = I + A, and k = 1000. Except where otherwise noted, the representation dimension for all algorithms was d = 128.
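The Dataset Splits row notes that only a varying labeled-training percentage (1% to 90%) is reported. A minimal sketch of one plausible split protocol, assuming a purely random node split and the Cora network size (both assumptions; the paper does not specify either):

```python
import numpy as np

# Hypothetical random node split for varying labeled-training fractions.
# The 1%-90% range is quoted from the paper; the network size (2,708
# nodes, as in Cora) and the random protocol are illustrative assumptions.
rng = np.random.default_rng(0)
n_nodes = 2708
perm = rng.permutation(n_nodes)

splits = {}
for train_pct in (0.01, 0.10, 0.50, 0.90):
    n_train = int(round(train_pct * n_nodes))
    # (train node indices, test node indices)
    splits[train_pct] = (perm[:n_train], perm[n_train:])
```

A reproducible report would pin such a protocol (random seed, stratification, validation carve-out) rather than only the percentage range.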
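The hyperparameters in the Experiment Setup row can be collected into a single configuration sketch. Only the values are taken from the paper; the key names below are our own illustrative labels, not identifiers from the authors' (unreleased) code:

```python
# Reported SepNE hyperparameters, organized as plain dicts.
DOCUMENT_NETWORKS = {
    "iters": 100,            # iter
    "lam": 0.4,              # λ
    "eta": 0.1,              # η
    "proximity": "A + A^2",  # proximity matrix M
    "k": 200,
}
SOCIAL_NETWORKS = {
    "iters": 5,
    "lam": 50,
    "eta": 1,
    "proximity": "I + A",
    "k": 1000,
}
EMBED_DIM = 128  # representation dimension d, shared by all algorithms
```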