MEGAN: A Generative Adversarial Network for Multi-View Network Embedding

Authors: Yiwei Sun, Suhang Wang, Tsung-Yu Hsieh, Xianfeng Tang, Vasant Honavar

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The results of our experiments on two real-world multi-view data sets show that the embeddings obtained using MEGAN outperform the state-of-the-art methods on node classification, link prediction and visualization tasks.
Researcher Affiliation | Academia | Yiwei Sun (1), Suhang Wang (2), Tsung-Yu Hsieh (1), Xianfeng Tang (2), Vasant Honavar (1,2); (1) Department of Computer Science and Engineering, The Pennsylvania State University, USA; (2) College of Information Sciences and Technology, The Pennsylvania State University, USA; {yus162,szw494,tuh45,xut10,vuh14}@psu.edu
Pseudocode | Yes | Algorithm 1 MVGAN framework
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is being released.
Open Datasets | Yes | We use the following multi-view network data sets [Bui et al., 2016] in our experiments: (i) Last.fm: Last.fm data were collected from the online music network Last.fm (https://www.last.fm). (ii) Flickr: Flickr data were collected from the Flickr photo sharing service.
Dataset Splits | No | The paper mentions training and testing splits but does not specify a separate validation set or exact percentages for all three partitions (e.g., 70/15/15).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components such as the 't-SNE package' but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | The embedding dimension was set to 128 in all of our experiments. We chose 50% of the nodes randomly for training and the remaining for testing. We used different choices of the dimension d ∈ {16, 32, 64, 128, 256, 512}. (See the sketch after this table.)
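The Experiment Setup row quotes a 50% random train/test split over nodes and an embedding-dimension sweep over {16, 32, 64, 128, 256, 512}. Below is a minimal sketch of that evaluation protocol, assuming a scikit-learn logistic-regression classifier and micro-F1 scoring (common choices for node-classification benchmarks, not confirmed by the paper); `learn_megan_embeddings` is a hypothetical placeholder for the unreleased MEGAN training code.

```python
# Sketch of the reported protocol: 50% random node split, dimension sweep.
# Classifier and metric are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def evaluate_embeddings(embeddings, labels, train_ratio=0.5, seed=0):
    """Train/test a classifier on node embeddings (NumPy arrays) with a random 50/50 split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_train = int(train_ratio * len(labels))
    train_idx, test_idx = idx[:n_train], idx[n_train:]

    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[train_idx], labels[train_idx])
    preds = clf.predict(embeddings[test_idx])
    return f1_score(labels[test_idx], preds, average="micro")

# Dimension sweep reported in the paper; `learn_megan_embeddings` is a
# hypothetical stand-in for the (unreleased) MEGAN training code.
# for d in [16, 32, 64, 128, 256, 512]:
#     emb = learn_megan_embeddings(multi_view_graph, dim=d)
#     print(d, evaluate_embeddings(emb, node_labels))
```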