GraphGAN: Graph Representation Learning With Generative Adversarial Nets

Authors: Hongwei Wang, Jia Wang, Jialin Wang, Miao Zhao, Weinan Zhang, Fuzheng Zhang, Xing Xie, Minyi Guo

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines.
Researcher Affiliation | Collaboration | (1) Shanghai Jiao Tong University, wanghongwei55@gmail.com, {wnzhang, myguo}@sjtu.edu.cn; (2) Microsoft Research Asia, {fuzzhang, xing.xie}@microsoft.com; (3) The Hong Kong Polytechnic University, {csjiawang, csmiaozhao}@comp.polyu.edu.hk; (4) Huazhong University of Science and Technology, wangjialin@hust.edu.cn
Pseudocode | Yes | Algorithm 1: Online generating strategy for the generator. Algorithm 2: GraphGAN framework.
Open Source Code | Yes | https://github.com/hwwang55/GraphGAN
Open Datasets | Yes | We utilize the following five datasets in our experiments: arXiv-AstroPh is from the e-print arXiv... arXiv-GrQc is also from arXiv... BlogCatalog... Wikipedia... MovieLens-1M... Dataset URLs: https://snap.stanford.edu/data/ca-AstroPh.html, https://snap.stanford.edu/data/ca-GrQc.html, http://socialcomputing.asu.edu/datasets/BlogCatalog, http://www.mattmahoney.net/dc/textdata, https://grouplens.org/datasets/movielens/1m/
Dataset Splits | Yes | The above hyper-parameters are chosen by cross validation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running the experiments. It only mentions performing 'stochastic gradient descent'.
Software Dependencies | No | The paper mentions using a 'logistic regression method' and refers to 'Skip-Gram' as a component of baselines, but it does not specify software dependencies with version numbers (e.g., Python 3.x, TensorFlow 2.x, scikit-learn 0.x).
Experiment Setup | Yes | For all three experiment scenarios, we perform stochastic gradient descent to update parameters in GraphGAN with learning rate 0.001. In each iteration, we set s as 20 and t as the number of positive samples in the test set for each vertex, then run G-steps and D-steps for 30 times, respectively. The dimension of representation vectors k for all methods is set as 20.
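The pseudocode (Algorithms 1 and 2) and the reported hyper-parameters can be combined into a rough sketch of the adversarial training loop. The sketch below is an illustrative simplification, not the authors' exact implementation: the toy ring graph, the full-softmax generator sampling (the paper's Algorithm 1 uses an online tree-based strategy), and the simplified reward-weighted G-update are all assumptions; only the numbers (learning rate 0.001, s = 20, 30 G-/D-steps per iteration, k = 20) come from the reported setup.

```python
import numpy as np

# Hedged sketch of a GraphGAN-style loop with the reported hyper-parameters.
# The graph, sampling strategy, and generator update are simplified assumptions.
rng = np.random.default_rng(0)

n_vertices, k = 50, 20          # k = 20 as in the reported setup
lr, s, n_steps = 0.001, 20, 30  # learning rate, samples per vertex, G-/D-steps

# Toy graph: a ring, as a stand-in for a real edge list
edges = [(i, (i + 1) % n_vertices) for i in range(n_vertices)]

# Separate embedding tables for the generator G and discriminator D
G_emb = rng.normal(scale=0.1, size=(n_vertices, k))
D_emb = rng.normal(scale=0.1, size=(n_vertices, k))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator_sample(v, num):
    """Sample `num` vertices for v with probability softmax(g_v . g_u).
    (Full softmax here; Algorithm 1 replaces this with an online strategy.)"""
    scores = G_emb @ G_emb[v]
    scores[v] = -np.inf                      # exclude the vertex itself
    probs = np.exp(scores - scores[np.isfinite(scores)].max())
    probs /= probs.sum()
    return rng.choice(n_vertices, size=num, p=probs)

for iteration in range(3):                   # a few outer iterations for the demo
    # D-steps: D(v, u) = sigmoid(d_v . d_u); raise it on true edges,
    # lower it on generator-sampled pairs.
    for _ in range(n_steps):
        v, u = edges[rng.integers(len(edges))]
        for fake in generator_sample(v, s):
            p_true = sigmoid(D_emb[v] @ D_emb[u])
            p_fake = sigmoid(D_emb[v] @ D_emb[fake])
            grad_u = (1.0 - p_true) * D_emb[v]
            grad_fake = -p_fake * D_emb[v]
            grad_v = (1.0 - p_true) * D_emb[u] - p_fake * D_emb[fake]
            D_emb[u] += lr * grad_u
            D_emb[fake] += lr * grad_fake
            D_emb[v] += lr * grad_v
    # G-steps: reward-weighted update pulling g_v toward sampled g_u
    # (a simplification of the paper's policy-gradient update).
    for _ in range(n_steps):
        v = rng.integers(n_vertices)
        for u in generator_sample(v, s):
            reward = np.log1p(np.exp(D_emb[v] @ D_emb[u]))
            G_emb[v] += lr * reward * (G_emb[u] - G_emb[v])

print(G_emb.shape, D_emb.shape)
```

Keeping two independent embedding tables mirrors the paper's design, where the generator's and discriminator's representations are trained against each other and either table can be used as the final node embedding.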