Graph Game Embedding

Authors: Xiaobin Hong, Tong Zhang, Zhen Cui, Yuge Huang, Pengcheng Shen, Shaoxin Li, Jian Yang. Pages 7711-7720.

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the proposed method on three public datasets about citation networks, and the experimental results verify the effectiveness of our method.
Researcher Affiliation | Collaboration | School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; Youtu Lab, Tencent
Pseudocode | Yes | Algorithm 1: Graph Game Embedding Algorithm
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | Three citation networks, named Cora, Citeseer and Pubmed, are employed to evaluate our proposed method. Cora dataset consists of 2708 scientific publications of seven classes with 5429 existing links, while Citeseer dataset consists of 3312 scientific publications classified into one of six classes with totally 4732 links. As the largest dataset among the three, Pubmed citation network consists of 19717 scientific publications of three classes with 44338 links.
Dataset Splits | Yes | For all three datasets, there are 20 samples in each class for training, 500 samples for validation, and 1000 samples for testing.
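The split sizes above (20 labelled nodes per class for training, 500 for validation, 1000 for testing) can be sketched as a mask-building routine. This is a hypothetical reconstruction: the paper does not list the exact node indices used, so the sketch below draws a random split of the same sizes.

```python
import numpy as np

def make_planetoid_style_split(labels, n_train_per_class=20, n_val=500,
                               n_test=1000, seed=0):
    """Sketch of the split described in the report: 20 labelled nodes
    per class for training, 500 for validation, 1000 for testing.
    The authors' exact indices are not given; this draws a random
    split with the same sizes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n = labels.shape[0]

    # Pick n_train_per_class nodes from each class for training.
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(idx, size=n_train_per_class, replace=False))
    train_idx = np.array(train_idx)

    # Validation and test sets come from the remaining nodes.
    rest = rng.permutation(np.setdiff1d(np.arange(n), train_idx))
    val_idx, test_idx = rest[:n_val], rest[n_val:n_val + n_test]
    return train_idx, val_idx, test_idx
```

For Cora (2708 nodes, seven classes) this yields 140 training, 500 validation, and 1000 test nodes, matching the counts quoted above.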
Hardware Specification | No | No specific details about the hardware (e.g., GPU/CPU models, memory, or cloud resources) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names or programming language versions) needed to replicate the experiments.
Experiment Setup | Yes | In this process, the architectures of the models are kept the same while slight difference exists in parameter settings and implementation details. For both tasks, in Eqn. (11) of parametrized learning, we stack the function f(·) twice in the model and set K to be 8. Specifically, the projection matrix Wk yields a 64-dimensional output vector in unsupervised learning while a 128-dimensional vector in semi-supervised learning. For node sampling, the number of negative samples is set twice of the positive, where the positive number is set as 5 for all experiments. Specifically, positive samples come from the neighbours while negative samples are chosen from the nodes that cannot reach the central node in two steps.
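The sampling rule quoted above (positives from the neighbours of a central node; negatives from nodes that cannot reach it within two steps; twice as many negatives as positives, with 5 positives) can be sketched as follows. The function name and adjacency representation are assumptions for illustration, not the authors' implementation.

```python
import random

def sample_pairs(adj, center, n_pos=5, n_neg=10, seed=0):
    """Hypothetical sketch of the node-sampling rule: positives are
    neighbours of `center`; negatives are nodes not reachable from it
    within two hops. `adj` maps each node to the set of its
    neighbours. n_neg defaults to 2 * n_pos, as in the paper."""
    rng = random.Random(seed)
    neighbours = adj[center]

    # Every node within two hops of the centre is excluded from negatives.
    two_hop = set(neighbours) | {center}
    for v in neighbours:
        two_hop |= adj[v]

    candidates = [v for v in adj if v not in two_hop]
    pos = rng.sample(sorted(neighbours), min(n_pos, len(neighbours)))
    neg = rng.sample(candidates, min(n_neg, len(candidates)))
    return pos, neg
```

The two-hop exclusion set is built by one explicit neighbour expansion, which matches the "cannot reach the central node in two steps" criterion for unweighted, undirected graphs.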