JANE: Jointly Adversarial Network Embedding

Authors: Liang Yang, Yuexue Wang, Junhua Gu, Chuan Wang, Xiaochun Cao, Yuanfang Guo

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the remarkable superiority of the proposed JANE on link prediction (3% gains in both AUC and AP) and node clustering (5% gain in F1 score).
Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Hebei University of Technology, China; (2) State Key Laboratory of Information Security, Institute of Information Engineering, CAS, China; (3) School of Computer Science and Engineering, Beihang University, China
Pseudocode | No | The paper describes the framework components and objective function mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statement about releasing source code for the described method and no link to a code repository.
Open Datasets | Yes | To demonstrate the superiority of the proposed JANE framework, link prediction and node clustering tasks are conducted on three widely used citation networks (Cora, Citeseer and Pubmed, as shown in Table 1).
Dataset Splits | Yes | For each citation network, the edges are randomly divided into three groups: 85%, 5% and 10% of the edges are used for training, validation (hyper-parameter tuning) and performance testing, respectively. (A split sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper states that the Adam optimizer is adopted but provides no version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | For all the experiments, two attention layers with 8 and 1 attention heads are adopted for the attention-based embedding module, and three fully-connected layers are employed in both the discriminator and the generator. For fair comparison, the dimension of each embedding, i.e. P, is set to 16 for all methods. The Adam optimizer is used, with initial learning rates of 0.001 for the discriminator and 0.008 for the other two components. Both L2 regularization and dropout are exploited. (A configuration sketch follows the table.)