Adversarial Deep Network Embedding for Cross-Network Node Classification

Authors: Xiao Shen, Quanyu Dai, Fu-lai Chung, Wei Lu, Kup-Sze Choi

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate that the proposed ACDNE model achieves state-of-the-art performance in cross-network node classification.
Researcher Affiliation | Academia | Xiao Shen (The Hong Kong Polytechnic University, xiao.shen@connect.polyu.hk); Quanyu Dai (The Hong Kong Polytechnic University, quanyu.dai@polyu.hk); Fu-lai Chung (The Hong Kong Polytechnic University, cskchung@comp.polyu.edu.hk); Wei Lu (University of Electronic Science and Technology of China, luwei@uestc.edu.cn); Kup-Sze Choi (The Hong Kong Polytechnic University, thomasks.choi@polyu.edu.hk)
Pseudocode | Yes | Algorithm 1: ACDNE
Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the described methodology.
Open Datasets | Yes | ACDNE was evaluated on the cross-network datasets of (Shen et al. 2019); the statistics are shown in Table 1. Blog1 and Blog2 are two disjoint social networks extracted from the BlogCatalog dataset (Li et al. 2015)... Citationv1, DBLPv7 and ACMv9 are three citation networks extracted from the ArnetMiner dataset (Tang et al. 2008).
Dataset Splits | No | The paper describes training on the source network and testing on the target network, but does not explicitly mention a separate validation split for hyperparameter tuning or early stopping criteria.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU, GPU models, memory).
Software Dependencies | No | The paper mentions using PCA (Mackiewicz and Ratajczak 1993) as a pre-processing step, but does not provide specific software dependencies or library versions used for the implementation of ACDNE or the experiments (a hypothetical PCA sketch follows the table).
Experiment Setup | Yes | In the experiments, we set K-step as 3 when measuring the PPMI topological proximities between nodes within each network. In ACDNE, both FE1 and FE2 are constructed with two hidden layers, with the hidden dimensionalities set as $f^{(1)} = 512$ and $f^{(2)} = 128$. The dimensionality of node representations learned by ACDNE is set as $d = 128$; for fair comparison, the same dimensionality is also set for the other baselines. In addition, the domain discriminator is constructed with two hidden layers with dimensionalities $d^{(1)} = d^{(2)} = 128$. The weight of the pairwise constraint is set as 0.1 for the sparse citation networks and as $10^{-3}$ for the dense Blog networks. Besides, an L2-norm regularization term with a weight of $10^{-3}$ is imposed on the trainable weights to prevent overfitting. ACDNE is trained by SGD with a momentum rate of 0.9 over shuffled minibatches with a batch size of 100. Following (Ganin et al. 2016), the learning rate is decayed as $\mu_p = \mu_0 / (1 + 10p)^{0.75}$, where $\mu_0$ is the initial learning rate (set as 0.01 for the Blog networks and 0.02 for the citation networks) and $p$ is the training progress linearly changing from 0 to 1, while the domain adaptation weight is progressively increased as $\lambda = 2 / (1 + \exp(-10p)) - 1$ (see the schedule sketch after the table).
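
On the Software Dependencies row: since the paper names PCA only as a pre-processing step without fixing a library, the following is a minimal sketch of what that step could look like. The library choice (scikit-learn), the attribute-matrix shape, and the target dimensionality are all assumptions, not details from the paper.

```python
# Hypothetical PCA pre-processing sketch: reduce node attributes before
# feeding them to the embedding model. Shapes and n_components are assumed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((1000, 5000))       # placeholder node-attribute matrix

pca = PCA(n_components=200)        # assumed reduced dimensionality
X_reduced = pca.fit_transform(X)   # rows: nodes, columns: principal components
print(X_reduced.shape)             # (1000, 200)
```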
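
On the Experiment Setup row: the decay and ramp-up formulas are the standard schedules of Ganin et al. (2016). A self-contained sketch follows; only the constants quoted above (exponent 0.75, factor 10, and the initial learning rates) come from the paper, everything else is illustrative.

```python
import math

MU_0_BLOG = 0.01      # initial learning rate for the Blog networks
MU_0_CITATION = 0.02  # initial learning rate for the citation networks

def learning_rate(p: float, mu_0: float) -> float:
    """Decayed learning rate mu_p = mu_0 / (1 + 10p)^0.75,
    with p the training progress in [0, 1]."""
    return mu_0 / (1.0 + 10.0 * p) ** 0.75

def domain_adaptation_weight(p: float) -> float:
    """lambda = 2 / (1 + exp(-10p)) - 1, ramping from 0 towards 1
    as training progresses (the DANN schedule of Ganin et al. 2016)."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

# Schedule values at the start, middle, and end of training.
for p in (0.0, 0.5, 1.0):
    print(p, learning_rate(p, MU_0_CITATION), domain_adaptation_weight(p))
```

Likewise, the K-step PPMI proximities (K = 3 per the quoted setup) can be illustrated with the common PPMI construction: average the 1..K-step random-walk transition probabilities, then take the positive pointwise mutual information of the aggregate. Whether ACDNE aggregates exactly this way is an assumption.

```python
import numpy as np

def ppmi(adjacency: np.ndarray, k_steps: int = 3) -> np.ndarray:
    """Sketch of K-step PPMI topological proximities (K = 3 in the paper)."""
    # Row-normalize the adjacency matrix into a transition matrix.
    row_sums = adjacency.sum(axis=1, keepdims=True)
    transition = adjacency / np.maximum(row_sums, 1e-12)

    # Average the transition probabilities over 1..k_steps walk steps.
    step = np.eye(adjacency.shape[0])
    aggregated = np.zeros_like(transition)
    for _ in range(k_steps):
        step = step @ transition
        aggregated += step
    aggregated /= k_steps

    # PPMI(i, j) = max(log(p(i, j) / (p(i) * p(j))), 0).
    total = aggregated.sum()
    row = aggregated.sum(axis=1, keepdims=True)
    col = aggregated.sum(axis=0, keepdims=True)
    pmi = np.log(np.maximum(aggregated * total, 1e-12)
                 / np.maximum(row @ col, 1e-12))
    return np.maximum(pmi, 0.0)
```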