Adversarial Learning for Weakly-Supervised Social Network Alignment

Authors: Chaozhuo Li, Senzhang Wang, Yukun Wang, Philip Yu, Yanbo Liang, Yun Liu, Zhoujun Li

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we evaluate the proposed models over multiple datasets, and the results demonstrate the superiority of our proposals.
Researcher Affiliation | Collaboration | (1) State Key Lab of Software Development Environment, Beihang University; (2) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; (3) Department of Electrical and Computer Engineering, National University of Singapore; (4) Tsinghua University; (5) Computer Science Department, University of Illinois at Chicago; (6) Hortonworks, USA
Pseudocode | Yes | Algorithm 1: Training process of SNNA_u
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | DBLP (http://dblp.uni-trier.de/) is a computer science bibliography website, and its dataset is publicly available.
Dataset Splits | No | The paper specifies 'training data' and a 'test set' for evaluation but does not explicitly mention a 'validation set' or a split used for validation purposes.
Hardware Specification | No | The paper discusses model parameters and training configurations but does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components like the 'NLTK stemmer' and 'RMSProp' but does not provide specific version numbers for any of the software dependencies used in the experiments.
Experiment Setup | Yes | For our proposals, the dimension d of the latent feature space is set to 100. The discriminator D in all SNNA models is a multi-layer perceptron network with only one hidden layer... The mini-batch size is 256, and the learning rate α is set to 0.0001. As mentioned in Algorithm 1, the discriminator is trained n_d times in each training iteration, and n_d is set to 5. The clipping weight c is 0.01, the annotation weight λ_c is set to 0.2, and the reconstruction weight λ_r is set to 0.3.
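The reported setup follows a WGAN-style schedule: per outer iteration, the discriminator takes n_d clipped gradient steps before the projection function is updated once. A minimal sketch of that configuration and loop skeleton is below; only the numeric hyperparameters come from the paper, while the function and key names (`CONFIG`, `clip_weights`, `train_iteration`, and the `d_update`/`g_update` stand-ins for the actual gradient steps) are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
# Key names are illustrative, not taken from any released code.
CONFIG = {
    "embedding_dim": 100,          # dimension d of the latent feature space
    "batch_size": 256,             # mini-batch size
    "learning_rate": 1e-4,         # alpha, used with RMSProp
    "n_critic": 5,                 # n_d: discriminator steps per iteration
    "clip_weight": 0.01,           # c: WGAN weight-clipping bound
    "annotation_weight": 0.2,      # lambda_c
    "reconstruction_weight": 0.3,  # lambda_r
}

def clip_weights(params, c):
    """Clamp every discriminator parameter array to [-c, c] (WGAN clipping)."""
    return [np.clip(w, -c, c) for w in params]

def train_iteration(d_params, d_update, g_update, cfg=CONFIG):
    """One outer iteration in the spirit of Algorithm 1: n_d discriminator
    steps (each followed by weight clipping), then one generator/projection
    step. d_update and g_update are placeholders for real gradient steps."""
    for _ in range(cfg["n_critic"]):
        d_params = clip_weights(d_update(d_params), cfg["clip_weight"])
    g_update()
    return d_params
```

For example, `train_iteration([np.array([0.5, -0.5])], lambda p: p, lambda: None)` returns parameters whose entries all lie in [-0.01, 0.01], showing the clipping applied after each of the five critic steps.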