Adversarial Mutual Information Learning for Network Embedding

Authors: Dongxiao He, Lu Zhai, Zhigang Li, Di Jin, Liang Yang, Yuxiao Huang, Philip S. Yu

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "A range of empirical results demonstrate the effectiveness of this new approach." (Section 4, Experiments): "We first give the experimental setup, and then compare the new approach AMIL with some state-of-the-art methods on three network analysis tasks, i.e., node classification, node clustering and network visualization."
Researcher Affiliation | Academia | College of Intelligence and Computing, Tianjin University, Tianjin, China; School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; Data Science, George Washington University, Washington, D.C., USA; Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA; Institute for Data Science, Tsinghua University, Beijing, China
Pseudocode | No | The paper describes the model and processes using mathematical equations and text, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code, nor a direct link to a code repository for the described methodology.
Open Datasets | Yes | "Datasets. Seven publicly available datasets with varying sizes and characteristics are used..." The datasets are hosted at https://linqs.soe.ucsc.edu/data.
Dataset Splits | Yes | "For each network, we used 10-fold cross-validation and accuracy (AC) as the metric to evaluate the performance of all methods." (A sketch of this evaluation protocol follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cluster specifications) used to run the experiments.
Software Dependencies | No | The paper mentions "pytorch deep learning tools" and the "LibSVM and LibLINEAR software packages in Weka", but gives no version numbers or any other versioned software dependencies.
Experiment Setup | Yes | "Parameter Settings. The final embedding dimension is often set to a power of 2. To ensure fairness, we uniformly set it to 64 for all the methods on all the datasets. ... In our approach AMIL, for the encoder, we use the classic two-layer GCN, which has 128 units in the first layer and 64 units in the second layer. For the discriminators (DNSG and D), we use a three-layer fully connected neural network: 512 units in each of the first two layers and 1 unit in the last layer. We use ReLU(·) as the activation function in the first two layers, and Sigmoid(·) in the last layer. We use PyTorch to learn the model with a learning rate of 0.001." (A sketch of this architecture follows the table.)
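
To make the reported architecture concrete, here is a minimal PyTorch sketch of the components named in the Experiment Setup row. Only the layer sizes, activations, and the 0.001 learning rate come from the paper; the standard GCN propagation rule (A_hat @ H @ W), the Adam optimizer, the discriminator input dimension, and the Cora-sized input (1433 features) are assumptions for illustration.

```python
# A hypothetical sketch of the described architecture; not the authors' code.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W (standard GCN rule, assumed)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.linear(h)

class Encoder(nn.Module):
    """Two-layer GCN encoder: 128 hidden units, 64-dimensional embeddings."""
    def __init__(self, in_dim, hidden_dim=128, emb_dim=64):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, emb_dim)

    def forward(self, a_hat, x):
        h = torch.relu(self.gc1(a_hat, x))
        return self.gc2(a_hat, h)

def make_discriminator(in_dim):
    """Three fully connected layers (512, 512, 1); ReLU twice, then Sigmoid."""
    return nn.Sequential(
        nn.Linear(in_dim, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 1), nn.Sigmoid(),
    )

encoder = Encoder(in_dim=1433)   # e.g., Cora's feature dimension (assumed input)
d_nsg = make_discriminator(64)   # discriminator input size is an assumption
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.001)  # optimizer assumed
```

The adversarial training loop itself (how DNSG and D are pitted against the encoder) is defined by the paper's loss equations and is omitted here.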
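
Likewise, the Dataset Splits row implies the following evaluation protocol. This sketch uses scikit-learn's LinearSVC as a stand-in for the LibLINEAR classifier the paper runs in Weka; `embeddings` and `labels` are random placeholders for the learned 64-dimensional node embeddings and node class labels (sized here like Cora: 2708 nodes, 7 classes).

```python
# A hypothetical sketch of the 10-fold cross-validation protocol.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2708, 64))  # placeholder node embeddings
labels = rng.integers(0, 7, size=2708)    # placeholder class labels

# Accuracy (AC) averaged over 10 folds, as stated in the paper.
scores = cross_val_score(LinearSVC(), embeddings, labels,
                         cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```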