ActiveHNE: Active Heterogeneous Network Embedding

Authors: Xia Chen, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Zhao Li, Xiangliang Zhang

IJCAI 2019

Reproducibility Assessment (variable, result, and supporting LLM response)

Research Type: Experimental
LLM Response: "Our experimental study on three real-world HINs demonstrates the effectiveness of ActiveHNE on embedding HINs, and on saving the query cost." "Experiments on public datasets demonstrate the effectiveness of ActiveHNE and its advantage on reducing the query cost."

Researcher Affiliation: Collaboration
LLM Response: "Xia Chen (1), Guoxian Yu (1,2), Jun Wang (1), Carlotta Domeniconi (3), Zhao Li (4), Xiangliang Zhang (2); (1) College of Computer and Information Sciences, Southwest University, Chongqing, China; (2) CEMSE, King Abdullah University of Science and Technology, Thuwal, SA; (3) Department of Computer Science, George Mason University, VA, USA; (4) Alibaba Group, Hangzhou, China. {xchen, gxyu, kingjun}@swu.edu.cn, carlotta@cs.gmu.edu, lizhao.lz@alibaba-inc.com, xiangliang.zhang@kaust.edu.sa"

Pseudocode: No
LLM Response: The paper describes its algorithms and methods in text and mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks.

Open Source Code: Yes
LLM Response: "The code and supplemental file of ActiveHNE are available at http://mlda.swu.edu.cn/codes.php?name=ActiveHNE."

Open Datasets: Yes
LLM Response: "We evaluate ActiveHNE on three real-world HINs extracted from DBLP (https://dblp.uni-trier.de/db/), Cora (http://web.cs.ucla.edu/~yzsun/data/), and MovieLens (https://grouplens.org/datasets/movielens/)."

Dataset Splits: Yes (see the split sketch after this entry)
LLM Response: "Following the experimental settings in [Kipf and Welling, 2017], we randomly divide the labeled nodes into three parts: the training set (25% of the labeled nodes), the validation set (25% of the labeled nodes for hyperparameter optimization in DHNE), and the remaining as the testing set."

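As a concrete illustration, the 25%/25%/50% split quoted above can be written as a short NumPy sketch. This is a minimal sketch, not the authors' code: the function name, the seed, and the representation of labeled nodes as an index array are assumptions for illustration.

import numpy as np

def split_labeled_nodes(labeled_idx, seed=0):
    # Randomly split labeled node indices into 25% train,
    # 25% validation (hyperparameter tuning for DHNE), and
    # the remaining ~50% test, per the setup quoted above.
    rng = np.random.default_rng(seed)              # reproducible shuffle
    idx = rng.permutation(np.asarray(labeled_idx))
    n_train = len(idx) // 4                        # 25% training
    n_val = len(idx) // 4                          # 25% validation
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]                   # remaining nodes for testing
    return train, val, test

# Example: 1,000 labeled nodes -> 250 train / 250 validation / 500 test.
train_idx, val_idx, test_idx = split_labeled_nodes(range(1000))
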
Hardware Specification: No
LLM Response: The paper does not provide specific details about the hardware used for the experiments (e.g., GPU models, CPU types, or memory specifications).

Software Dependencies: No
LLM Response: The paper mentions the Adam optimizer and adopts parameter settings from GCN [Kipf and Welling, 2017], but it does not name specific libraries or provide version numbers for any software dependencies.

Experiment Setup: Yes (see the configuration sketch after this entry)
LLM Response: "For the proposed DHNE and ActiveHNE, we simply set K = 1 for comparative evaluation, and leave the investigation on K in the supplemental file. We train DHNE using a network with two convolutional layers and one fully connected layer as described in Section 3.1, with a maximum of 200 epochs (training iterations) using Adam. The dimensionality of the two convolutional filters is 16 and C, respectively. We use an L2 regularization factor for all the three layers. The remaining parameters are fixed as in GCN [Kipf and Welling, 2017]. For metapath2vec and HHNE, we apply the commonly used meta-path schemes APA and APCPA on DBLP and Cora, and we use DMTMD and DMUMD on MovieLens to guide metapath-based random walks. The walk length and the number of walks per node are set to 80 and 40 as in HHNE, respectively."
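
The reported DHNE configuration (two convolutional layers with filter dimensionalities 16 and C, one fully connected layer, Adam, at most 200 epochs, L2 regularization on all three layers) can be sketched as below. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the GCN-style propagation rule (a_hat @ ...), the class name DHNESketch, and the values lr=0.01 and weight_decay=5e-4 (the GCN defaults the paper defers to) are all assumptions; DHNE's actual convolution operates on a decomposed heterogeneous network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DHNESketch(nn.Module):
    # Two graph-convolutional layers (filter dims 16 and C) plus one
    # fully connected layer, mirroring the layer counts and
    # dimensionalities quoted above. The GCN-style propagation rule
    # here is an assumption for illustration.
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, 16, bias=False)       # conv filter 1: 16 dims
        self.w2 = nn.Linear(16, num_classes, bias=False)  # conv filter 2: C dims
        self.fc = nn.Linear(num_classes, num_classes)     # fully connected layer

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency (N x N); x: node features (N x in_dim)
        h = F.relu(a_hat @ self.w1(x))
        h = F.relu(a_hat @ self.w2(h))
        return self.fc(h)

def train_dhne(model, a_hat, x, labels, train_idx, epochs=200):
    # Adam for at most 200 epochs; weight_decay approximates the L2
    # regularization the paper applies to all three layers. lr and
    # weight_decay follow GCN defaults ("remaining parameters are
    # fixed as in GCN"), which is an assumption here.
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(a_hat, x)
        loss = F.cross_entropy(logits[train_idx], labels[train_idx])
        loss.backward()
        opt.step()

The meta-path walk settings quoted for the metapath2vec and HHNE baselines (schemes APA/APCPA and DMTMD/DMUMD, walk length 80, 40 walks per node) configure those baselines rather than DHNE, so they are not part of this sketch.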