Dynamic Embedding on Textual Networks via a Gaussian Process

Authors: Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Chen, David Carlson, Lawrence Carin

AAAI 2020, pp. 7562-7569 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the proposed approach, the learned node embeddings are used for link prediction and node classification; performance on these downstream tasks demonstrates that the learned embeddings capture relevant information. We also perform these tasks on dynamic textual networks and visualize the learned inducing points. Empirically, DetGP outperforms other models on downstream tasks, yielding efficient and accurate predictions on newly added nodes. (A toy sketch of this evaluation follows the table.)
Researcher Affiliation | Academia | Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Chen, David Carlson, Lawrence Carin (Duke University); contact: pengyu.cheng@duke.edu
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability.
Open Datasets | Yes | Cora is a paper citation network... DBLP is a paper citation network... HepTh (High Energy Physics Theory) (Leskovec, Kleinberg, and Faloutsos 2007) is another paper citation network.
Dataset Splits | No | The paper describes training and testing sets and mentions a 'hold-out set' for evaluation, but it does not provide specific percentages or a methodology for a dedicated validation split distinct from the test set.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions that 'Adam (Kingma and Ba 2015) is used to optimize the parameters' and that 'a linear SVM classifier' is used for classification, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | The embedding for each node has dimension 200, a concatenation of a 100-dimensional textual embedding and a 100-dimensional structural embedding. The maximum number of hops J in P is set to 3. The inducing points are initialized as the k-means centers of the encoded text features and are updated with a smaller learning rate, set to one-tenth of the learning rate for the text encoder. (A configuration sketch follows the table.)
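
To make the reported configuration concrete, below is a minimal sketch in PyTorch with scikit-learn's k-means. Only the 100+100 embedding dimensions, J = 3, the k-means initialization of the inducing points, the Adam optimizer, and the one-tenth inducing-point learning rate come from the paper as summarized above; the encoder and structural-embedding modules, the number of inducing points, the base learning rate, the stand-in data, and the hop-combination rule in `propagate` are placeholder assumptions.

```python
# Minimal sketch of the reported setup; module shapes and stand-in data are
# placeholders, as noted in the surrounding text.
import torch
from sklearn.cluster import KMeans

TEXT_DIM = 100      # textual embedding dimension (reported)
STRUCT_DIM = 100    # structural embedding dimension (reported)
J = 3               # maximum number of hops in P (reported)
NUM_INDUCING = 64   # number of inducing points: assumed, not reported here
N_NODES, RAW_DIM = 1_000, 300  # stand-in network/text sizes

text_encoder = torch.nn.Linear(RAW_DIM, TEXT_DIM)       # placeholder text encoder
struct_embed = torch.nn.Embedding(N_NODES, STRUCT_DIM)  # placeholder structural table
raw_text = torch.randn(N_NODES, RAW_DIM)                # stand-in node text features

# Initialize inducing points as the k-means centers of the encoded text features.
with torch.no_grad():
    encoded = text_encoder(raw_text)
centers = KMeans(n_clusters=NUM_INDUCING, n_init=10).fit(encoded.numpy()).cluster_centers_
inducing_points = torch.nn.Parameter(torch.as_tensor(centers, dtype=torch.float32))

# Inducing points get one-tenth of the text encoder's learning rate.
base_lr = 1e-3  # assumed; the paper states only the 1:10 ratio
optimizer = torch.optim.Adam([
    {"params": text_encoder.parameters(), "lr": base_lr},
    {"params": struct_embed.parameters(), "lr": base_lr},
    {"params": [inducing_points], "lr": base_lr / 10},
])

def propagate(h, p_norm, hops=J):
    # Hedged multi-hop propagation: average the 0..J-hop features under a
    # row-normalized transition matrix; the paper's exact combination may differ.
    out, cur = h.clone(), h
    for _ in range(hops):
        cur = p_norm @ cur
        out = out + cur
    return out / (hops + 1)

# Stand-in adjacency, row-normalized into a transition matrix P.
adj = (torch.rand(N_NODES, N_NODES) < 0.01).float()
p_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)

# Final node embedding: concatenation of textual and structural parts (dim 200).
node_ids = torch.arange(N_NODES)
node_emb = torch.cat(
    [text_encoder(raw_text), propagate(struct_embed(node_ids), p_norm)], dim=-1
)
assert node_emb.shape[-1] == TEXT_DIM + STRUCT_DIM  # 200
```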
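
Similarly, a toy version of the downstream evaluation: the paper reports link prediction and node classification with a linear SVM, but scoring candidate edges by the inner product of their endpoint embeddings, the AUC metric, and the 80/20 split are conventional choices assumed here, and all data below are synthetic stand-ins.

```python
# Toy downstream evaluation on stand-in embeddings; see the lead-in for
# which choices are assumptions rather than reported details.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 200))  # stand-in for learned 200-dim node embeddings

# Link prediction: score held-out positive and sampled negative edges by the
# inner product of their endpoint embeddings, then report AUC.
pos = rng.integers(0, 1000, size=(500, 2))  # stand-in held-out edges
neg = rng.integers(0, 1000, size=(500, 2))  # stand-in sampled non-edges
scores = np.concatenate([
    (emb[pos[:, 0]] * emb[pos[:, 1]]).sum(axis=1),
    (emb[neg[:, 0]] * emb[neg[:, 1]]).sum(axis=1),
])
labels = np.concatenate([np.ones(500), np.zeros(500)])
print("link prediction AUC:", roc_auc_score(labels, scores))

# Node classification: fit a linear SVM on the embeddings, as the paper reports.
y = rng.integers(0, 7, size=1000)           # stand-in labels (e.g. Cora's 7 classes)
clf = LinearSVC().fit(emb[:800], y[:800])   # 80/20 split is an assumption
print("node classification accuracy:", clf.score(emb[800:], y[800:]))
```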