Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Representation Learning for Dynamic Graphs: A Survey

Authors: Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, Pascal Poupart

JMLR 2020

Research Type: Theoretical
LLM Response: In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets and highlight directions for future research.

Researcher Affiliation: Industry
LLM Response: Seyed Mehran Kazemi EMAIL, Rishab Goel EMAIL, Borealis AI, 310-6666 Saint Urbain, Montreal, QC, Canada

Pseudocode: No
LLM Response: The paper describes algorithms and models primarily through mathematical equations and descriptive text, without structured pseudocode or algorithm blocks. For example, Section 2.6, 'Sequence Models', presents equations for RNNs and LSTMs but not in pseudocode form.

Open Source Code: No
LLM Response: Section 7.3, 'Open-Source Software', lists implementations by other papers of the techniques discussed in the survey. The survey itself, being a review paper, does not describe or provide source code for a methodology of its own.

Open Datasets: Yes
LLM Response: To explore more network datasets, we refer readers to several popular network repositories such as the Stanford Large Network Dataset Collection (https://snap.stanford.edu/data/index.html), Network Repository (http://networkrepository.com/index.php), Social Computing Data Repository (http://socialcomputing.asu.edu/pages/datasets), LINQS (https://linqs.soe.ucsc.edu/data), UCI Network Data Repository (https://networkdata.ics.uci.edu/), CNetS Data Repository (http://cnets.indiana.edu/resources/data-repository/) and Koblenz Network Collection (http://konect.uni-koblenz.de/networks/). Section 7 of Cui et al. (2018) and Section 7.1 of Zhang et al. (2018b) are also good starting points to explore other datasets. Table 2 gives a brief summary of the datasets.

Dataset Splits: No
LLM Response: As a survey of existing research, the paper does not conduct its own experiments or define dataset splits for its own work; it discusses datasets and their properties in general terms rather than in specific experimental setups.

Hardware Specification: No
LLM Response: The paper is a survey and does not conduct its own experiments, so no hardware specifications are provided.

Software Dependencies: No
LLM Response: The paper is a survey of existing research and does not present its own experimental methodology, so no specific software dependencies or version numbers are listed.

Experiment Setup: No
LLM Response: The paper is a survey and does not conduct its own experiments, so no experiment setup details, such as hyperparameters or system-level training settings, are provided.