Large Scale Evolving Graphs with Burst Detection

Authors: Yifeng Zhao, Xiangwei Wang, Hongxia Yang, Le Song, Jie Tang

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance of the proposed algorithm is demonstrated on both a simulated dataset and a world-leading E-Commerce company dataset, showing that they are able to discriminate recurrent events from extremely bursty events in terms of action propensity. Experiments on both simulated dataset and real datasets are presented in Section 4 with discussions. Table 3: Results (%) comparison of different embedding methods. We use bold to highlight winners.
Researcher Affiliation | Collaboration | Yifeng Zhao (1), Xiangwei Wang (2), Hongxia Yang (2), Le Song (3) and Jie Tang (1); (1) Department of Computer Science and Technology, Tsinghua University; (2) DAMO Academy, Alibaba Group; (3) Ant Financial
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | BurstGraph (footnote 4: https://github.com/ericZhao93/BurstGraph)
Open Datasets | No | The paper mentions a 'Simulated Dataset' that the authors generate and an 'Alibaba Dataset' that they collect from an E-Commerce company, but it does not provide any concrete access information (link, DOI, or formal citation for public access) for either dataset.
Dataset Splits | No | The test set contains 10% of vertices selected at random, and negative links are randomly sampled to match the number of positive links for each link type (vanilla/burst). The paper specifies the test split but does not provide training or validation splits (percentages or counts), nor does it explicitly state their absence.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed machine specifications) used to run its experiments.
Software Dependencies | No | The paper mentions several software components and algorithms used (e.g., the Adam optimizer, GraphSAGE, DeepWalk) but does not specify their version numbers or the versions of any underlying programming languages or libraries.
Experiment Setup | Yes | We adopt Adam optimizer [Kingma and Ba, 2014] to optimize the objective and also introduce dropout with weight penalties into our proposed model. As expected, we penalize L1-norm of weight Ws to induce the sparsity of the output. We investigate the sensitivity of different hyperparameters in BurstGraph including importance weight λ and random variable dimension d.
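The Experiment Setup row above describes Adam optimization with an L1 penalty on a weight matrix Ws to induce sparsity. As a minimal sketch of that setup (the paper itself releases no such snippet, so the function name, hyperparameter defaults, and the subgradient treatment of the L1 term are all illustrative assumptions, not the authors' code):

```python
import numpy as np

def adam_step_with_l1(W, grad, m, v, t, lr=1e-3, beta1=0.9,
                      beta2=0.999, eps=1e-8, l1_lambda=1e-4):
    """One Adam update on weight matrix W, with the L1 penalty folded in
    as a subgradient term l1_lambda * sign(W). Hyperparameter defaults
    follow Kingma and Ba (2014); l1_lambda is a placeholder value."""
    grad = grad + l1_lambda * np.sign(W)       # L1 subgradient for sparsity
    m = beta1 * m + (1 - beta1) * grad         # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    W = W - lr * m_hat / (np.sqrt(v_hat) + eps)
    return W, m, v

# Toy usage: minimize 0.5 * ||W||^2, whose gradient is simply W.
W = np.array([[1.0, -2.0]])
m, v = np.zeros_like(W), np.zeros_like(W)
for t in range(1, 201):
    W, m, v = adam_step_with_l1(W, W.copy(), m, v, t)
```

After 200 steps the entries of W shrink toward zero, with the L1 term pushing small weights to exact sparsity faster than the quadratic loss alone would.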