Stochastic Nonparametric Event-Tensor Decomposition

Authors: Shandian Zhe, Yishuai Du

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluations on both synthetic and real-world datasets show that our model not only improves upon the predictive performance of existing methods, but also discovers interesting clusters underlying the data."
Researcher Affiliation | Academia | Shandian Zhe and Yishuai Du, School of Computing, University of Utah (zhe@cs.utah.edu, yishuai.du@utah.edu)
Pseudocode | No | The paper describes the "Doubly Stochastic Variational Expectation-Maximization Inference" algorithm textually in Section 4.2 but does not provide a formal pseudocode block or algorithm listing.
Open Source Code | No | The paper does not contain any explicit statement about the release of its source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | "To examine the predictive performance, we used three real-world datasets, Article (www.kaggle.com/gspmoreira/articles-sharing-reading-from-cit-deskdrop/data), UFO (www.kaggle.com/NUFORC/ufo-sightings/data) and 911 (www.kaggle.com/mchirico/montcoalert/data)."
Dataset Splits | Yes | "For training, we used the first 50K, 40K and 40K events in Article, UFO and 911 respectively, and the remaining 22.3K, 19.3K and 30.4K events for testing."
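The split quoted above is chronological: the first N events go to training, the remainder to testing. A minimal sketch of that split, using a placeholder event list since the paper's preprocessing code is not public (only the split sizes below come from the text):

```python
# Sketch of the chronological train/test split quoted in the review above.
# The synthetic event list is a stand-in; only the split sizes are from the paper.
def chronological_split(events, n_train):
    """Take the first n_train events for training, the rest for testing."""
    return events[:n_train], events[n_train:]

# Training sizes quoted for each dataset (number of events).
train_sizes = {"Article": 50_000, "UFO": 40_000, "911": 40_000}

# Example with the Article dataset: 50K train + 22.3K test = 72.3K events total.
article_events = list(range(72_300))  # placeholder events
train, test = chronological_split(article_events, train_sizes["Article"])
print(len(train), len(test))  # 50000 22300
```

Because the split is positional rather than random, no seed is needed to reproduce it; the ordering of the raw event files fully determines the partition.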
Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments, such as CPU or GPU models, memory, or cloud computing instances.
Software Dependencies | No | The paper mentions using Ada Delta for adjusting the step-size but does not provide specific software dependencies, such as programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions), that would be needed for replication.
Experiment Setup | Yes | "We varied the number of latent factors from {1, 2, 5, 8}. For both GP-PTF and our method, we used the ARD kernel and set the number of pseudo inputs to 100... The mini-batch sizes of tensor entries (for all the methods), and events (for our method only) are both set to 100. We used Ada Delta (Zeiler, 2012) to adjust the step-size for the stochastic gradient ascent, and ran 100 epochs for each method."
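The setup names Ada Delta (Zeiler, 2012) as the step-size rule for stochastic gradient ascent. A minimal sketch of that update follows; the rho and epsilon values are Zeiler's defaults, since the paper does not state which it used:

```python
import numpy as np

class AdaDelta:
    """Minimal AdaDelta update (Zeiler, 2012): per-parameter step sizes derived
    from running averages of squared gradients and squared updates."""

    def __init__(self, shape, rho=0.95, eps=1e-6):  # defaults from Zeiler, not the paper
        self.rho, self.eps = rho, eps
        self.acc_grad = np.zeros(shape)    # running average of g^2
        self.acc_update = np.zeros(shape)  # running average of dx^2

    def step(self, grad):
        self.acc_grad = self.rho * self.acc_grad + (1 - self.rho) * grad ** 2
        delta = np.sqrt((self.acc_update + self.eps)
                        / (self.acc_grad + self.eps)) * grad
        self.acc_update = self.rho * self.acc_update + (1 - self.rho) * delta ** 2
        return delta  # subtract for descent, add for ascent

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
x = np.array([1.0])
opt = AdaDelta(x.shape)
for _ in range(500):
    x = x - opt.step(2 * x)
```

One reason the authors may have chosen AdaDelta is that it needs no hand-tuned global learning rate, which matters when the same schedule must serve mini-batches of both tensor entries and events.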