Doubly Decomposing Nonparametric Tensor Regression

Authors: Masaaki Imaizumi, Kohei Hayashi

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We verify the theoretical convergence rate by simulation and demonstrate the empirical performance for real application in network science." ... 6. Experiments ... 6.1. Synthetic Data ... 6.4. Prediction of Epidemic Spreading
Researcher Affiliation | Academia | Masaaki Imaizumi (INSOU11@HOTMAIL.COM), University of Tokyo; Kohei Hayashi (HAYASHI.KOHEI@GMAIL.COM), National Institute of Advanced Industrial Science and Technology, and JST ERATO Kawarabayashi Large Graph Project
Pseudocode | Yes | Section 3.2 ("Estimation Method and Algorithm") summarizes the entire procedure, with Q denoting the number of random samples: Step 1: CP decomposition of input tensors ... Step 2: Construction of the GP prior distribution π(F) ... Step 3: Computation of likelihood ... Step 4: Prediction with the predictive distribution. (A hedged code sketch of these four steps follows the table.)
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "As a real network, we use the Enron email dataset (Klimt & Yang, 2004), which is a collection of emails."
Dataset Splits | No | The paper mentions optimizing parameters "in the training phase" and using "one-half of the samples" for testing, but it does not explicitly describe a separate validation split or cross-validation strategy.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or operating systems used in the experiments.
Experiment Setup | Yes | "In all experiments, we generate datasets by the data generating process (dgp) as Y = f(X) + u and fix the noise variance as σ² = 1. We set the size of X ∈ R^{20×20} ... varying the sample size as n ∈ {100, 200, 300, 400, 500} ... For TGP and AMNR, we optimize the parameter of the kernel function by grid search in the training phase." (A hedged reconstruction of this setup appears after the table.)
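
To make the four quoted steps concrete, here is a minimal Python sketch, assuming NumPy and TensorLy are available. It is not the authors' implementation: it collapses the paper's additive construction over component functions into a single GP regression on concatenated CP factors, and the CP rank, the RBF kernel, and all function names (cp_features, rbf_kernel, gp_fit_predict) are choices of this sketch.

```python
# Hypothetical sketch of the four quoted steps; NOT the paper's AMNR code.
import numpy as np
from tensorly.decomposition import parafac  # assumes TensorLy is installed

def cp_features(tensors, rank):
    """Step 1: CP-decompose each input tensor and flatten its factors."""
    feats = []
    for X in tensors:
        weights, factors = parafac(np.asarray(X, dtype=float), rank=rank,
                                   normalize_factors=True)
        feats.append(np.concatenate([f.ravel() for f in factors] + [weights]))
    return np.vstack(feats)

def rbf_kernel(A, B, length_scale):
    """Step 2: GP prior via an RBF covariance on the CP features
    (a simplification of the paper's prior pi(F) over component functions)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / length_scale ** 2)

def gp_fit_predict(F_train, y_train, F_test, length_scale=1.0, noise_var=1.0):
    """Steps 3-4: Gaussian likelihood with noise variance sigma^2,
    then the posterior predictive mean at the test inputs."""
    K = rbf_kernel(F_train, F_train, length_scale)
    K += noise_var * np.eye(len(y_train))
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    K_star = rbf_kernel(F_test, F_train, length_scale)
    return K_star @ alpha
```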
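
The quoted synthetic setup can likewise be reconstructed only under assumptions. The stand-in function f_true, the CP rank, and the use of the GP log marginal likelihood as the grid-search criterion are mine; the paper's exact test functions and selection rule are not quoted in the table. The helpers cp_features, rbf_kernel, and gp_fit_predict come from the sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                    # sample size, from {100, ..., 500}
X = rng.standard_normal((n, 20, 20))       # inputs of size 20 x 20
f_true = lambda T: np.tanh(T.sum(axis=(1, 2)) / 20.0)  # hypothetical stand-in
y = f_true(X) + rng.standard_normal(n)     # Y = f(X) + u, with sigma^2 = 1

# One half of the samples is held out for testing, as the table notes.
X_tr, y_tr = X[: n // 2], y[: n // 2]
X_te, y_te = X[n // 2:], y[n // 2:]

def log_marginal_likelihood(F, y, length_scale, noise_var=1.0):
    """GP log marginal likelihood; my stand-in for the selection criterion."""
    K = rbf_kernel(F, F, length_scale) + noise_var * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y)
                   + logdet + len(y) * np.log(2 * np.pi))

# Grid search over the kernel length scale in the training phase only.
F_tr = cp_features(X_tr, rank=3)           # helper from the previous sketch
best_ls = max([0.1, 0.5, 1.0, 2.0, 5.0],
              key=lambda ls: log_marginal_likelihood(F_tr, y_tr, ls))

F_te = cp_features(X_te, rank=3)
mse = np.mean((gp_fit_predict(F_tr, y_tr, F_te, best_ls) - y_te) ** 2)
print(f"selected length scale: {best_ls}, test MSE: {mse:.3f}")
```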