Tensor Gaussian Process with Contraction for Multi-Channel Imaging Analysis

Authors: Hu Sun, Ward Manchester, Meng Jin, Yang Liu, Yang Chen

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach via extensive simulation studies and by applying it to the solar flare forecasting problem.
Researcher Affiliation | Collaboration | (1) Department of Statistics, University of Michigan, Ann Arbor; (2) Department of Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor; (3) Solar & Astrophysics Lab, Lockheed Martin; (4) W.W. Hansen Experimental Physics Laboratory, Stanford University.
Pseudocode | Yes | Algorithm 1: Alternating Proximal Gradient Descent Algorithm for Tensor-GPST Estimation. (A generic sketch of this optimization pattern appears after the table.)
Open Source Code | Yes | Our code is available on GitHub at https://github.com/husun0822/TensorGPST.
Open Datasets | No | The paper describes a custom solar flare dataset collected by the authors, plus simulated data. For the solar flare dataset, it states: "In our dataset, we have 1,329 flare samples from year 2010 to 2018." No link, DOI, repository name, or formal citation is provided for public access to this dataset.
Dataset Splits | No | The paper explicitly states splitting data into training and testing sets (e.g., "75% for training and 25% for testing") but does not mention a distinct validation set or its split percentage. (A split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for its experiments; it only refers generally to "machine learning algorithms".
Software Dependencies | No | The paper mentions the GPyTorch and TensorLy-Torch packages in Python but does not specify their version numbers, which are required for reproducible software dependencies. (A version-logging sketch appears after the table.)
Experiment Setup | Yes | We set the latent tensor dimension as 3 × 3 × 3 for GPST and SE+TC, the rank of K1, K2, K3 as 3 for GP, the CP rank as 9 for CP, and the multi-linear rank as 3 × 3 × 3 for Tucker, such that the low-rankness is comparable across all methods. We select the regularization tuning parameter for all models with hyperparameters by 5-fold cross-validation. (A cross-validation sketch appears after the table.)
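
On the pseudocode row: Algorithm 1 in the paper is an alternating proximal gradient descent routine for Tensor-GPST estimation. The sketch below shows only the generic alternating-proximal-gradient pattern, not the paper's method: the Tensor-GPST likelihood, contraction matrices, and penalty are not reproduced, and `soft_threshold` is a placeholder proximal operator chosen for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of an L1 penalty; a stand-in for the paper's
    # actual regularizer, which is not reproduced here.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def alternating_proximal_gradient(blocks, grad_fns, step=1e-2, lam=1e-3, n_iter=200):
    # Cycle over parameter blocks: take a gradient step on the smooth
    # loss with the other blocks held fixed, then apply a proximal step
    # for the penalty. Algorithm 1 in the paper specializes this pattern
    # to the Tensor-GPST objective.
    for _ in range(n_iter):
        for k, grad_fn in enumerate(grad_fns):
            g = grad_fn(blocks)  # gradient of the smooth loss w.r.t. block k
            blocks[k] = soft_threshold(blocks[k] - step * g, step * lam)
    return blocks

# Toy usage on a single block: an L1-penalized least-squares problem.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
grads = [lambda blocks: 2.0 * A.T @ (A @ blocks[0] - b)]
x_hat = alternating_proximal_gradient([np.zeros(5)], grads)
```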
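On the dataset-splits row: the 75/25 train/test split the paper quotes is straightforward to reproduce with a fixed seed. A minimal sketch with synthetic stand-in data (the 1,329 flare samples are not publicly posted, and the variable names here are hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in features and labels sized like the paper's flare dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1329, 10))
y = rng.integers(0, 2, size=1329)

# 75/25 train/test split with a fixed seed, matching the percentages
# the paper reports; no separate validation split is described.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
```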
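On the software-dependencies row: since the paper names GPyTorch and TensorLy-Torch without versions, a reproducible setup would record them explicitly. A minimal sketch, assuming the packages' standard import names:

```python
# Log the exact versions of the packages the paper names, since the
# paper itself does not pin them.
import torch
import gpytorch
import tltorch  # TensorLy-Torch

for name, mod in [("torch", torch), ("gpytorch", gpytorch), ("tensorly-torch", tltorch)]:
    print(f"{name}=={mod.__version__}")
```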
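On the experiment-setup row: the regularization tuning parameter is selected by 5-fold cross-validation. A minimal sketch of that selection loop, where `fit(X, y, lam)` and `score(model, X, y)` are hypothetical placeholders for the paper's model-specific training and evaluation routines (the actual models and search grid are not specified):

```python
import numpy as np
from sklearn.model_selection import KFold

def select_lambda(fit, score, X, y, grid, seed=0):
    # Return the regularization strength with the best mean score
    # across 5 folds.
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    mean_scores = []
    for lam in grid:
        fold_scores = [
            score(fit(X[tr], y[tr], lam), X[va], y[va])
            for tr, va in kf.split(X)
        ]
        mean_scores.append(float(np.mean(fold_scores)))
    return grid[int(np.argmax(mean_scores))]
```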