Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Shifting Time: Time-series Forecasting with Khatri-Rao Neural Operators

Authors: Srinath Dama, Kevin Course, Prasanth B. Nair

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive numerical studies across diverse temporal and spatio-temporal benchmarks demonstrate that our approach achieves state-of-the-art or competitive performance with leading methods.
Researcher Affiliation | Academia | 1 Institute for Aerospace Studies, University of Toronto, ON, Canada. Correspondence to: Srinath Dama <EMAIL>, Kevin Course <EMAIL>, Prasanth B. Nair <EMAIL>.
Pseudocode | Yes | D. Algorithm for Khatri-Rao structured matrix-vector products. In this section, we present an algorithm to efficiently compute the matrix-vector product associated with the Khatri-Rao product structured matrix defined in (8), without the need to explicitly construct the full matrix of size qN × pN. ... def khatri_rao_mmprod(...)
Open Source Code | Yes | The codebase used to generate the results is available at https://github.com/srinathdama/Shifting Time.
Open Datasets | Yes | We demonstrate the efficacy of the proposed approach on a suite of challenging test cases, including shallow water simulation (Kissas et al., 2022), a climate modeling problem (Kissas et al., 2022), a set of challenging irregularly sampled time-series benchmarks, the Darts datasets (Herzen et al., 2022), and the M4 dataset (Makridakis et al., 2020)... The dataset is taken from Kissas et al. (2022), which is based on the Physical Sciences Laboratory meteorological data (Kalnay et al., 1996); see https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.surface.html... The MIMIC dataset (Johnson et al., 2016), a publicly available clinical database... The United States Historical Climatology Network (USHCN) dataset (Menne et al., 2015) is utilized... The Crypto (Ticchi et al., 2021) dataset... URL https://kaggle.com/competitions/g-research-crypto-forecasting... The Player Trajectory dataset... https://github.com/linouk23/NBA-Player-Movements
Dataset Splits | Yes | For all the datasets in Darts, we used a 60%-20%-20% train-validation-test split. ... For all M4 datasets, the last 10% of the data for each time series in the training data is used as validation data. ... For these two datasets [Crypto and Player Trajectory], we used the same train-test splits as Wang et al. (2023). Similar to the M4 datasets, 10% of the data corresponding to each time series in the training data is used for validation. ... The training and testing datasets each consist of 1000 simulations with different initial conditions. ... The training data consists of daily temperature and pressure from 2000 to 2005 (1825 days)... The test data contains observations from the years 2005 to 2010...
Hardware Specification | Yes | All the computations were carried out on a single Nvidia RTX 4090 with 24 GB memory.
Software Dependencies | No | The AdamW (Loshchilov & Hutter, 2019) optimizer is used for training all the models. The paper does not mention specific versions of the programming languages, frameworks, or libraries needed for replication.
Experiment Setup | Yes | The default KRNO architecture used in all experiments has 3 kernel integral layers with 20 channels each, lifting and projection layers are parametrized by MLPs with one hidden layer containing 128 hidden units, and the kernels in the integral layers are parametrized by MLPs with 3 hidden layers. The component-wise kernel function used in KRNO is parameterized by a neural network with three hidden layers... The AdamW optimizer is used for training and the learning rate is set to 10⁻³.
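The Pseudocode row above refers to the paper's Appendix D routine khatri_rao_mmprod, which is not reproduced here. As a minimal NumPy sketch of the underlying idea, the matrix-vector product with a Khatri-Rao (columnwise Kronecker) structured matrix A ⊙ B can be computed via the standard identity (A ⊙ B) x = vec(B diag(x) Aᵀ), avoiding explicit construction of the large structured matrix; the function name below is ours, not the paper's.

```python
import numpy as np

def khatri_rao_matvec(A, B, x):
    """Compute (A ⊙ B) x without forming the (q*p, N) Khatri-Rao matrix.

    Uses the identity (A ⊙ B) x = vec(B diag(x) A^T) with column-major vec,
    where A is (q, N), B is (p, N), and x has length N. Cost is O(N p q)
    time but only O(p q) extra memory, versus O(N p q) memory for the
    explicit structured matrix.
    """
    # (B * x) scales column n of B by x[n]; the result is the p-by-q
    # matrix B diag(x) A^T, flattened in column-major (Fortran) order.
    return ((B * x) @ A.T).flatten(order="F")
```

The column-major flattening matters: column i of A ⊙ B is kron(A[:, i], B[:, i]), whose entry at index j*p + k is A[j, i] * B[k, i], which is exactly entry (k, j) of B diag(x) Aᵀ under Fortran-order vectorization.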
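The Experiment Setup row states that training uses AdamW with a learning rate of 10⁻³. For reference, a minimal NumPy sketch of one AdamW update (the decoupled weight-decay variant of Loshchilov & Hutter, 2019); this is the standard algorithm, not the paper's training code, and the default hyperparameters below other than the learning rate are our assumptions:

```python
import numpy as np

def adamw_step(theta, grad, state, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=1e-2):
    """One AdamW update with decoupled weight decay.

    state = (m, v, t): first/second moment estimates and step count.
    Weight decay is applied directly to theta rather than folded into
    the gradient, which is what distinguishes AdamW from Adam + L2.
    """
    m, v, t = state
    t += 1
    m = betas[0] * m + (1 - betas[0]) * grad
    v = betas[1] * v + (1 - betas[1]) * grad ** 2
    m_hat = m / (1 - betas[0] ** t)  # bias correction
    v_hat = v / (1 - betas[1] ** t)
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, (m, v, t)
```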