Decomposing Temporal High-Order Interactions via Latent ODEs

Authors: Shibo Li, Robert Kirby, Shandian Zhe

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluation, we examined THIS-ODE in both simulation and real-world applications. The simulation experiments show that THIS-ODE can accurately capture the underlying complex dynamics, and the learned representations further reflect the hidden structures of the objects. We then examined THIS-ODE in four real-world applications. In terms of prediction accuracy, THIS-ODE nearly always outperforms the competing methods by a large margin when predicting long-term interaction results, where the test time frame does not overlap with the training time frame.
Researcher Affiliation | Academia | ¹School of Computing, University of Utah; ²Scientific Computing and Imaging (SCI) Institute, University of Utah. Correspondence to: Shandian Zhe <zhe@cs.utah.edu>.
Pseudocode | Yes | Algorithm 1 (THIS-ODE)
Open Source Code | No | For our method THIS-ODE, we used the Torchdiffeq library (https://github.com/rtqichen/torchdiffeq) to solve the ODEs, with the explicit Runge-Kutta method of order 5 and a fixed step size of 10^-4.
Open Datasets | Yes | (1) Fit Record, workout logs of Endomondo users in outdoor exercises (https://cseweb.ucsd.edu/~jmcauley/datasets.html#endomondo); ... (2) Beijing Air, a two-way interaction dataset (https://archive.ics.uci.edu/ml/datasets/Beijing+Multi-Site+Air-Quality+Data); ... (3) Server Room, temperature data (https://zenodo.org/record/3610078#.YeEHmljMLAx); ... (4) Indoor Condition, house conditions data (https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction)
Dataset Splits | No | For interpolation, we randomly sampled 80% of the interactions and used the first 1/3 and last 1/3 of their interaction results for training, then tested on the remaining results in the middle. For extrapolation, we used the first 1/2 of the interaction results for training and tested on the remaining half.
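The split protocol quoted above can be sketched as follows. This is a minimal illustration over hypothetical per-interaction time series (lists of observations ordered by time); the paper's actual data-loading code is not available, so function names and the sampling details are assumptions.

```python
import random

def interpolation_split(interactions, sample_frac=0.8, seed=0):
    """For each randomly sampled interaction, train on the first and last
    thirds of its time-ordered results and test on the middle third,
    matching the quoted interpolation protocol."""
    rng = random.Random(seed)
    sampled = rng.sample(interactions, int(sample_frac * len(interactions)))
    train, test = [], []
    for series in sampled:
        n = len(series)
        a, b = n // 3, 2 * n // 3
        train.extend(series[:a] + series[b:])  # first 1/3 + last 1/3
        test.extend(series[a:b])               # middle 1/3
    return train, test

def extrapolation_split(series):
    """Train on the first half of an interaction's results, test on the rest."""
    n = len(series)
    return series[:n // 2], series[n // 2:]
```

For example, `extrapolation_split(list(range(10)))` trains on the first five observations and tests on the last five, so the test time frame never overlaps the training time frame.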
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or computing infrastructure) are provided in the paper.
Software Dependencies | No | All these methods were implemented with PyTorch (Paszke et al., 2019). For our method THIS-ODE, we used the Torchdiffeq library to solve the ODEs...
Experiment Setup | Yes | All these methods were implemented with PyTorch (Paszke et al., 2019). For our method THIS-ODE, we used the Torchdiffeq library (https://github.com/rtqichen/torchdiffeq) to solve the ODEs, with the explicit Runge-Kutta method of order 5 and a fixed step size of 10^-4. For the initial state, we simply used the CP form for β (see (3)). Following (Zhe et al., 2016b), we used the Square-Exponential (SE) kernel for GPTF-time and a sparse variational GP approximation with 50 pseudo inputs for efficient inference. For both NTF-time and THIS-ODE, we used a one-layer neural network with 50 neurons and tanh activation. We ran all the methods with ADAM optimization (Kingma and Ba, 2014), learning rate 10^-3, for 500 epochs, which is sufficient for convergence.
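The quoted setup (a one-hidden-layer tanh network defining the ODE dynamics, integrated with a fixed-step explicit Runge-Kutta scheme at step size 10^-4) can be sketched in plain NumPy. This is a stand-in, not the authors' implementation: the paper uses Torchdiffeq's order-5 explicit scheme, while classical RK4 is used here for brevity; the state dimension `D`, weight scales, and integration horizon are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network with 50 tanh units, as described in the setup.
# D is an ASSUMED latent state dimension (not specified in the quote).
D, H = 8, 50
W1 = rng.normal(scale=0.1, size=(H, D + 1))  # +1: time appended to the state
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(D, H))
b2 = np.zeros(D)

def f(t, x):
    """Network-parameterized time derivative dx/dt = NN([x, t])."""
    h = np.tanh(W1 @ np.concatenate([x, [t]]) + b1)
    return W2 @ h + b2

def rk4_step(f, t, x, h):
    """One classical RK4 step (the paper uses an order-5 explicit scheme)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, t0, t1, step):
    """Fixed-step integration from t0 to t1, mirroring the fixed 10^-4 step."""
    t, x = t0, x0
    while t < t1 - 1e-12:
        h = min(step, t1 - t)
        x = rk4_step(f, t, x, h)
        t += h
    return x

x0 = rng.normal(size=D)
xT = integrate(f, x0, 0.0, 0.01, 1e-4)  # fixed step size 10^-4
```

In the full method, the initial state `x0` would come from the CP-form embeddings and the network weights would be trained end-to-end with Adam (learning rate 10^-3, 500 epochs), backpropagating through the solver.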