Inherently Interpretable Time Series Classification via Multiple Instance Learning

Authors: Joseph Early, Gavin Cheung, Kurt Cutajar, Hanting Xie, Jas Kandola, Niall Twomey

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate MILLET on 85 UCR TSC datasets and also present a novel synthetic dataset that is specially designed to facilitate interpretability evaluation. On these datasets, we show MILLET produces sparse explanations quickly that are of higher quality than other well-known interpretability methods.
Researcher Affiliation | Collaboration | Joseph Early, Gavin KC Cheung, Kurt Cutajar, Hanting Xie, Jas Kandola, & Niall Twomey. Corresponding authors: J.A.Early@soton.ac.uk; njtwomey@amazon.co.uk. Affiliations: University of Southampton, UK (work completed during an internship at Amazon Prime Video, UK); Amazon Prime Video, UK.
Pseudocode | No | The paper provides architectural details in tables but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | To the best of our knowledge, our work with MILLET, which is available on GitHub, is the first to develop general MIL methods for TSC and apply them to an extensive variety of domains. The code for this project was implemented in Python 3.8, with PyTorch as the main library for machine learning. A standalone code release is available at: https://github.com/JAEarly/MILTimeSeriesClassification. This includes our synthetic dataset and the ability to use our plug-and-play MILLET models.
Open Datasets | Yes | We evaluate MILLET on 85 UCR TSC datasets and also present a novel synthetic dataset that is specially designed to facilitate interpretability evaluation. For the UCR datasets, we used the original train/test splits as provided from the archive source (https://www.cs.ucr.edu/~eamonn/time_series_data_2018/). This includes our synthetic dataset and the ability to use our plug-and-play MILLET models.
Dataset Splits | No | No validation datasets were used during training or evaluation.
Hardware Specification | Yes | Model training was performed using an NVIDIA Tesla V100 GPU with 16GB of VRAM and CUDA v12.0 to enable GPU support.
Software Dependencies | Yes | The code for this project was implemented in Python 3.8, with PyTorch as the main library for machine learning.
Experiment Setup | Yes | We used the Adam optimiser with a fixed learning rate of 0.001 for 1500 epochs, and trained to minimise cross-entropy loss. Training was performed in an end-to-end manner, i.e. all parts of the networks (including the backbone feature extraction layers) were trained together, and no pre-training or fine-tuning was used. Dropout (if used) was set to 0.1, and batch size was set to min(16, num_training_time_series / 10) to account for datasets with small training set sizes.
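The reported experiment setup can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' actual training code: the function names (`batch_size_for`, `make_trainer`, `train`) are our own, and integer rounding with a floor of 1 in the batch-size rule is an assumption, since the paper only states the formula min(16, num_training_time_series / 10).

```python
import torch
import torch.nn as nn


def batch_size_for(num_train: int, cap: int = 16) -> int:
    # Batch-size rule from the reported setup: min(16, num_train / 10).
    # Integer division and the floor of 1 are assumptions for small datasets.
    return max(1, min(cap, num_train // 10))


def make_trainer(model: nn.Module, lr: float = 1e-3):
    # Adam with a fixed learning rate of 0.001, minimising cross-entropy,
    # as stated in the experiment setup. No scheduler, pre-training, or
    # fine-tuning is used; the whole network trains end to end.
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    return optimiser, criterion


def train(model: nn.Module, loader, epochs: int = 1500):
    # End-to-end training loop over the reported 1500 epochs.
    optimiser, criterion = make_trainer(model)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimiser.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimiser.step()
```

For example, a UCR dataset with 50 training series would use a batch size of 5, while one with 1,000 series would be capped at 16.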