REST: Efficient and Accelerated EEG Seizure Analysis through Residual State Updates

Authors: Arshia Afzal, Grigorios Chrysos, Volkan Cevher, Mahsa Shoaran

ICML 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Our model demonstrates high accuracy in both seizure detection and classification tasks. Notably, REST achieves a remarkable 9-fold acceleration in inference speed compared to state-of-the-art models, while simultaneously demanding substantially less memory than the smallest model employed for this task.
Researcher Affiliation Academia 1) INL, EPFL, Switzerland; 2) LIONS, EPFL, Switzerland; 3) Department of Electrical and Computer Engineering, University of Wisconsin-Madison, USA.
Pseudocode No The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code Yes Visit our web site at https://arshiaafzal.github.io/REST/
Open Datasets Yes We used two extensive publicly available datasets for the seizure detection and classification tasks: the Temple University Hospital EEG Seizure Corpus (TUSZ) (Obeid & Picone, 2016; Shah et al., 2018) and the Children's Hospital Boston (CHB-MIT) (Goldberger et al., 2000) dataset.
Dataset Splits Yes The original TUSZ Train-set was randomly split into training and validation sets with a ratio of 90/10. For the CHB-MIT dataset, since predefined splits for training, evaluation, and testing are not provided, we randomly selected 80% of the data for training, 10% for evaluation, and 10% for testing.
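The splitting procedure described above (90/10 for the TUSZ Train-set; 80/10/10 for CHB-MIT) can be sketched as follows. This is an illustrative helper under stated assumptions, not the authors' actual splitting code; `split_indices` and the example counts are hypothetical.

```python
import random

def split_indices(n, fractions, seed=0):
    """Randomly partition indices 0..n-1 into groups of the given fractions.
    Illustrative sketch; not the paper's actual data pipeline."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    splits, start = [], 0
    for frac in fractions[:-1]:
        size = int(round(frac * n))
        splits.append(idx[start:start + size])
        start += size
    splits.append(idx[start:])  # last split absorbs any rounding remainder
    return splits

# TUSZ: the original Train-set is split 90/10 into train/validation
tusz_train, tusz_val = split_indices(1000, [0.9, 0.1])

# CHB-MIT: no predefined splits, so 80/10/10 train/evaluation/test
chb_train, chb_eval, chb_test = split_indices(1000, [0.8, 0.1, 0.1])
```

Shuffling with a fixed seed before slicing keeps the partition reproducible while still being random with respect to recording order.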
Hardware Specification Yes We conducted training on a single NVIDIA A100 GPU with a batch size of 128 EEG clips.
Software Dependencies No The paper mentions using 'PyTorch' but does not provide specific version numbers for software libraries or frameworks.
Experiment Setup Yes We optimized the following hyperparameters for REST based on the lowest validation error: a) number of neurons in each graph convolution layer within the range [16, 32, 64]; b) initial learning rate within the range [5e-4, 1e-4]; c) success probability of the random binary mask within [0.1, 0.3, 0.5, 0.7, 1]. For multi-update REST, the number of updates for each time point was a randomly selected integer from the interval [1, 10]. We conducted training for 500 epochs using a MultiStep learning-rate scheduler. Five experiments were run in PyTorch with different random seeds.
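The setup above can be sketched in plain Python. The grid values (hidden units, learning rates, mask probabilities, the [1, 10] update range, 500 epochs) come from the paper; the helper names, the scheduler milestones, and the mask implementation are assumptions made for illustration.

```python
import random

# Search space reported in the paper; function names and milestone values
# below are illustrative assumptions, not the authors' code.
HIDDEN_UNITS = [16, 32, 64]             # neurons per graph-convolution layer
LEARNING_RATES = [5e-4, 1e-4]           # initial learning rates
MASK_PROBS = [0.1, 0.3, 0.5, 0.7, 1.0]  # success probability of the binary mask
NUM_EPOCHS = 500

def random_binary_mask(n, p, rng):
    """Bernoulli(p) mask: each entry is 1 with success probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sample_num_updates(rng):
    """Multi-update REST: number of state updates per time point,
    a random integer drawn from [1, 10]."""
    return rng.randint(1, 10)

def multistep_lr(base_lr, epoch, milestones=(250, 400), gamma=0.1):
    """MultiStep schedule: decay the learning rate by gamma at each
    milestone epoch (milestones are assumed; the paper does not state them)."""
    return base_lr * gamma ** sum(1 for m in milestones if epoch >= m)

# One draw from the hyperparameter grid
rng = random.Random(0)
config = {
    "hidden": rng.choice(HIDDEN_UNITS),
    "lr": rng.choice(LEARNING_RATES),
    "mask_p": rng.choice(MASK_PROBS),
}
```

A grid search over these lists (selecting by lowest validation error), repeated over five seeds, matches the protocol the row describes.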