A deep convolutional neural network that is invariant to time rescaling
Authors: Brandon G. Jacques, Zoran Tiganj, Aakash Sarkar, Marc Howard, Per Sederberg
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared the performance of SITHCon to a Temporal Convolution Network (TCN) on classification and regression problems with both univariate and multivariate time series. |
| Researcher Affiliation | Academia | (1) Department of Psychology, University of Virginia, Charlottesville, VA, United States; (2) Department of Computer Science, Indiana University, Bloomington, IN, United States; (3) Department of Psychological and Brain Sciences, Boston University, Boston, MA, United States. |
| Pseudocode | No | The paper describes the network architecture and operations using mathematical equations and diagrams, but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using 'the TCN implementation supplied by Bai et al. (2018) at https://github.com/locuslab/TCN' for the baseline, but does not provide access to source code for SITHCon, the method introduced in this paper. |
| Open Datasets | Yes | Finally, the Audio MNIST task (Becker et al., 2019) requires the networks to classify spoken digits 0-9 in recordings by many different speakers. |
| Dataset Splits | No | The paper describes how training and test data were used, for example, for Audio MNIST: 'We created a training dataset consisting of 45 out of 50 stimuli for each digit from all speakers. The remaining 5 stimuli per digit from each speaker were used for testing.' However, it does not provide specific details about a separate validation split or a full train/validation/test split. A minimal split sketch follows this table. |
| Hardware Specification | No | The paper states: 'The authors acknowledge Research computing at The University of Virginia for providing computational resources and technical support that have contributed to the results reported within this publication.' However, it does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a specific implementation for TCN ('https://github.com/locuslab/TCN') but does not list general software dependencies with specific version numbers (e.g., Python version, library versions like TensorFlow/PyTorch). |
| Experiment Setup | Yes | In all experiments, SITHCon was similarly configured, with two SITHCon layers, each with 400 values of τ log-spaced from 1 to 3000 or 4000 and a k of 35. The width of the convolution kernels was set to 23 with a dilation of 2. ... The TCN was also largely similar across experiments, with 8 total layers, varying only in the number of input channels and the kernel width. The comparable parameters between the TCN and SITHCon networks are listed in Table 1 of the paper. An illustrative configuration sketch follows this table. |
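
The Experiment Setup row above reports concrete hyperparameters (two SITHCon layers, 400 values of τ log-spaced from 1 to 3000 or 4000, k of 35, kernel width 23, dilation 2). The sketch below only illustrates how such a configuration might be assembled; the function and parameter names (`log_spaced_taus`, `tau_min`, `tau_max`, `n_taus`, `sithcon_config`) are assumptions for illustration, not the authors' SITHCon API.

```python
import numpy as np

# Illustrative sketch only: the numeric values come from the Experiment Setup
# row above, but the function and dictionary names are hypothetical, not the
# paper's code.
def log_spaced_taus(tau_min=1.0, tau_max=3000.0, n_taus=400):
    """Return n_taus values of tau spaced logarithmically from tau_min to tau_max."""
    return np.logspace(np.log10(tau_min), np.log10(tau_max), n_taus)

taus = log_spaced_taus()          # 400 taus from 1 to 3000 (4000 in some experiments)
sithcon_config = {
    "n_sithcon_layers": 2,        # two SITHCon layers
    "k": 35,                      # k value reported in the paper
    "conv_kernel_width": 23,      # width of the convolution kernels
    "conv_dilation": 2,           # dilation of the convolution kernels
}
print(taus[0], taus[-1], sithcon_config)
```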
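
Similarly, the Dataset Splits row quotes a per-speaker, per-digit 45/5 train/test division for Audio MNIST. The following is a minimal sketch of such a split, assuming recordings are available as (speaker_id, digit, clip) tuples; the paper does not give its data-loading code or say how the 45 stimuli were selected, so this is illustrative only.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a 45/5 per-(speaker, digit) split as described in the
# Dataset Splits row; the paper's actual data-loading code is not provided.
def split_audio_mnist(recordings, n_train_per_cell=45, seed=0):
    rng = random.Random(seed)
    by_cell = defaultdict(list)
    for speaker_id, digit, clip in recordings:
        by_cell[(speaker_id, digit)].append((speaker_id, digit, clip))

    train, test = [], []
    for items in by_cell.values():
        rng.shuffle(items)                      # assumed random choice of the 45
        train.extend(items[:n_train_per_cell])  # 45 stimuli per digit per speaker
        test.extend(items[n_train_per_cell:])   # remaining 5 held out for testing
    return train, test
```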