CKConv: Continuous Kernel Convolution For Sequential Data
Authors: David W. Romero, Anna Kuzina, Erik J Bekkers, Jakub Mikolaj Tomczak, Mark Hoogendoorn
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTS We validate our approach against several existing models and across several tasks selected from the corresponding papers. Specifically, we benchmark its ability to handle long-term dependencies, data at different resolutions and irregularly-sampled data. |
| Researcher Affiliation | Academia | David W. Romero1, Anna Kuzina1, Erik J. Bekkers2, Jakub M. Tomczak1, Mark Hoogendoorn1 1 Vrije Universiteit Amsterdam 2 University of Amsterdam The Netherlands {d.w.romeroguzman, a.kuzina}@vu.nl |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/dwromero/ckconv. In order to make our paper reproducible, we have released the source code used in our experiments to the public. |
| Open Datasets | Yes | The MNIST dataset (Le Cun et al., 1998) consists of 70K grayscale 28×28 handwritten digits divided into training and test sets of 60K and 10K samples, respectively. ... The Character Trajectories dataset is part of the UEA time series classification archive (Bagnall et al., 2018). ... The Speech Commands dataset (Warden, 2018) consists of 105809 one-second audio recordings... |
| Dataset Splits | Yes | The Penn Tree Bank (PTB) (Marcinkiewicz, 1994) is a language corpus which consists of 5,095K characters for training, 396K for validation and 446K for testing. |
| Hardware Specification | Yes | We utilize wandb (Biewald, 2020) to log our results, and use NVIDIA TITAN RTX GPUs throughout our experiments. |
| Software Dependencies | No | The paper mentions that the code is implemented in PyTorch and uses wandb, but it does not specify version numbers for these software components. |
| Experiment Setup | Yes | Table 8: Hyperparameter specifications of the best performing CKCNN models, covering the tasks Copy Memory, Adding Problem, sMNIST, pMNIST, sCIFAR10, CT, SC, SC_RAW and PTB (Small/Big variants). Reported hyperparameters include epochs, batch size, optimizer (Adam), learning rate, number of blocks, hidden size, ω0, dropout, input dropout, embedding dropout, weight dropout, weight decay, scheduler, patience, and scheduler decay. |
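The scheduler entries reported in Table 8 (a scheduler with a patience and a decay factor) describe a plateau-style learning-rate schedule. A minimal sketch of that logic is below; the class name and the concrete values are hypothetical, since the table's actual numbers are elided here:

```python
class PlateauLRScheduler:
    """Decay the learning rate when validation loss stops improving.

    Illustrates the patience/decay-factor scheduler style reported in
    Table 8; the default values are hypothetical, not the paper's.
    """

    def __init__(self, lr=1e-3, patience=5, decay=0.5):
        self.lr = lr
        self.patience = patience  # epochs without improvement to tolerate
        self.decay = decay        # multiplicative LR decay factor
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch with the validation loss; returns the LR."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.decay
                self.bad_epochs = 0
        return self.lr
```

For example, with `patience=1` and `decay=0.1`, two consecutive epochs without improvement trigger a tenfold learning-rate reduction; PyTorch's `ReduceLROnPlateau` provides equivalent behavior out of the box.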