Deep Signature Transforms

Authors: Patric Bonnier, Patrick Kidger, Imanol Perez Arribas, Cristopher Salvi, Terry Lyons

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We present the results of empirical experiments to back up the theoretical justification. Code available at github.com/patrick-kidger/Deep-Signature-Transforms. ... Section 4 covers experiments; we demonstrate positive results for generative, supervised, and reinforcement learning problems." |
| Researcher Affiliation | Academia | Patric Bonnier [1], Patrick Kidger [1,2], Imanol Perez Arribas [1,2], Cristopher Salvi [1,2], Terry Lyons [1,2]. [1] Mathematical Institute, University of Oxford; [2] The Alan Turing Institute, British Library. {bonnier, kidger, perez, salvi, tlyons}@maths.ox.ac.uk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at github.com/patrick-kidger/Deep-Signature-Transforms (a minimal model sketch in the spirit of this code follows the table). |
| Open Datasets | Yes | "Figure 3 shows four handwritten digits from the Pen Digits dataset [31]." [31] D. Dua and C. Graff, UCI Machine Learning Repository, 2017. |
| Dataset Splits | No | The main text does not specify how the data were partitioned: no split percentages, sample counts, citations to predefined splits, or splitting methodology needed to reproduce the partitioning (a seeded-split sketch follows the table). |
| Hardware Specification | No | The paper does not report the hardware used for its experiments: no GPU/CPU models, processor speeds, or memory amounts. |
| Software Dependencies | No | The paper mentions software such as PyTorch [46] and the Signatory project [22], but gives no version numbers for these or other ancillary components required for replication (a version-check sketch follows the table). |
| Experiment Setup | No | The paper states "Further implementation details are in Appendix B", but the main text does not give concrete hyperparameter values or training configurations. |
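
For concreteness, here is a minimal sketch of the "signature as a layer" design the paper and its repository describe: a learnt pointwise augmentation of the path, followed by a signature transform and a linear readout. It assumes PyTorch and the authors' Signatory package; the layer sizes, depth, and the `DeepSignatureModel` name are illustrative choices, not taken from the paper or its code.

```python
# A minimal sketch (not the authors' exact code) of a deep signature model:
# learn a pointwise augmentation of the path, take its signature, classify.
import torch
import signatory


class DeepSignatureModel(torch.nn.Module):
    def __init__(self, in_channels, aug_channels, depth, num_classes):
        super().__init__()
        # Kernel-size-1 convolution: a learnt map applied to each stream
        # element independently, i.e. the augmentation before the signature.
        self.augment = torch.nn.Conv1d(in_channels, aug_channels, kernel_size=1)
        self.signature = signatory.Signature(depth=depth)
        sig_channels = signatory.signature_channels(channels=aug_channels, depth=depth)
        self.readout = torch.nn.Linear(sig_channels, num_classes)

    def forward(self, path):
        # path has shape (batch, stream, channels), as Signatory expects.
        x = self.augment(path.transpose(1, 2)).transpose(1, 2)
        sig = self.signature(x)  # shape (batch, sig_channels)
        return self.readout(sig)


model = DeepSignatureModel(in_channels=2, aug_channels=8, depth=3, num_classes=10)
out = model(torch.randn(32, 100, 2))  # e.g. batches of 2D pen strokes
print(out.shape)                      # torch.Size([32, 10])
```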
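Since the main text names the Pen Digits dataset but not a split, a reproducible partition has to be chosen by the replicator. The following hedged sketch assumes the OpenML mirror of the UCI Pen Digits data (name "pendigits") and an illustrative 80/20 stratified split with a fixed seed; neither choice comes from the paper.

```python
# One way to obtain Pen Digits and fix a reproducible split.
# The dataset name and split ratio are assumptions, not the paper's.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

X, y = fetch_openml("pendigits", version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
print(X_train.shape, X_test.shape)
```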
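Likewise, the dependency versions the report flags as missing can at least be recorded on the replicator's side. This small sketch uses the standard-library importlib.metadata (Python 3.8+) to print whatever versions of PyTorch and Signatory happen to be installed.

```python
# Record installed versions of the packages the paper mentions.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "signatory"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```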