Deep reconstruction of strange attractors from time series
Authors: William Gilpin
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our technique reconstructs the strange attractors of synthetic and real-world systems better than existing techniques, and that it creates consistent, predictive representations of even stochastic systems. We conclude by using our technique to discover dynamical attractors in diverse systems such as patient electrocardiograms, household electricity usage, neural spiking, and eruptions of the Old Faithful geyser, demonstrating diverse applications of our technique for exploratory data analysis. |
| Researcher Affiliation | Academia | William Gilpin, Quantitative Biology Initiative, Harvard University, Cambridge, MA 02138, wgilpin@fas.harvard.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at: https://github.com/williamgilpin/fnn |
| Open Datasets | Yes | temperature measurements of the irregularly-firing Old Faithful geyser; a human electrocardiogram; hourly electricity usage measurements for 321 households; and spiking rates for neurons in a mouse thalamus [63, 64, 65]. [63] Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101, e215-e220 (2000). [64] Dua, D. & Graff, C. UCI machine learning repository (2017). URL http://archive.ics.uci.edu/ml. [65] Chaudhuri, R., Gercek, B., Pandey, B., Peyrache, A. & Fiete, I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience 22, 1512-1520 (2019). |
| Dataset Splits | Yes | For all datasets, 5000 timepoints are used to construct separate Hankel matrices for training and validation of g and g′, and 5000 separate timepoints are used as a test dataset for embedding. For each dataset, different replicates or initial conditions are used for train and test partitions whenever possible; for single-series datasets, sets of 5000 timepoints separated by at least 1000 timepoints are excerpted to prevent overlap between train, validation, and test. (A sketch of this splitting protocol appears after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using LSTM and MLP models, but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | Across all experiments, we only tune the regularizer strength λ and the learning rate γ. Because blind embedding is an unsupervised learning problem, we do not change the network architecture, optimizer, and other hyperparameters. As a general heuristic, we adjust λ to be just small enough to avoid dimensionality collapse in the reconstructed attractor (an easily-recognized phenomenon discussed in the next section), and we vary γ only to ensure convergence within the constant number of training epochs used for all experiments. For all results, we train five replicate networks with random initializations. (A sketch of this tuning protocol appears after the table.) |
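
The Dataset Splits row quotes a protocol of building separate Hankel matrices from 5000-point excerpts separated by gaps of at least 1000 timepoints. Below is a minimal sketch of that splitting scheme; the `hankel_matrix` helper, the window length, and the exact gap offsets are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hankel_matrix(series, window):
    """Stack lagged copies of a 1-D series into a Hankel (trajectory) matrix.

    Each row is a length-`window` delay vector, so a series of length T
    yields T - window + 1 rows.
    """
    n_rows = len(series) - window + 1
    return np.stack([series[i:i + window] for i in range(n_rows)])

# Hypothetical split mirroring the quoted protocol: 5000 timepoints each for
# train, validation, and test, with gaps of at least 1000 timepoints so the
# delay windows of different partitions never overlap.
series = np.random.randn(20000)  # placeholder for a real measurement series
window = 10                      # delay-embedding window (illustrative value)

train = hankel_matrix(series[0:5000], window)
val = hankel_matrix(series[6000:11000], window)    # 1000-point gap after train
test = hankel_matrix(series[12000:17000], window)  # 1000-point gap after val
```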
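
The Experiment Setup row describes tuning only two hyperparameters, λ and γ, while holding the architecture, optimizer, and epoch count fixed and training five randomly initialized replicates. The sketch below shows that loop shape in PyTorch under loudly labeled assumptions: the small autoencoder, the epoch count, and especially the L1 activity penalty (a simple stand-in for the paper's false-nearest-neighbor regularizer) are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_EPOCHS = 200      # held constant across experiments (illustrative value)
N_REPLICATES = 5    # five replicate networks, per the quoted setup

def train_replicates(X, lambda_, gamma, latent_dim=3):
    """Train N_REPLICATES autoencoders, varying only lambda_ and gamma."""
    models = []
    for seed in range(N_REPLICATES):
        torch.manual_seed(seed)  # distinct random initialization per replicate
        enc = nn.Sequential(nn.Linear(X.shape[1], 64), nn.Tanh(),
                            nn.Linear(64, latent_dim))
        dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                            nn.Linear(64, X.shape[1]))
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                               lr=gamma)
        for _ in range(N_EPOCHS):
            z = enc(X)
            # Reconstruction loss plus lambda_-weighted penalty on the latents;
            # the L1 term here is a stand-in, not the paper's FNN regularizer.
            loss = ((dec(z) - X) ** 2).mean() + lambda_ * z.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        models.append((enc, dec))
    return models

# Usage: X = torch.randn(1000, 10); train_replicates(X, lambda_=0.01, gamma=1e-3)
```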