Time-series Generation by Contrastive Imitation
Authors: Daniel Jarrett, Ioana Bica, Mihaela van der Schaar
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Theoretically, we illustrate the correctness of this formulation and the consistency of the algorithm. Empirically, we evaluate its ability to generate predictively useful samples from real-world datasets, verifying that it performs at the standard of existing benchmarks." |
| Researcher Affiliation | Academia | Daniel Jarrett (University of Cambridge, UK; daniel.jarrett@maths.cam.ac.uk); Ioana Bica (University of Oxford, UK; Alan Turing Institute, UK; ioana.bica@eng.ox.ac.uk); Mihaela van der Schaar (University of California, Los Angeles; University of Cambridge, UK; Alan Turing Institute, UK; mv472@cam.ac.uk) |
| Pseudocode | Yes | Algorithm 1: Time-series Generation by Contrastive Imitation (details in Appendix B). |
| Open Source Code | No | The paper refers to publicly available source code for benchmark algorithms and for computing metrics ([94-98]), but it does not explicitly provide a link to, or statement of availability for, the source code of *their own* proposed method (TimeGCI). |
| Open Datasets | Yes | All datasets are accessible from their sources, and we use the original source code for preprocessing sines and the UCI datasets by [12], publicly available at [94]. (Section 5, Datasets). The paper also cites specific papers for the datasets: [90] (Energy), [91] (Gas), [92] (Metro), and [93] (MIMIC-III), with [93] explicitly stating "a freely accessible critical care database". |
| Dataset Splits | No | The paper describes using a "Train-on-Synthetic, Test-on-Real (TSTR)" framework for evaluation, but it does not provide specific details on how the *original* real datasets were split into training, validation, and test sets for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The implementation relies on PyTorch as the deep learning framework, follows the API of the garage repository [103] for the actor-critic method, and follows the baselines repository [104] for the conditional MLE regularization (Appendix B). However, no version numbers are provided for any of these software components. |
| Experiment Setup | Yes | Table 4 lists selected hyperparameters for TimeGCI (the authors' method) across all datasets. "In addition, we regularize the policy with conditional MLE (see Algorithm 1) with weight κ = 0.05 and train for 500 epochs." (Appendix C). |
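
The Train-on-Synthetic, Test-on-Real (TSTR) protocol referenced in the Dataset Splits row can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact setup: the linear downstream predictor, the toy data, and the function name `tstr_mae` are all assumptions for demonstration purposes.

```python
import numpy as np

def tstr_mae(syn_X, syn_y, real_X, real_y):
    # Train-on-Synthetic, Test-on-Real: fit a simple downstream predictor
    # (here, least-squares linear regression) on synthetic samples, then
    # report its error on held-out real data. A low TSTR error indicates
    # the synthetic data is predictively useful.
    w, *_ = np.linalg.lstsq(syn_X, syn_y, rcond=None)
    return float(np.mean(np.abs(real_X @ w - real_y)))

# Toy stand-ins for generated vs. real data (flattened feature vectors).
rng = np.random.default_rng(0)
true_w = rng.normal(size=8)
syn_X = rng.normal(size=(200, 8))
syn_y = syn_X @ true_w + 0.1 * rng.normal(size=200)
real_X = rng.normal(size=(100, 8))
real_y = real_X @ true_w + 0.1 * rng.normal(size=100)
print(round(tstr_mae(syn_X, syn_y, real_X, real_y), 3))
```

Note that the protocol still presupposes a held-out *real* test set; this is why the absence of documented real-data splits matters for reproducing the paper's TSTR numbers.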