COT-GAN: Generating Sequential Data via Causal Optimal Transport
Authors: Tianlin Xu, Li Kevin Wenliang, Michael Munn, Beatrice Acciaio
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our experiments show effectiveness and stability of COT-GAN when generating both low- and high-dimensional time series data." and "We now validate COT-GAN empirically." |
| Researcher Affiliation | Collaboration | Tianlin Xu, London School of Economics, t.xu12@lse.ac.uk; Li K. Wenliang, University College London, kevinli@gatsby.ucl.ac.uk; Michael Munn, Google, NY, munn@google.com; Beatrice Acciaio, London School of Economics / ETH Zurich, beatrice.acciaio@math.ethz.ch |
| Pseudocode | Yes | Algorithm 1: training COT-GAN by SGD |
| Open Source Code | Yes | Code and data are available at github.com/tianlinxu312/cot-gan |
| Open Datasets | Yes | "This dataset is from the UCI repository [18]" and "[18] D. Dua and C. Graff. UCI Machine Learning Repository. 2017." |
| Dataset Splits | No | The paper mentions using datasets like AR-1, noisy oscillations, EEG, Sprites, and human action sequences, but it does not explicitly provide specific training, validation, or test split percentages or sample counts in the main text. |
| Hardware Specification | No | The paper does not specify the GPU models, CPU models, or other detailed hardware specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We pre-process the Sprites sequences to have a sequence length of T = 13, and the human action sequences to have length T = 16. Each frame has dimension 64 × 64 × 3. We employ the same architecture for the generator and discriminator to train both datasets. Both the generator and discriminator consist of a generic LSTM with 2-D convolutional layers. Details of the data pre-processing, GAN architectures, hyper-parameter settings, and training techniques are reported in Appendix B.2. |
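The setup row above describes the video generator only at a high level (an LSTM driving 2-D convolutional layers, producing 64 × 64 × 3 frames). A minimal PyTorch sketch of that shape of model is given below; the layer sizes, noise dimension, and class name are illustrative assumptions, not the authors' architecture from Appendix B.2.

```python
import torch
import torch.nn as nn

class FrameSeqGenerator(nn.Module):
    """Hypothetical sketch: an LSTM evolves a latent state over time, and a
    transposed-conv stack renders each state into one 64x64x3 frame. This
    mirrors the paper's description, not its exact architecture."""

    def __init__(self, noise_dim=32, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(noise_dim, hidden, batch_first=True)
        self.render = nn.Sequential(
            nn.Linear(hidden, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),     # 32x32 -> 64x64
        )

    def forward(self, z):
        # z: (batch, T, noise_dim) -- one noise vector per time step
        h, _ = self.lstm(z)                      # (batch, T, hidden)
        b, t, d = h.shape
        frames = self.render(h.reshape(b * t, d))
        return frames.reshape(b, t, 3, 64, 64)

gen = FrameSeqGenerator()
video = gen(torch.randn(2, 13, 32))  # T = 13, as for the Sprites data
print(tuple(video.shape))            # (2, 13, 3, 64, 64)
```

Rendering every time step through the same convolutional decoder keeps frame appearance consistent while the LSTM carries the temporal dynamics, which matches the paper's choice of a shared generator/discriminator design across both video datasets.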