Conditional Loss and Deep Euler Scheme for Time Series Generation

Authors: Carl Remlinger, Joseph Mikael, Romuald Elie

AAAI 2022, pp. 8098-8105 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, CEGEN outperforms the state of the art and GANs on both marginal and temporal dynamic metrics. Besides, correlation structures are accurately identified in high dimension. When few real data points are available, we verify the effectiveness of CEGEN when combined with transfer learning methods on model-based simulations. Finally, we illustrate the robustness of our methods on various real-world data sets. [...] A thorough numerical study on synthetic and various real-world data sets demonstrates the robustness of our generators. CEGEN outperforms the other considered methods on five distinct metrics.
Researcher Affiliation | Collaboration | Carl Remlinger 1,2,3, Joseph Mikael 2,3, Romuald Elie 1. Affiliations: 1 Université Gustave Eiffel; 2 EDF Lab; 3 FiME (Laboratoire de Finance des Marchés de l'Énergie). Contact: carl.rlgr@pm.me
Pseudocode | Yes | Pseudocode of EWGAN is given in Alg. 2 in Appendix C. [...] The pseudocode of CEGEN is given in Alg. 1 and details are provided in Appendix D. (A hedged sketch of the deep Euler generator step underlying CEGEN appears after this table.)
Open Source Code | No | The paper does not include an unambiguous statement or a direct link indicating that the authors are releasing the source code for the methodology described in this paper.
Open Datasets | No | The paper mentions using synthetic data from Black-Scholes and Ornstein-Uhlenbeck models, and real-world datasets ('Spot prices', 'Stocks', 'Electric Load', 'Jena climate') detailed in Appendix F. However, it does not provide concrete access information (specific links, DOIs, or formal citations with author/year for public datasets) for any of them. (A sketch of such model-based simulations also follows the table.)
Dataset Splits | No | The paper describes the training process, including a transfer-learning setup, but it does not provide explicit dataset split information (e.g., percentages, sample counts, or mention of a validation set) needed to reproduce the train/validation/test partitioning.
Hardware Specification | No | The paper describes network architectures (e.g., '3-layers of 4 times the data dimension neurons each') but does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for its experiments.
Software Dependencies | No | The paper mentions general software components such as recurrent networks (LSTMs) but does not name specific software with version numbers (e.g., Python 3.x, PyTorch 1.x) needed to replicate the experiments.
Experiment Setup | No | The paper states 'Hyper-parameters are described in Appendix D.' However, Appendix D is not included in the provided text, so specific hyperparameter values (e.g., learning rate, batch size) and other detailed experimental setup configurations are absent from the main content.
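
The method's name reflects its structure: per the title, CEGEN generates a path step by step with a neural Euler scheme, adding a learned drift term and a learned diffusion term scaled by Gaussian noise at each time step. The PyTorch sketch below illustrates that generator step only; the class name DeepEulerGenerator, the network sizes, and the (t, x) input encoding are illustrative assumptions, not the authors' exact Alg. 1.

import torch
import torch.nn as nn

class DeepEulerGenerator(nn.Module):
    # Euler-Maruyama style step: X_{k+1} = X_k + b(t, X_k) dt + s(t, X_k) sqrt(dt) Z.
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        # Drift b_theta and diffusion s_theta both read the current time and state.
        self.drift = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x0: torch.Tensor, n_steps: int, dt: float) -> torch.Tensor:
        x, path = x0, [x0]
        for k in range(n_steps):
            t = torch.full((x.shape[0], 1), k * dt)  # broadcast current time
            inp = torch.cat([t, x], dim=1)
            z = torch.randn_like(x)                  # Gaussian increment
            x = x + self.drift(inp) * dt + self.diffusion(inp) * (dt ** 0.5) * z
            path.append(x)
        return torch.stack(path, dim=1)              # (batch, n_steps + 1, dim)

# Usage: 128 one-dimensional paths over 30 steps, before any training.
generator = DeepEulerGenerator(dim=1)
paths = generator(torch.zeros(128, 1), n_steps=30, dt=1.0 / 30)

Training would then fit the drift and diffusion networks with the paper's conditional loss, which matches conditional statistics of generated and real paths; that loss is not reproduced here.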
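For the synthetic experiments, the responses above cite Black-Scholes and Ornstein-Uhlenbeck model-based simulations as training data. The NumPy sketch below shows the standard way such paths are simulated (exact log-normal stepping for Black-Scholes, Euler-Maruyama for Ornstein-Uhlenbeck); every parameter value is a placeholder, since the paper defers its actual settings to the appendices.

import numpy as np

rng = np.random.default_rng(0)

def black_scholes_paths(s0, mu, sigma, n_paths, n_steps, dt):
    # Exact scheme: S_{t+dt} = S_t * exp((mu - sigma^2 / 2) dt + sigma sqrt(dt) Z).
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    log_path = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(log_inc, axis=1)], axis=1)
    return s0 * np.exp(log_path)

def ornstein_uhlenbeck_paths(x0, kappa, theta, sigma, n_paths, n_steps, dt):
    # Euler-Maruyama scheme for dX = kappa (theta - X) dt + sigma dW.
    x = np.empty((n_paths, n_steps + 1))
    x[:, 0] = x0
    for k in range(n_steps):
        z = rng.standard_normal(n_paths)
        x[:, k + 1] = x[:, k] + kappa * (theta - x[:, k]) * dt + sigma * np.sqrt(dt) * z
    return x

# Placeholder parameters: 1,000 paths of 30 steps each.
bs = black_scholes_paths(s0=1.0, mu=0.05, sigma=0.2, n_paths=1000, n_steps=30, dt=1 / 30)
ou = ornstein_uhlenbeck_paths(x0=0.0, kappa=1.0, theta=0.0, sigma=0.3, n_paths=1000, n_steps=30, dt=1 / 30)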