Directed Chain Generative Adversarial Networks
Authors: Ming Min, Ruimeng Hu, Tomoyuki Ichiba
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed DC-GANs are examined on four datasets, including two stochastic models from social sciences and computational neuroscience, and two real-world datasets on stock prices and energy consumption. |
| Researcher Affiliation | Academia | 1Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA. 2Department of Mathematics, University of California, Santa Barbara, CA 93106-3080, USA. |
| Pseudocode | Yes | Algorithm 1 Generator in the Decorrelating and Branching |
| Open Source Code | Yes | The example implementation of DC-GANs is available at https://github.com/mmin0/Directed_Chain_SDE. |
| Open Datasets | Yes | The third real-world data set of stock price time series was extracted from Yahoo Finance (https://finance.yahoo.com/quote/GOOG?p=GOOG&.tsrc=fin-srch), and the fourth real-world energy consumption data were obtained from Ireland's open data portal (https://data.gov.ie/dataset?theme=Energy). |
| Dataset Splits | No | We first generate the same amount of fake data paths as true data paths to avoid imbalance, and choose 80% from both real and fake data as training data, leaving the rest 20% as testing data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as exact GPU or CPU models. |
| Software Dependencies | No | We remark that DC-GANs can be adapted to the torchsde framework (https://github.com/google-research/torchsde) and use its adjoint method for back-propagation. (A hedged usage sketch follows the table.) |
| Experiment Setup | Yes | We use a two-layer LSTM classifier with channels/2 as the size of the hidden state, where channels is the dimension of generated and real series. We minimize the cross-entropy loss, and the optimization is done by the Adam optimizer with a learning rate of 0.001 for 5000 iterations. (See the classifier sketch below the table.) |
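The Software Dependencies row notes that DC-GANs can be adapted to the torchsde framework and trained with its adjoint method. The following is a minimal sketch of wrapping a neural SDE in torchsde and integrating it with `sdeint_adjoint`; the `NeuralSDE` class, its layer sizes, the diagonal-noise assumption, and the Euler solver choice are illustrative assumptions, not the authors' generator architecture.

```python
import torch
import torch.nn as nn
import torchsde

class NeuralSDE(nn.Module):
    # Illustrative neural SDE wrapper for torchsde; NOT the DC-GANs generator from the paper.
    noise_type = "diagonal"   # one Brownian component per state dimension (assumption)
    sde_type = "ito"

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.drift_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.diff_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def f(self, t, y):        # drift, shape (batch, dim)
        return self.drift_net(y)

    def g(self, t, y):        # diagonal diffusion, shape (batch, dim)
        return self.diff_net(y)

sde = NeuralSDE(dim=2)
y0 = torch.zeros(32, 2)               # 32 sample paths, 2-dimensional state
ts = torch.linspace(0.0, 1.0, 50)     # time grid
# Adjoint back-propagation through the SDE solve; swap in torchsde.sdeint if adjoint is not needed.
paths = torchsde.sdeint_adjoint(sde, y0, ts, method="euler")  # shape (50, 32, 2)
```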
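The Dataset Splits and Experiment Setup rows together describe the discriminative evaluation: equal numbers of real and generated paths are pooled, split 80/20 into training and test sets, and a two-layer LSTM classifier with hidden size channels/2 is trained with cross-entropy loss and Adam (learning rate 0.001, 5000 iterations). A minimal PyTorch sketch under those settings follows; `LSTMClassifier` and `discriminative_score` are hypothetical names, and full-batch training is an illustrative simplification.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Two-layer LSTM classifier; hidden size = channels // 2, as described in the table."""
    def __init__(self, channels):
        super().__init__()
        self.lstm = nn.LSTM(input_size=channels, hidden_size=channels // 2,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(channels // 2, 2)   # real-vs-fake logits

    def forward(self, x):                 # x: (batch, seq_len, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last hidden state

def discriminative_score(real, fake, iters=5000, lr=1e-3):
    """Train on 80% of pooled real+fake paths, report accuracy on the remaining 20%."""
    x = torch.cat([real, fake])
    y = torch.cat([torch.ones(len(real)), torch.zeros(len(fake))]).long()
    perm = torch.randperm(len(x))
    split = int(0.8 * len(x))
    tr, te = perm[:split], perm[split:]

    clf = LSTMClassifier(x.shape[-1])
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(iters):
        opt.zero_grad()
        loss = loss_fn(clf(x[tr]), y[tr])
        loss.backward()
        opt.step()

    with torch.no_grad():
        acc = (clf(x[te]).argmax(-1) == y[te]).float().mean().item()
    return acc
```

A test accuracy near 0.5 would indicate the classifier cannot tell generated paths from real ones, which is the intent of this kind of discriminative metric.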