Implicit Generative Copulas

Authors: Tim Janke, Mohamed Ghanmi, Florian Steinke

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 shows experimental results for synthetic and real data from finance, physics, and image generation.
Researcher Affiliation | Academia | Tim Janke, Mohamed Ghanmi, Florian Steinke; Energy Information Networks and Systems, TU Darmstadt, Germany; {tim.janke},{florian.steinke}@tu-darmstadt.de
Pseudocode | No | The paper describes the model and training procedure in text and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available from https://github.com/TimCJanke/igc.
Open Datasets | Yes | We consider a data set of size N = 5844 that contains 15 years of daily exchange rates... The data was obtained from the R package qrmdata. MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) Telescopes data set, available from the UCI repository (https://archive.ics.uci.edu/ml/datasets/MAGIC+Gamma+Telescope). Fashion MNIST [44] data set.
Dataset Splits | Yes | To ensure a test data set of appropriate size, we use a 5-fold cross-validation scheme where 20% of the data is used for training and 80% for testing. (An illustrative sketch of this split follows the table.)
Hardware Specification | Yes | All experiments besides the training of the autoencoder models were carried out on a desktop PC with an Intel Core i7-7700 3.60 GHz CPU and 8GB RAM. For the training of the autoencoders we used Google Colab [14].
Software Dependencies | No | The paper mentions using 'tensorflow with the Keras API', 'pyvinecopulib', and 'the R package kdecopula', but it does not specify version numbers for these software components.
Experiment Setup | Yes | For all experiments except the image generation task, we use a fully connected neural network with two layers, 100 units per layer, ReLU activation functions, and train for 500 epochs. For the image generation experiment we use a three layer, fully connected neural network with 200 neurons in each layer, and train for 100 epochs. In all cases we train with a batch size of Nbatch = 100 and generate M = 200 samples from the model per batch. The number of noise distributions is set as K = 3D and T = 10^6. We use the Adam optimizer [20] with default parameters. (A hedged code sketch of this configuration follows the table.)
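
The following is a minimal Python sketch of the inverted cross-validation scheme described in the Dataset Splits row: five folds, each training on 20% of the data and testing on the remaining 80%. It uses scikit-learn's KFold and a random stand-in array purely for illustration; the data loading, shuffling, and exact splitting code are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a 5-fold scheme where each
# fold trains on 20% of the data and tests on the remaining 80%.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.standard_normal((5844, 4))   # stand-in array; N = 5844 matches the reported exchange-rate data size

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (large_idx, small_idx) in enumerate(kf.split(X)):
    # KFold's usual held-out fold (20%) becomes the training set,
    # and the remaining 80% becomes the test set.
    X_train, X_test = X[small_idx], X[large_idx]
    print(f"fold {fold}: train {len(X_train)}, test {len(X_test)}")
```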
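
Since the Experiment Setup row fully specifies the non-image network, a hedged TensorFlow/Keras sketch of that configuration is given below. It assumes K = 3D is the dimension of the generator's noise input and uses standard normal noise with a placeholder loss; the actual IGC training objective and data pipeline are in the authors' repository (https://github.com/TimCJanke/igc), and none of this code is taken from it.

```python
# Sketch of the reported non-image configuration: two fully connected layers
# with 100 ReLU units each, Adam with default parameters, batch size 100.
# The loss is a placeholder; the paper's training objective is not reproduced here.
import numpy as np
import tensorflow as tf

D = 4            # data dimension (example value, not from the paper)
K = 3 * D        # noise input dimension, assuming "K = 3D" refers to this
N_BATCH = 100    # reported batch size (500 training epochs are reported as well)

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(K,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(D),
])
generator.compile(optimizer=tf.keras.optimizers.Adam(),  # Adam with default parameters
                  loss="mse")                            # placeholder, not the IGC objective

# Toy forward pass: map standard normal noise through the network.
noise = np.random.standard_normal((N_BATCH, K)).astype("float32")
samples = generator(noise)
print(samples.shape)  # (100, 4)
```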