Continuous-Time Flows for Efficient Inference and Density Estimation
Authors: Changyou Chen, Chunyuan Li, Liqun Chen, Wenlin Wang, Yunchen Pu, Lawrence Carin
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various tasks demonstrate promising performance of the proposed CTF framework, compared to related techniques. |
| Researcher Affiliation | Academia | SUNY at Buffalo; Duke University. |
| Pseudocode | Yes | The algorithm is presented in Algorithm 1 in Section E of the SM. |
| Open Source Code | No | Some experiments are based on the excellent code for Stein GAN (Wang & Liu, 2017), where their default parameter settings are adopted (https://github.com/DartML/SteinGAN). There is no explicit statement or link provided for the authors' own code for the proposed CTF framework. |
| Open Datasets | Yes | We test MacGAN on three datasets: MNIST, CIFAR-10 and CelebA. |
| Dataset Splits | No | The paper mentions 'training epochs' and 'testing ELBOs' but does not explicitly provide details about validation dataset splits (e.g., percentages or counts) or a distinct validation set being used. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions 'DCGAN architecture' but does not provide specific version numbers for any software dependencies, programming languages, or libraries used in the experiments. |
| Experiment Setup | Yes | We define the inference network as a deep neural network with two fully connected layers of size 300 with softplus activation functions. The generator Gφ is defined as a 3-layer CNN with the ReLU activation function (except for the top layer, which uses tanh as the activation function; see SM G for details). Following (Wang & Liu, 2017), the stepsizes are set to (m_e - e) * lr / m_e, where e indexes the epoch, m_e = 50 is the total number of epochs, lr = 1e-4 when updating θ, and lr = 1e-3 when updating φ. The stepsize in L1 is set to 1e-3. |
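The quoted stepsize schedule can be sketched as a linear decay of the base learning rate to zero over training. This is a minimal sketch, assuming the garbled formula reads (m_e - e) * lr / m_e with m_e = 50 total epochs; the function and variable names are illustrative, not from the paper.

```python
def stepsize(epoch, total_epochs=50, base_lr=1e-4):
    """Linearly decay the base learning rate to zero over training.

    Assumed reading of the paper's schedule: (m_e - e) * lr / m_e,
    where e is the current epoch and m_e the total number of epochs.
    """
    return (total_epochs - epoch) * base_lr / total_epochs

# Full rate at epoch 0, half the rate midway, zero at the final epoch.
schedule = [stepsize(e) for e in range(51)]
```

Under this reading, updates to θ would use base_lr = 1e-4 and updates to φ would use base_lr = 1e-3, each decayed by the same factor.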