Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)
Authors: Peter Sorrenson, Carsten Rother, Ullrich Köthe
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. |
| Researcher Affiliation | Academia | Peter Sorrenson, Carsten Rother, Ullrich Köthe — Visual Learning Lab, Heidelberg University |
| Pseudocode | No | The paper describes network architectures and mathematical formulations but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The data comes from the EMNIST Digits training set of 240,000 images of handwritten digits with labels (Cohen et al., 2017). |
| Dataset Splits | No | The paper uses the EMNIST Digits training set but does not explicitly specify the proportions or sizes of training, validation, and test splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions algorithms like Adam optimizer and Real NVP, but does not provide specific software library names with version numbers needed for replication. |
| Experiment Setup | Yes | For the artificial data experiments: "Training converges quickly and stably using the Adam optimizer (Kingma & Ba, 2014) with initial learning rate 10^-2 and other values set to the usual recommendations. Batch size is 1,000 and the data is augmented with Gaussian noise (σ = 0.01) at each iteration. After convergence of the loss, the learning rate is reduced by a factor of 10 and trained again until convergence." For EMNIST: "Optimization is with the Adam optimizer, with initial learning rate 3e-4. Batch size is 240 and the data is augmented with Gaussian noise (σ = 0.01) at each iteration. The model is trained for 45 epochs, then for a further 50 epochs with the learning rate reduced by a factor of 10." |
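
The EMNIST training schedule quoted in the "Experiment Setup" row can be summarized in a short sketch. The PyTorch snippet below is a minimal, hedged illustration of that configuration only (Adam with initial learning rate 3e-4, batch size 240, per-iteration Gaussian noise with σ = 0.01, 45 epochs followed by 50 more at one tenth of the learning rate). It does not reproduce the GIN architecture or the paper's label-dependent latent distribution; `model` is a hypothetical placeholder, not the authors' network.

```python
# Sketch of the EMNIST training schedule described in the paper's
# "Experiment Setup" row. Hyperparameters (lr 3e-4, batch size 240,
# sigma = 0.01 noise, 45 + 50 epochs, lr reduced 10x) come from the
# quoted setup; everything else is an assumption for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# EMNIST Digits training set (240,000 labelled images), as cited in the paper.
data = datasets.EMNIST("data", split="digits", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=240, shuffle=True, drop_last=True)

# Hypothetical stand-in: the paper's GIN (a volume-preserving invertible
# network) would go here. For a volume-preserving map, log|det J| = 0, so the
# negative log-likelihood under a standard Gaussian prior reduces to
# 0.5 * ||z||^2 up to constants; the label conditioning used in the paper
# is omitted in this sketch.
model = nn.Linear(28 * 28, 28 * 28, bias=False).to(device)  # placeholder only

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

def run_epochs(n_epochs):
    for _ in range(n_epochs):
        for x, _ in loader:
            x = x.view(x.size(0), -1).to(device)
            x = x + 0.01 * torch.randn_like(x)      # Gaussian noise augmentation
            z = model(x)
            loss = 0.5 * (z ** 2).sum(dim=1).mean()  # NLL without Jacobian term
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

run_epochs(45)                              # initial training phase
for group in optimizer.param_groups:        # reduce learning rate by 10x
    group["lr"] /= 10
run_epochs(50)                              # continued training phase
```

Because GIN is constructed to be volume-preserving, the maximum-likelihood objective carries no Jacobian determinant term, which is why the loss above is just the squared norm of the latent code; a faithful reproduction would replace the placeholder with the coupling-block architecture the paper describes.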