Explicitly Minimizing the Blur Error of Variational Autoencoders
Authors: Gustav Bredell, Kyriakos Flouris, Krishna Chaitanya, Ertunc Erdil, Ender Konukoglu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the potential of the proposed loss on three different data sets, where it outperforms several recently proposed reconstruction losses for VAEs. |
| Researcher Affiliation | Academia | Gustav Bredell, Kyriakos Flouris, Krishna Chaitanya, Ertunc Erdil & Ender Konukoglu Department of Information Technology and Electrical Engineering ETH Zurich gustav.bredell@vision.ee.ethz.ch |
| Pseudocode | No | The paper describes the approach using mathematical formulations and textual descriptions but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | Details on the architecture can be found in the code that will be publicly available upon acceptance. |
| Open Datasets | Yes | To evaluate the potential of the proposed approach on natural images we make use of the popular CelebA dataset as provided by Liu et al. (2015) and Lee et al. (2020) for low and high-resolution images, respectively. |
| Dataset Splits | No | For all the datasets we use 80% for the training set and 20% for the test set. |
| Hardware Specification | No | The paper does not specify any particular hardware components (e.g., CPU, GPU models, or memory size) used for running the experiments. |
| Software Dependencies | No | The code is written in Python and PyTorch (Paszke et al. (2019)) is used as library for the deep learning models. |
| Experiment Setup | Yes | Furthermore, the Adam optimizer is used with a learning rate of 1e-4. For the low- and high-resolution CelebA the number of training epochs were 100 and 200, respectively. For the HCP dataset the models were trained for 400 epochs. (A hedged sketch of this setup follows the table.) |
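
The split and optimizer details quoted above can be summarized in a short PyTorch sketch. This is a minimal illustration under assumptions, not the authors' released code: the random 80/20 split, batch size, seed, and the VAE forward signature (`recon, mu, logvar`) and generic `loss_fn` are all placeholders standing in for details the paper does not quote here.

```python
# Hedged sketch of the reported experimental setup:
# 80/20 train/test split, Adam with lr 1e-4, 100/200/400 epochs per dataset.
# Model, dataset, and loss function are placeholders, not the paper's code.
import torch
from torch.utils.data import random_split, DataLoader


def make_loaders(dataset, batch_size=64, seed=0):
    """Split a dataset 80/20 into train/test loaders (random split is an assumption)."""
    n_train = int(0.8 * len(dataset))
    n_test = len(dataset) - n_train
    train_set, test_set = random_split(
        dataset, [n_train, n_test],
        generator=torch.Generator().manual_seed(seed),
    )
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    return train_loader, test_loader


def train(model, loss_fn, train_loader, epochs, device="cuda"):
    """Generic VAE training loop with the reported optimizer settings."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported learning rate
    for _ in range(epochs):  # 100 (low-res CelebA), 200 (high-res CelebA), or 400 (HCP)
        for x, *_ in train_loader:
            x = x.to(device)
            recon, mu, logvar = model(x)        # assumed VAE forward signature
            loss = loss_fn(recon, x, mu, logvar)  # e.g. reconstruction term + KL term
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```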