Network-to-Network Translation with Conditional Invertible Neural Networks
Authors: Robin Rombach, Patrick Esser, Björn Ommer
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on diverse conditional image synthesis tasks, competitive image modification results and experiments on image-to-image and text-to-image generation demonstrate the generic applicability of our approach. For example, we translate between BERT and BigGAN, state-of-the-art text and image models, to provide text-to-image generation, which neither expert can perform on its own. (Sec. 4, Experiments) |
| Researcher Affiliation | Academia | Robin Rombach Patrick Esser Björn Ommer IWR, HCI, Heidelberg University firstname.lastname@iwr.uni-heidelberg.de |
| Pseudocode | No | The provided text does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Both authors contributed equally to this work. Code available at https://github.com/CompVis/net2net. |
| Open Datasets | Yes | During training, access to textual descriptions is obtained by using a captioning model as in [77], trained on the COCO [40] dataset. [...] The autoencoder is trained on a combination of all carnivorous animal classes in ImageNet and images of the AwA2 dataset [75], split into 211306 training images and 10000 testing images, which we call the Animals dataset. |
| Dataset Splits | No | The paper states: 'split into 211306 training images and 10000 testing images' for the Animals dataset, but does not describe a validation split, leaving the split specification incomplete for reproducibility. |
| Hardware Specification | Yes | As our method does not require gradients w.r.t. the models f and g, training of the cINN can be conducted on a single Titan X GPU. |
| Software Dependencies | No | The paper mentions various models and frameworks (e.g., BERT, BigGAN, ResNet, DeepLabv2) but does not specify their version numbers or the versions of any underlying software dependencies like Python or PyTorch. |
| Experiment Setup | No | The paper states 'Technical details regarding the training of our cINN can be found in Sec. G.2.' but this section is not provided in the main paper text, thus specific experimental setup details like hyperparameters are not available. |
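The cINN referenced in the table is built from conditional coupling layers, which are invertible in closed form. A minimal numpy sketch of one conditional affine coupling layer (variable names and linear "subnetworks" are hypothetical, not the authors' implementation) illustrates why such a layer can be inverted exactly without backpropagating through the expert models f and g:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: D is half the latent size, C the conditioning size.
D, C = 4, 3
# Stand-in linear "subnetworks" predicting scale and shift (real cINNs use MLPs).
W_s = rng.normal(scale=0.1, size=(D + C, D))
W_t = rng.normal(scale=0.1, size=(D + C, D))

def coupling_forward(x, cond):
    """One conditional affine coupling step: transform x2 given (x1, cond)."""
    x1, x2 = x[:D], x[D:]
    h = np.concatenate([x1, cond])
    s, t = h @ W_s, h @ W_t          # scale and shift depend only on (x1, cond)
    y2 = x2 * np.exp(s) + t          # affine transform of the second half
    return np.concatenate([x1, y2])

def coupling_inverse(y, cond):
    """Exact analytic inverse of coupling_forward."""
    y1, y2 = y[:D], y[D:]
    h = np.concatenate([y1, cond])
    s, t = h @ W_s, h @ W_t          # recomputed from the untouched half
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=2 * D)
cond = rng.normal(size=C)
y = coupling_forward(x, cond)
x_rec = coupling_inverse(y, cond)
assert np.allclose(x, x_rec)         # invertibility holds exactly
```

Because the inverse is analytic, training such a layer only needs samples from the frozen experts' representation spaces, consistent with the paper's claim that no gradients w.r.t. f and g are required.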