Positional Normalization
Authors: Boyi Li, Felix Wu, Kilian Q. Weinberger, Serge Belongie
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct our experiments on unpaired and paired image translation tasks using CycleGAN [77] and pix2pix [29] as baselines, respectively. Our code is available at https://github.com/Boyiliee/PONO. |
| Researcher Affiliation | Academia | Boyi Li (1,2), Felix Wu (1), Kilian Q. Weinberger (1), Serge Belongie (1,2); 1: Cornell University, 2: Cornell Tech; {bl728, fw245, kilian, sjb344}@cornell.edu |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We explore the benefits of PONO with MS in several initial experiments across different model architectures and image generation tasks and provide code online at https://github.com/Boyiliee/PONO. |
| Open Datasets | Yes | We use four datasets: 1) Maps (Maps ↔ aerial photograph)... 2) Horse ↔ Zebra... downloaded from ImageNet [11]... 3) Cityscapes (Semantic labels ↔ photos) [9]... 4) Day ↔ Night including 17,823 natural scene images from Transient Attributes dataset [37]... |
| Dataset Splits | No | The paper specifies training and testing splits for its datasets, for example, '1096 training images scraped from Google Maps and 1098 images in each domain for testing.' However, it does not explicitly mention a separate validation split or how it was used. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used to run the experiments. |
| Software Dependencies | Yes | LPIPS is based on pretrained AlexNet [36] features (footnote 3), which has been shown [76] to be highly correlated to human judgment. Footnote 3: https://github.com/richzhang/PerceptualSimilarity, version 0.1. (A minimal LPIPS usage sketch appears below the table.) |
| Experiment Setup | Yes | We train for 200 epochs and compare the results with/without PONO-MS, under similar conditions with matching number of parameters. [...] Throughout we use the hyper-parameters suggested by the original authors. |
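
The Experiment Setup row above references PONO-MS, the paper's positional normalization with moment shortcut. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (see https://github.com/Boyiliee/PONO for the reference code); the class and variable names here are illustrative assumptions.

```python
# Illustrative sketch of PONO and the Moment Shortcut (MS) in PyTorch.
# Names and structure are assumptions; the authors' repository holds the
# reference implementation used in the paper's experiments.
import torch
import torch.nn as nn


class PONO(nn.Module):
    """Positional Normalization: normalize each spatial position across channels."""

    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (B, C, H, W); statistics are taken over the channel dimension only,
        # so every spatial position keeps its own mean and standard deviation.
        mean = x.mean(dim=1, keepdim=True)                              # (B, 1, H, W)
        std = (x.var(dim=1, unbiased=False, keepdim=True) + self.eps).sqrt()
        return (x - mean) / std, mean, std


class MomentShortcut(nn.Module):
    """MS: re-inject the extracted moments into a later (e.g. decoder) layer."""

    def forward(self, x, mean, std):
        return x * std + mean


if __name__ == "__main__":
    # Shapes-only usage example on random features (no trained model involved).
    pono, ms = PONO(), MomentShortcut()
    feats = torch.randn(2, 64, 32, 32)
    normalized, mu, sigma = pono(feats)   # PONO applied in an early encoder layer
    decoded = torch.relu(normalized)      # stand-in for the intermediate layers
    out = ms(decoded, mu, sigma)          # MS applied in the matching decoder layer
    print(out.shape)
```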
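
The Software Dependencies row cites the PerceptualSimilarity (LPIPS) package, version 0.1, with AlexNet features. A minimal usage sketch with the pip-installable `lpips` package is shown below; the random tensors are placeholders for real image pairs, and the exact evaluation pipeline of the paper is not reproduced here.

```python
# Illustrative LPIPS evaluation using the package from
# https://github.com/richzhang/PerceptualSimilarity (the paper cites version 0.1).
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')             # AlexNet-based variant, as in the paper
img0 = torch.rand(1, 3, 256, 256) * 2 - 1     # LPIPS expects inputs scaled to [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1     # placeholder tensors, not real outputs
distance = loss_fn(img0, img1)                # lower means more perceptually similar
print(distance.item())
```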