DeepExposure: Learning to Expose Photos with Asynchronously Reinforced Adversarial Learning

Authors: Runsheng Yu, Wenyu Liu, Yasen Zhang, Zhi Qu, Deli Zhao, Bo Zhang

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The extensive experiments verify that our algorithms are superior to state-of-the-art methods in terms of quantitative accuracy and visual illustration. (Section 5, Experiment)
Researcher Affiliation | Collaboration | Runsheng Yu (Xiaomi AI Lab; South China Normal University) runshengyu@gmail.com; Wenyu Liu (Xiaomi AI Lab; Peking University) liuwenyu@pku.edu.cn; Yasen Zhang (Xiaomi AI Lab) zhangyasen@xiaomi.com; Zhi Qu (Xiaomi AI Lab) quzhi@xiaomi.com; Deli Zhao (Xiaomi AI Lab) zhaodeli@xiaomi.com; Bo Zhang (Xiaomi AI Lab) zhangbo@xiaomi.com
Pseudocode | Yes | The pseudo-codes are presented in Appendix E.
Open Source Code | No | The paper does not provide a repository link or an explicit statement about releasing the source code for its methodology; it only mentions using pre-trained models or demos from other works.
Open Datasets | Yes | We train our model on MIT-Adobe FiveK [3], a dataset which contains 5,000 RAW photos and corresponding retouched ones edited by five experts for each photo.
Dataset Splits | No | The paper states: 'We separate the dataset into three subsets: 2,000 input unretouched images, 2,000 retouched images by expert C, and 1,000 input RAW images for testing.' It does not explicitly mention a validation split or provide specific details for one. (A hypothetical split sketch is given after this table.)
Hardware Specification | Yes | The codes are run on P40 Tesla GPU.
Software Dependencies | No | The paper states 'All the networks are implemented via Tensorflow.' but does not provide a specific version number for TensorFlow or any other software dependencies.
Experiment Setup | Yes | Here we present some details of the different networks. For the discriminator network, the original learning rate is 5 × 10^-5 with an exponential decay to 10^-3 of the original value. The batch size for adversarial learning is 8. For the policy network, the original learning rate is 1.5 × 10^-5 with an exponential decay to 10^-3 of the original value. The Ornstein-Uhlenbeck process [34] is used to perform the exploration. The mini-batch size for the policy network is 8. ... For the value network, if it is DeepExposure I, the original learning rate is 5 × 10^-4 with an exponential decay to 10^-3 of the original value. ... The γ parameter is set to 0.99. (A hedged training-configuration sketch is given after this table.)
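
The following is a minimal, hypothetical sketch of the dataset partition quoted in the "Dataset Splits" row: 2,000 unretouched inputs, 2,000 expert-C retouched references, and 1,000 RAW test inputs drawn from the 5,000 MIT-Adobe FiveK images. The paper does not specify which images fall into which subset, so the random shuffle, the fixed seed, and the helper name split_fivek below are assumptions for illustration, not the authors' code.

import random

def split_fivek(image_ids, seed=0):
    """Partition FiveK image ids into the three subsets described in the paper."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)     # assumed: the paper does not state how images were assigned
    unretouched_train = ids[:2000]       # input unretouched images for training
    retouched_expert_c = ids[2000:4000]  # retouched references by expert C (unpaired)
    raw_test = ids[4000:5000]            # input RAW images held out for testing
    return unretouched_train, retouched_expert_c, raw_test

train_inputs, expert_c_refs, test_inputs = split_fivek(range(5000))
print(len(train_inputs), len(expert_c_refs), len(test_inputs))  # 2000 2000 1000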
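
The "Experiment Setup" row lists learning rates, exponential decay schedules, batch sizes, Ornstein-Uhlenbeck exploration, and γ = 0.99. Below is a minimal sketch of those hyper-parameters, assuming a TensorFlow 2 / Keras-style API; the optimizer type (Adam), the total number of training steps, and the OU parameters theta and sigma are not reported in the paper and are assumptions here.

import numpy as np
import tensorflow as tf

TOTAL_STEPS = 100_000  # assumed; the paper does not report the number of training steps
GAMMA = 0.99           # discount factor reported in the paper
BATCH_SIZE = 8         # mini-batch size for both adversarial and policy updates

def decayed_lr(initial_lr):
    # Exponential decay of the learning rate down to 10^-3 of its initial value
    # by the end of training, matching the schedule described in the quote.
    return tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=initial_lr,
        decay_steps=TOTAL_STEPS,
        decay_rate=1e-3,
    )

# Optimizer choice is an assumption; only the learning rates come from the paper.
discriminator_opt = tf.keras.optimizers.Adam(decayed_lr(5e-5))  # discriminator network
policy_opt = tf.keras.optimizers.Adam(decayed_lr(1.5e-5))       # policy network
value_opt = tf.keras.optimizers.Adam(decayed_lr(5e-4))          # value network (DeepExposure I)

class OUNoise:
    """Ornstein-Uhlenbeck process used as exploration noise on the policy's actions."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2):  # theta/sigma are assumed values
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(dim, mu, dtype=np.float32)

    def sample(self):
        # x_{t+1} = x_t + theta * (mu - x_t) + sigma * N(0, 1)
        self.state = self.state + self.theta * (self.mu - self.state) \
                     + self.sigma * np.random.randn(*self.state.shape).astype(np.float32)
        return self.state

During training, exploration would add ou.sample() to the policy's predicted exposure action at each step, which is one common way the Ornstein-Uhlenbeck process is used for continuous-action exploration.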