A Large-Scale Film Style Dataset for Learning Multi-frequency Driven Film Enhancement

Authors: Zinuo Li, Xuhang Chen, Shuqiang Wang, Chi-Man Pun

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type Experimental Experiments reveal that the performance of our model is superior to state-of-the-art techniques.
Researcher Affiliation Academia 1) University of Macau; 2) Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Pseudocode No The paper describes the proposed method in detail and provides a network architecture diagram (Figure 3), but it does not include any pseudocode or algorithm blocks.
Open Source Code Yes The link of code and data is https://github.com/CXH-Research/FilmNet.
Open Datasets Yes In this section, three datasets are used for training and evaluation in total: MIT-Adobe FiveK [Bychkovsky et al., 2011], HDR+ [Hasinoff et al., 2016] and our Film Set.
Dataset Splits No It is configured with 4657 training samples and 638 testing samples. For easier training and validation, all images are transformed to 512×512 resolution and standard PNG format. For FiveK and HDR+, we use the same dataset configuration as [Zeng et al., 2020] and transform all images to the more common 480p resolution and standard PNG format.
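Since the paper reports the split sizes (4657 training, 638 testing) but not how the split was produced, one way to fix a reproducible split is to rank filenames by a stable hash. This is a minimal stdlib-only sketch, not the authors' procedure; `deterministic_split` and the placeholder filenames are hypothetical.

```python
import hashlib

def deterministic_split(filenames, n_test=638):
    """Assign files to train/test by sorting on a stable content-independent hash.

    Hypothetical helper: the paper gives only the counts (4657 train / 638
    test), so this illustrates one reproducible way a split could be fixed.
    """
    ranked = sorted(filenames, key=lambda f: hashlib.sha256(f.encode()).hexdigest())
    test = set(ranked[:n_test])
    train = [f for f in filenames if f not in test]
    return train, sorted(test)

# Toy usage with placeholder names (5295 = 4657 + 638 images in Film Set).
names = [f"img_{i:04d}.png" for i in range(5295)]
train, test = deterministic_split(names)
print(len(train), len(test))  # 4657 638
```

Because the assignment depends only on the filename hash, rerunning the function on the same file list always reproduces the same split.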
Hardware Specification Yes The typical Adam optimizer with its default parameters is used to train our model on an NVIDIA RTX A6000.
Software Dependencies No Our implementation is based on PyTorch. The paper mentions PyTorch but does not specify a version number or other software dependencies with versions.
Experiment Setup Yes The batch size is set to 1 and the learning rate is set to 1e-4. Random cropping, horizontal flipping, and tweaks to brightness and saturation are used to enrich data.
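The reported augmentations (random cropping, horizontal flipping, brightness tweaks) can be sketched on a toy nested-list RGB image. This is a stdlib-only illustration, assuming the actual pipeline uses PyTorch/torchvision transforms; all function names here are hypothetical, and the crop size, flip probability, and brightness range are placeholders, not the paper's values.

```python
import random

def horizontal_flip(img):
    """Mirror each row of an H x W x 3 nested-list image."""
    return [row[::-1] for row in img]

def adjust_brightness(img, factor):
    """Scale every channel value, clamping to the 0-255 range."""
    return [[[min(255, int(c * factor)) for c in px] for px in row] for row in img]

def random_crop(img, size, rng):
    """Cut a size x size patch at a random top-left corner."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

def augment(img, rng):
    """One possible ordering: crop, maybe flip, then a brightness tweak."""
    out = random_crop(img, 2, rng)
    if rng.random() < 0.5:
        out = horizontal_flip(out)
    return adjust_brightness(out, rng.uniform(0.8, 1.2))

rng = random.Random(0)
toy = [[[r * 10 + c, 0, 0] for c in range(4)] for r in range(4)]  # 4x4 RGB image
print(augment(toy, rng))  # a 2x2 augmented patch
```

Seeding the `random.Random` instance makes each augmented sample reproducible, which matters when comparing runs under a batch size of 1.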