Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning Adaptive Lighting via Channel-Aware Guidance
Authors: Qirui Yang, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Bo Li, Huanjing Yue, Jingyu Yang
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four representative light-related tasks demonstrate that LALNet significantly outperforms state-of-the-art methods on benchmark tests and requires fewer computational resources. We conduct comprehensive experiments and demonstrate the state-of-the-art performance of our LALNet on four light-related tasks, as shown in Fig. 1. We conduct comprehensive breakdown ablations to evaluate the effects of our proposed framework. |
| Researcher Affiliation | Collaboration | 1 Tianjin University, Tianjin, China. 2 vivo Mobile Communication Co., Ltd, Hangzhou, China. Correspondence to: Peng-Tao Jiang <EMAIL>, Jingyu Yang <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using textual explanations, mathematical formulations, and block diagrams (Figure 3 for LALNet architecture). However, it does not include any explicitly labeled pseudocode or algorithm blocks with structured, code-like steps. |
| Open Source Code | No | We provide an online demo at LALNet. More results and visual comparisons are presented in our Appendix and LALNet. The paper mentions an "online demo" and refers to "LALNet" for more results, but there is no explicit statement or link indicating the release of the source code for the described methodology. |
| Open Datasets | Yes | We evaluate our method on four representative light-related tasks: exposure correction (SICE (Cai et al., 2018)), image retouching (HDR+ Burst Photography (Hasinoff et al., 2016)), low-light enhancement (LOL dataset (Wei et al., 2018)), and tone mapping (HDRI Haven (Yang et al., 2024)). We further evaluate the effectiveness of our model on the exposure correction (Afifi et al., 2021), HDR Survey (Fairchild, 2023), and UVTM (Cao et al., 2023) datasets. |
| Dataset Splits | Yes | Following the settings of (Huang et al., 2022a) for SICE, it contains 1000 training images and 24 test images. The HDR+ dataset is a staple for image retouching, especially in mobile photography. We utilize 675 image sets for training and 248 for testing. The LOL dataset (Wei et al., 2018) contains 500 image pairs in total, with 485 pairs used for training and 15 test images. The HDRI Haven dataset is a new benchmark for evaluating tone mapping (Su et al., 2021; Cao et al., 2023), which includes 570 HDR images of diverse scenes under various light conditions. We select 456 image sets for training and 114 for testing. The MSEC dataset (Afifi et al., 2021) provides images rendered with relative exposure values (EVs) ranging from -1.5 to +1.5, comprising 17,675 training images, 750 validation images, and 5,905 test images. |
| Hardware Specification | Yes | We implement our model with PyTorch on the NVIDIA L40S GPU platform. |
| Software Dependencies | No | We implement our model with PyTorch on the NVIDIA L40S GPU platform. The paper names PyTorch as the implementation framework but does not specify a version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The model is trained with the Adam optimizer (β1 = 0.9, β2 = 0.999) for 4 × 10^5 iterations. The learning rate is initially set to 1 × 10^−4. We utilize three objective losses to optimize our network, including reconstruction loss (L_Re and L_SSIM), perceptual loss (L_P), and high-frequency loss (L_HF). To summarize, the complete objective of our proposed model is combined as follows: L_total = α·L_Re + β·L_SSIM + γ·L_HF + η·L_P, where α, β, γ, and η are the corresponding weight coefficients. |
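The reported objective is a weighted sum of four loss terms. A minimal sketch of that combination is below; the function name and the weight values α, β, γ, η are illustrative assumptions, since the paper excerpt above does not report the actual coefficients:

```python
def total_loss(l_re, l_ssim, l_hf, l_p,
               alpha=1.0, beta=1.0, gamma=1.0, eta=0.1):
    """Weighted objective L_total = α·L_Re + β·L_SSIM + γ·L_HF + η·L_P.

    The default weights are placeholders, not values from the paper.
    Each argument is a scalar loss value already computed per batch.
    """
    return alpha * l_re + beta * l_ssim + gamma * l_hf + eta * l_p

# Example with dummy per-batch loss values:
print(round(total_loss(0.5, 0.2, 0.1, 0.3), 2))  # 0.83
```

With framework tensors in place of Python floats, the same weighted sum would be the quantity backpropagated at each of the 4 × 10^5 training iterations.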