Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption

Authors: Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, Tatsuya Harada

AAAI 2024

Reproducibility Variable: Result. LLM Response
Research Type: Experimental. "Furthermore, we present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation. Our code and proposed dataset are available at https://github.com/cuiziteng/Aleth-NeRF. Extensive experiments show that our Aleth-NeRF achieves satisfactory enhancement quality and multi-view consistency. Our evaluation metrics include PSNR (P), SSIM (S), and LPIPS (L)."
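Of the three reported metrics, PSNR has a closed form and is easy to verify; a minimal numpy sketch is below (SSIM and LPIPS are typically computed with `skimage.metrics.structural_similarity` and the `lpips` package, respectively, and are not re-implemented here). This is an illustrative helper, not code from the paper's repository.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, hence PSNR = 10 * log10(1 / 0.01) ≈ 20 dB.
target = np.zeros((8, 8, 3))
pred = target + 0.1
print(psnr(pred, target))  # ≈ 20.0
```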
Researcher Affiliation: Collaboration. Ziteng Cui (1,2), Lin Gu (3,1), Xiao Sun (2*), Xianzheng Ma (4), Yu Qiao (2), Tatsuya Harada (1,3). (1) The University of Tokyo, (2) Shanghai AI Laboratory, (3) RIKEN AIP, (4) University of Oxford.
Pseudocode: No. The paper describes the approach using mathematical equations and descriptive text but does not include any pseudocode or algorithm blocks.
Open Source Code: Yes. "Our code and proposed dataset are available at https://github.com/cuiziteng/Aleth-NeRF."
Open Datasets: Yes. "We contribute a challenging illumination multi-view dataset, with paired sRGB low-light & normal-light & over-exposure images; the dataset will also be public. Our code and proposed dataset are available at https://github.com/cuiziteng/Aleth-NeRF. In our proposed LOM dataset, we collected 5 scenes (buu, chair, sofa, bike, shrub) in the real world."
Dataset Splits: Yes. "For the dataset split, in each scene we choose 3-5 images as the testing set, 1 image as the validation set, and the other images as the training set; details of the training and evaluation view split are shown in Table 1."

Table 1: Details of the dataset split for LOM.

scene             buu  chair  sofa  bike  shrub
collected views    25     48    33    40     35
training views     22     43    29    36     30
evaluation views    3      5     4     4      5
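The counts in Table 1 can be materialized as a per-scene split. Note that training + evaluation views already sum to the collected views, so the single validation view is presumably drawn from the training pool; the actual image indices are not stated, so holding out the last views is an assumption of this sketch (the `split_scene` helper is hypothetical, not from the paper's code).

```python
# Per-scene view counts transcribed from Table 1 of the LOM dataset split.
SCENE_VIEWS = {"buu": 25, "chair": 48, "sofa": 33, "bike": 40, "shrub": 35}
EVAL_VIEWS = {"buu": 3, "chair": 5, "sofa": 4, "bike": 4, "shrub": 5}

def split_scene(scene):
    """Return (train, val, test) view-index lists matching Table 1's counts."""
    n, e = SCENE_VIEWS[scene], EVAL_VIEWS[scene]
    ids = list(range(n))
    test = ids[n - e:]       # assumption: last e views held out for evaluation
    train = ids[: n - e]     # remaining views for training
    val = train[-1:]         # assumption: the 1 validation view overlaps the training pool
    return train, val, test

train, val, test = split_scene("chair")
print(len(train), len(val), len(test))  # 43 1 5
```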
Hardware Specification: No. The paper does not specify hardware details such as the GPU model, CPU, or memory used for the experiments.
Software Dependencies: No. The paper mentions building the framework on "the open-source PyTorch toolbox NeRF-Factory", but it does not specify version numbers for PyTorch or any other libraries.
Experiment Setup: Yes. "We utilize the Adam optimizer with an initial learning rate of 5e-4 and employ a cosine learning rate decay strategy every 2500 iterations. The training batch size is set at 4096 for a total of 62500 iterations. The overall training loss is then represented as: L = L_mse^it + λ1 L_de + λ2 L_co + λ3 L_cc, where λ1, λ2 and λ3 are three non-negative parameters to balance the total loss weights, which we set to 1e-3, 1e-3 and 1e-8 respectively."
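The quoted hyperparameters can be sketched as follows. The quote does not fully specify the cosine schedule, so the piecewise update every 2500 steps is one common reading, and both helper functions below are hypothetical illustrations rather than the authors' implementation.

```python
import math

LR0 = 5e-4                    # initial Adam learning rate
DECAY_EVERY = 2500            # schedule update interval from the paper
TOTAL_ITERS = 62500           # total training iterations
LAMBDAS = (1e-3, 1e-3, 1e-8)  # λ1, λ2, λ3 for L_de, L_co, L_cc

def cosine_lr(step):
    """Cosine decay from LR0 to 0, held piecewise-constant between updates
    (assumption: 'decay every 2500 iterations' means stepped cosine annealing)."""
    t = (step // DECAY_EVERY) * DECAY_EVERY
    return 0.5 * LR0 * (1.0 + math.cos(math.pi * t / TOTAL_ITERS))

def total_loss(l_mse, l_de, l_co, l_cc):
    """Weighted total loss L = L_mse^it + λ1 L_de + λ2 L_co + λ3 L_cc."""
    l1, l2, l3 = LAMBDAS
    return l_mse + l1 * l_de + l2 * l_co + l3 * l_cc

print(cosine_lr(0))      # 5e-4 at the first iteration
print(cosine_lr(62500))  # decays to 0.0 at the final iteration
```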