Neural Lighting Simulation for Urban Scenes
Authors: Ava Pun, Gary Sun, Jingkang Wang, Yun Chen, Ze Yang, Sivabalan Manivasagam, Wei-Chiu Ma, Raquel Urtasun
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that LightSim generates more realistic relighting results than prior work. Importantly, training perception models on data generated by LightSim can significantly improve their performance. |
| Researcher Affiliation | Collaboration | Waabi¹, University of Toronto², University of Waterloo³, MIT⁴ |
| Pseudocode | No | The paper describes methods in text and uses diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | Our project page is available at https://waabi.ai/lightsim/. (This is a project page, not a direct link to the source code for the proposed method.) |
| Open Datasets | Yes | We evaluate our method primarily on the public real-world driving dataset PandaSet [87] ... To showcase generalizability, we also demonstrate our approach on ten dynamic scenes from the nuScenes [11] dataset. |
| Dataset Splits | Yes | Specifically, we train on 68 snippets collected in the city and evaluate on 35 snippets in a suburban area, since these two collections are independent and exposed to different lighting conditions. |
| Hardware Specification | Yes | In this project, we ran the experiments primarily on NVIDIA Tesla T4s provided by Amazon Web Services (AWS). For prototype development and small-scale experiments, we used local workstations with RTX A5000s. |
| Software Dependencies | No | We use the official repository for training and evaluating our model on PandaSet. ... Models were trained for five epochs using the AdamW optimizer [48], coupled with the cosine learning rate schedule. (No specific version numbers for software dependencies are provided.) |
| Experiment Setup | Yes | We adopt the BEVFormer-small architecture with a batch size of two per GPU. Models were trained for five epochs using the AdamW optimizer [48], coupled with the cosine learning rate schedule. (See the sketch after this table.) |
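
The Experiment Setup row names a concrete training recipe: the AdamW optimizer with a cosine learning-rate schedule, five epochs, and a batch size of two per GPU. Below is a minimal PyTorch sketch of that recipe, not the authors' code: the model and dataset are tiny stand-ins (the actual BEVFormer-small model and PandaSet snippets are not reproduced here), and the base learning rate is an assumed value, since the quoted text does not specify one.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the real BEVFormer-small model and PandaSet training snippets.
# Only the optimizer, schedule, epoch count, and per-GPU batch size below
# come from the quoted setup.
model = nn.Linear(16, 1)
dataset = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))

EPOCHS = 5         # "trained for five epochs"
BATCH_PER_GPU = 2  # "batch size of two per GPU"
BASE_LR = 2e-4     # assumed; the base learning rate is not quoted

loader = DataLoader(dataset, batch_size=BATCH_PER_GPU, shuffle=True)
optimizer = AdamW(model.parameters(), lr=BASE_LR)
# Cosine learning-rate schedule over the full run, stepped once per iteration.
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS * len(loader))

loss_fn = nn.MSELoss()
for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # decay the learning rate along the cosine curve
```

Setting `T_max` to the total number of iterations gives one smooth cosine decay over the whole run; stepping the scheduler once per epoch with `T_max=EPOCHS` would be the coarser alternative.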