READ: Large-Scale Neural Scene Rendering for Autonomous Driving

Authors: Zhuopeng Li, Lu Li, Jianke Zhu

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To fairly compare the qualitative and quantitative results of various methods, we conduct the experiments on Nvidia GeForce RTX 2080 GPU and evaluate our proposed approach on the two datasets for autonomous driving.
Researcher Affiliation | Collaboration | Zhuopeng Li (1), Lu Li (1), Jianke Zhu (1,2)*; (1) Zhejiang University, Zhejiang, China; (2) Alibaba-Zhejiang University Joint Institute of Frontier Technologies; {lizhuopeng, lu.lee, jkzhu}@zju.edu.cn
Pseudocode | No | The paper describes the proposed methods in detail but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about open-sourcing the code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | KITTI Dataset (Geiger, Lenz, and Urtasun 2012): KITTI is a large dataset of real driving scenarios... and Brno Urban Dataset (Ligocki, Jelinek, and Zalud 2020): Compared to KITTI's single-view trajectory, the Brno Urban Dataset contains four views...
Dataset Splits | No | We evaluated every 10 frames (e.g., frame 0, 10, 20...) by following the training and testing split of (Aliev et al. 2020; Rückert, Franke, and Stamminger 2022); the rest of the image frames are used for training. To demonstrate the effectiveness of our method, we conducted a more challenging experiment by discarding 5 test frames before and after every 100 frames as the new testing data; results are given in the supplementary materials. (See the split sketch below the table.)
Hardware Specification | Yes | We conduct the experiments on an Nvidia GeForce RTX 2080 GPU; our method synthesized large-scale driving scenarios within two days of training on a PC having two GPUs.
Software Dependencies | No | The paper mentions 'PyTorch version' in the context of Instant NGP, but does not provide specific version numbers for the software dependencies used in their own method (READ).
Experiment Setup | Yes | At each iteration, we sample ten target patches with a size of 256×256 on the KITTI Dataset. Due to the high resolution of images in the Brno Urban Dataset, patches of size 336×336 are used for training. In Monte Carlo sampling, we set the sampling ratio to 80%. (See the patch-sampling sketch below the table.)
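
For concreteness, the split protocol quoted in the Dataset Splits row can be sketched as below. This is a minimal reading rather than the authors' code: indexing frames from 0 and interpreting the harder split as holding out frames within 5 frames of every 100th frame are assumptions, since the quoted text does not pin them down.

```python
# Minimal sketch of the two frame-level splits quoted in the Dataset Splits row.
# Frame indexing from 0 and the "within 5 frames of every 100th frame" reading
# of the harder protocol are assumptions, not statements from the paper.

def standard_split(num_frames, step=10):
    """Every `step`-th frame (0, 10, 20, ...) is a test frame; the rest train."""
    test = [i for i in range(num_frames) if i % step == 0]
    train = [i for i in range(num_frames) if i % step != 0]
    return train, test

def harder_split(num_frames, block=100, margin=5):
    """One reading of the harder protocol: frames within `margin` frames
    before or after every `block`-th frame are held out as test data."""
    centers = range(0, num_frames, block)
    test = [i for i in range(num_frames)
            if any(abs(i - c) <= margin for c in centers)]
    test_set = set(test)
    train = [i for i in range(num_frames) if i not in test_set]
    return train, test
```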
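
The per-iteration patch sampling quoted in the Experiment Setup row might look roughly like the following. The patch count, patch sizes, and 80% ratio come from the row itself; the (C, H, W) tensor layout and applying the Monte Carlo ratio as a random per-pixel mask are assumptions for illustration, not the paper's implementation.

```python
import torch

def sample_patches(image, num_patches=10, patch_size=256, mc_ratio=0.8):
    """Crop `num_patches` random square patches from an image tensor of shape
    (C, H, W) and draw a per-pixel Monte Carlo mask that keeps roughly
    `mc_ratio` of the pixels in each patch."""
    _, h, w = image.shape
    patches, masks = [], []
    for _ in range(num_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patches.append(image[:, top:top + patch_size, left:left + patch_size])
        masks.append(torch.rand(patch_size, patch_size) < mc_ratio)
    return torch.stack(patches), torch.stack(masks)

# On KITTI the paper samples ten 256x256 patches per iteration; for the
# higher-resolution Brno Urban Dataset it uses 336x336 patches (patch_size=336).
```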