Unsupervised Adaptation from Repeated Traversals for Autonomous Driving

Authors: Yurong You, Cheng Perng Phoo, Katie Luo, Travis Zhang, Wei-Lun Chao, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We experiment with our approach on two large-scale driving datasets and show remarkable improvement in 3D object detection of cars, pedestrians, and cyclists, bringing us a step closer to generalizable autonomous driving. |
| Researcher Affiliation | Academia | ¹Cornell University, Ithaca, NY; ²The Ohio State University, Columbus, OH |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/YurongYou/Rote-DA. |
| Open Datasets | Yes | We validate our approach on a single source dataset, the KITTI dataset [8], and two target datasets: the Lyft Level 5 Perception dataset [11] and the Ithaca-365 dataset [5]. |
| Dataset Splits | No | The paper specifies train/test splits (e.g., "This results in a train/test split of 11,873/4,901 point clouds for the Lyft dataset.") but does not explicitly mention a validation split. |
| Hardware Specification | Yes | All models are trained/fine-tuned with 4 GPUs (NVIDIA 2080Ti/3090/A6000). |
| Software Dependencies | No | The paper mentions "We use the default implementation/configuration of PointRCNN [26] from OpenPCDet [19]" but does not specify version numbers for OpenPCDet or any other software dependencies. |
| Experiment Setup | Yes | For fine-tuning, we fine-tune the model for 10 epochs with learning rate 1.5 × 10⁻³ (pseudo-labels are regenerated and refined after each epoch). In our experiments, we set α_FB-F = 20 and γ_FB-F = 0.5 (we find the results are not sensitive to these values). We use 5 traversals to compute the PP-score for each scene. |
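The reported setup implies a self-training schedule in which pseudo-labels are regenerated and refined after every fine-tuning epoch. A minimal sketch of that loop structure follows; all function names and the model-state representation here are illustrative stand-ins (not the authors' actual Rote-DA / OpenPCDet code), and only the schedule constants (10 epochs, learning rate 1.5e-3, 5 traversals per scene) come from the paper.

```python
# Hedged sketch of the self-training schedule described in the paper's
# experiment setup: fine-tune for 10 epochs at lr 1.5e-3, regenerating
# and refining pseudo-labels after each epoch, with 5 repeated traversals
# per scene used to compute the PP-score. All helpers are hypothetical.

EPOCHS = 10          # from the paper
LEARNING_RATE = 1.5e-3  # from the paper

def generate_pseudo_labels(model_state, num_traversals=5):
    """Stand-in for pseudo-label generation and PP-score refinement.

    The real method aggregates point clouds from `num_traversals`
    repeated traversals of each scene; here we only record metadata.
    """
    return {"epoch": model_state["epochs_seen"], "traversals": num_traversals}

def fine_tune_one_epoch(model_state, labels, lr):
    """Stand-in for one epoch of 3D-detector fine-tuning."""
    model_state["epochs_seen"] += 1
    model_state["lr"] = lr
    return model_state

def self_training(epochs=EPOCHS, lr=LEARNING_RATE):
    """Alternate pseudo-label regeneration and fine-tuning each epoch."""
    model_state = {"epochs_seen": 0, "lr": None}
    label_history = []
    for _ in range(epochs):
        labels = generate_pseudo_labels(model_state)  # refreshed every epoch
        model_state = fine_tune_one_epoch(model_state, labels, lr)
        label_history.append(labels["epoch"])
    return model_state, label_history

model_state, label_history = self_training()
print(model_state["epochs_seen"])  # 10
```

The key design point reflected here is the ordering: pseudo-labels are produced from the current model state before each epoch, so every round of fine-tuning trains on labels refined with the latest detector.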