RobustLoc: Robust Camera Pose Regression in Challenging Driving Environments
Authors: Sijie Wang, Qiyu Kang, Rui She, Wee Peng Tay, Andreas Hartmannsgruber, Diego Navarro Navarro
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that RobustLoc surpasses current state-of-the-art camera pose regression models and achieves robust performance in various environments. |
| Researcher Affiliation | Collaboration | Sijie Wang¹*, Qiyu Kang¹*, Rui She¹*, Wee Peng Tay¹, Andreas Hartmannsgruber², Diego Navarro Navarro². ¹ Continental-NTU Corporate Lab, Nanyang Technological University; ² Continental Automotive Singapore. {wang1679@e.; qiyu.kang@; rui.she@; wptay@}ntu.edu.sg, {andreas.hartmannsgruber; diego.navarro.navarro}@continental.com |
| Pseudocode | No | The paper does not contain any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | Yes | Our code is released at: https://github.com/sijieaaa/RobustLoc |
| Open Datasets | Yes | Oxford RobotCar. The Oxford RobotCar dataset (Maddern et al. 2017) is a large autonomous driving dataset collected by a car driving along a route in Oxford, UK. ... The 4Seasons dataset (Wenzel et al. 2020) is a comprehensive dataset for autonomous driving SLAM. |
| Dataset Splits | No | The paper mentions training, but does not provide specific details on the train/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | All of the experiments are conducted on an NVIDIA A5000. |
| Software Dependencies | No | The paper mentions using ResNet34 and Adam optimizer, but does not specify version numbers for general software dependencies or libraries (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | We set the maximum number of input images as 11. We resize the shorter side of each input image to 128 and set the batch size to 64. The Adam optimizer with a learning rate of 2 × 10⁻⁴ and weight decay of 5 × 10⁻⁴ is used to train the network. Data augmentation techniques include random cropping and color jittering. We set the integration times t0 = 0, t1 = 1, and t2 = 2. The number of attention heads is 8. We train our network for 300 epochs. |
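The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. The values are taken verbatim from the paper's description; the dictionary keys and the commented optimizer call are hypothetical and do not come from the released code:

```python
# Hypothetical training configuration assembled from the paper's quoted setup.
# Key names are illustrative; values are as stated in the paper.
TRAIN_CONFIG = {
    "max_input_images": 11,        # maximum number of images per input sequence
    "image_short_side": 128,       # shorter side of each image after resizing
    "batch_size": 64,
    "optimizer": "Adam",
    "learning_rate": 2e-4,         # 2 x 10^-4
    "weight_decay": 5e-4,          # 5 x 10^-4
    "augmentations": ["random_crop", "color_jitter"],
    "integration_times": (0.0, 1.0, 2.0),  # t0, t1, t2
    "attention_heads": 8,
    "epochs": 300,
}

# In PyTorch, the optimizer settings would map onto torch.optim.Adam, e.g.:
# optimizer = torch.optim.Adam(
#     model.parameters(),
#     lr=TRAIN_CONFIG["learning_rate"],
#     weight_decay=TRAIN_CONFIG["weight_decay"],
# )
```

Note that the paper does not report version numbers for the deep learning framework, so the PyTorch mapping above is an assumption based on the mention of ResNet34 and Adam.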