CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection
Authors: Gyusam Chang, Wonseok Roh, Sujin Jang, Dongwook Lee, Daehyun Ji, Gyeongrok Oh, Jinsun Park, Jinkyu Kim, Sangpil Kim
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our extensive experiments with large-scale benchmarks, such as nuScenes, Waymo, and KITTI, those mentioned above provide significant performance gains for UDA tasks, achieving state-of-the-art performance. |
| Researcher Affiliation | Collaboration | (1) Department of Artificial Intelligence, Korea University, Republic of Korea; (2) Samsung Advanced Institute of Technology (SAIT), Republic of Korea; (3) School of Computer Science and Engineering, Pusan National University, Republic of Korea; (4) Department of Computer Science and Engineering, Korea University, Republic of Korea |
| Pseudocode | Yes | Algorithm 1: Overview of our framework CMDA. |
| Open Source Code | No | The paper does not include an explicit statement or a direct link to the open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate overall performance on landmark datasets for the 3D object detection task: nuScenes (Caesar et al. 2020), Waymo (Sun et al. 2020), and KITTI (Geiger, Lenz, and Urtasun 2012). |
| Dataset Splits | No | The paper specifies using the nuScenes, Waymo, and KITTI datasets and mentions source/target domain pairs, but does not explicitly provide percentages or counts for training, validation, and test splits within the text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper discusses the overall framework and loss functions, but it does not specify concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations within the main text. |
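Since no code is released for the paper, the sketch below illustrates the generic gradient-reversal mechanism behind the "domain adversarial adaptation" named in the title, i.e., a discriminator trained to tell source from target features while reversed gradients push the feature extractor toward domain-invariant representations. The feature dimension, the discriminator MLP, and the `lambda_` value are illustrative assumptions, not the authors' implementation.

```python
# Minimal, generic sketch of a gradient reversal layer (GRL) plus a
# domain-adversarial loss. NOT the CMDA authors' code; dimensions and
# architecture are hypothetical.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient so the feature extractor is trained to FOOL
        # the domain discriminator.
        return -ctx.lambda_ * grad_output, None


class DomainDiscriminator(nn.Module):
    """Small MLP predicting whether a feature vector is source or target."""

    def __init__(self, feat_dim: int = 256):  # 256 is an assumed feature size
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, feats: torch.Tensor, lambda_: float = 1.0) -> torch.Tensor:
        reversed_feats = GradientReversal.apply(feats, lambda_)
        return self.net(reversed_feats)


if __name__ == "__main__":
    # Hypothetical pooled BEV features: a batch of 4 source + 4 target samples.
    disc = DomainDiscriminator(feat_dim=256)
    src_feats = torch.randn(4, 256, requires_grad=True)
    tgt_feats = torch.randn(4, 256, requires_grad=True)
    logits = disc(torch.cat([src_feats, tgt_feats]), lambda_=0.1)
    labels = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])  # 0=source, 1=target
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()  # GRL flips gradients flowing back into the features
```

In a full detector this adversarial term would be added to the supervised detection loss, with `lambda_` typically ramped up over training; those schedule details are exactly the kind of experiment-setup specifics the table above flags as absent from the paper.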