PPEA-Depth: Progressive Parameter-Efficient Adaptation for Self-Supervised Monocular Depth Estimation
Authors: Yue-Jiang Dong, Yuan-Chen Guo, Ying-Tian Liu, Fang-Lue Zhang, Song-Hai Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that PPEA-Depth achieves state-of-the-art performance on the KITTI, Cityscapes, and DDAD datasets. |
| Researcher Affiliation | Academia | BNRist, Department of Computer Science and Technology, Tsinghua University, China; Victoria University of Wellington, New Zealand |
| Pseudocode | No | The paper describes the architecture and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a 'Project homepage: https://yuejiangdong.github.io/PPEADepth/'. However, this is described only as a homepage, not explicitly as a link to a source-code repository for the described method, and the paper does not state that code is provided in supplementary material. |
| Open Datasets | Yes | The KITTI dataset (Geiger, Lenz, and Urtasun 2012), Cityscapes dataset (Cordts et al. 2016), and DDAD (Guizilini et al. 2020a) are cited and used for training, indicating public availability. |
| Dataset Splits | Yes | We adhere to the established training protocols (Eigen, Puhrsch, and Fergus 2014) and utilize the data pre-processing approach introduced by (Zhou et al. 2017), yielding 39,810 monocular triplets for training, 4,424 for validation, and 697 for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | We consistently select RepLKNet-B as the depth encoder and set the adapter bottleneck ratio to 0.25. ... all adapter weights are initialized to zero to ensure stable training. |
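
The Experiment Setup row above quotes the adapter configuration (bottleneck ratio 0.25, zero-initialized adapter weights). A minimal PyTorch sketch of such a bottleneck adapter is shown below; the module name, the choice of GELU, and zero-initializing only the up-projection (so the adapter starts as an identity residual) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter (not the authors' implementation).

    Down-projects features by a bottleneck ratio, applies a nonlinearity,
    up-projects back, and adds the result residually to the frozen
    encoder features.
    """

    def __init__(self, channels: int, bottleneck_ratio: float = 0.25):
        super().__init__()
        hidden = max(1, int(channels * bottleneck_ratio))
        self.down = nn.Linear(channels, hidden)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden, channels)
        # Zero-init so the adapter initially contributes nothing; one common
        # way to realize the quoted "adapter weights are initialized to zero".
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., channels) feature tensor from a frozen encoder stage.
        return x + self.up(self.act(self.down(x)))


# Usage sketch: adapt 512-channel features with the quoted 0.25 bottleneck ratio.
adapter = BottleneckAdapter(channels=512, bottleneck_ratio=0.25)
features = torch.randn(4, 196, 512)
adapted = adapter(features)  # same shape, and identical to the input at initialization
```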