Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Zero-shot Depth Completion via Test-time Alignment with Affine-invariant Depth Prior
Authors: Lee Hyoseok, Kyeong Seon Kim, Kwon Byung-Ki, Tae-Hyun Oh
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments In this section, we demonstrate the effectiveness of our prior-based depth completion method in indoor (NYUv2 (Silberman et al. 2012), SceneNet (McCormac et al. 2017), VOID (Wong et al. 2020)) and outdoor (Waymo (Sun et al. 2020), nuScenes (Caesar et al. 2020), KITTI DC (Uhrig et al. 2017)) scenarios, through both quantitative and qualitative evaluations. For evaluation, we use the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), both standard metrics in depth completion where lower values indicate better performance. |
| Researcher Affiliation | Academia | Lee Hyoseok1, Kyeong Seon Kim2, Kwon Byung-Ki1, Tae-Hyun Oh1,2,3 1Grad. School of Artificial Intelligence, POSTECH 2Dept. of Electrical Engineering, POSTECH 3Institute for Convergence Research and Education in Advanced Technology, Yonsei University EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Prior-based outlier filtering algorithm. 1: Parameters: Number of segments N, filter threshold τ 2: Input: Estimated relative depth Dr, sparse metric depth y, set of sparse point locations Ω(y). 3: Output: Set of reliable sparse point locations Ω(y′). 4: {Ω(S_i)}_{i=1}^{N} ← SuperPixel(Dr, N) 5: for i = 1 to N do 6: Ω(y_i) ← Ω(y) ∩ Ω(S_i) 7: ŷ_i ← RANSACRegressor(1_{Ω(y_i)} ⊙ Dr, y_i) 8: Ω(y′_i) ← |ŷ_i − y_i| > τ 9: Ω(y′) ← ⋃_{i=1}^{N} Ω(y′_i) |
| Open Source Code | Yes | Code: https://github.com/postech-ami/Zero-Shot-Depth-Completion Project page: https://hyoseok1223.github.io/zero-shot-depth-completion/ |
| Open Datasets | Yes | 4 Experiments In this section, we demonstrate the effectiveness of our prior-based depth completion method in indoor (NYUv2 (Silberman et al. 2012), SceneNet (McCormac et al. 2017), VOID (Wong et al. 2020)) and outdoor (Waymo (Sun et al. 2020), nuScenes (Caesar et al. 2020), KITTI DC (Uhrig et al. 2017)) scenarios, through both quantitative and qualitative evaluations. |
| Dataset Splits | Yes | We compare our zero-shot depth completion method with unsupervised methods (Wong and Soatto 2021; Ma, Cavalheiro, and Karaman 2019; Wong, Cicek, and Soatto 2021) trained on the split training dataset of each benchmark, i.e., in-domain training. |
| Hardware Specification | No | No specific hardware details (such as GPU or CPU models, memory, or processing units) used for conducting the experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies, such as programming language versions or library versions, are provided in the paper. |
| Experiment Setup | No | The paper mentions varying sampling steps (50, 2, 1) for the base models and refers to a 'constant C' for the Lr_ssim and 'Filter threshold τ' for Algorithm 1, but does not provide specific values for key experimental setup details such as learning rate, batch size, optimizer, or the number of epochs for the optimization process. |
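The Algorithm 1 excerpt above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the superpixel labels are passed in directly as a `seg` array (a stand-in for the paper's SuperPixel step), and a tiny hand-rolled RANSAC line fit replaces the paper's RANSACRegressor. It returns the reliable (inlier) point mask directly rather than the outlier set, and all names (`filter_outliers`, `tau`, `seg`) are illustrative assumptions.

```python
import numpy as np

def filter_outliers(d_rel, y, seg, tau, seed=0):
    """Sketch of Algorithm 1: per-segment robust affine fit of relative depth
    d_rel to sparse metric depth y, keeping points whose residual is <= tau.
    `seg` stands in for superpixel labels; a minimal 2-point RANSAC stands in
    for the paper's RANSACRegressor. Returns a boolean mask of reliable points."""
    rng = np.random.default_rng(seed)
    reliable = np.zeros(len(y), dtype=bool)
    for s in np.unique(seg):
        idx = np.where(seg == s)[0]          # sparse points inside segment s
        x, t = d_rel[idx], y[idx]
        best_inliers = np.zeros(len(idx), dtype=bool)
        for _ in range(50):                  # RANSAC iterations (illustrative)
            pair = rng.choice(len(idx), size=2, replace=False)
            if x[pair[0]] == x[pair[1]]:
                continue
            # affine model t ~ a * x + b from the sampled pair
            a = (t[pair[1]] - t[pair[0]]) / (x[pair[1]] - x[pair[0]])
            b = t[pair[0]] - a * x[pair[0]]
            inliers = np.abs(a * x + b - t) <= tau
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        reliable[idx[best_inliers]] = True
    return reliable

# Toy example: one segment, metric depth y = 2*d_rel + 1, one corrupted point.
d_rel = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * d_rel + 1.0
y[2] += 5.0                                  # inject an outlier at index 2
mask = filter_outliers(d_rel, y, np.zeros(5, dtype=int), tau=0.1)
print(mask)                                  # index 2 flagged as unreliable
```

In the paper the segment-wise fit comes from superpixels over the relative depth map; here the toy example uses a single segment purely to show the filtering logic.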