Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Always Clear Depth: Robust Monocular Depth Estimation Under Adverse Weather
Authors: Kui Jiang, Jing Cao, Zhaocheng Yu, Junjun Jiang, Jingchun Zhou
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our ACDepth surpasses md4all-DD by 2.50% for night scenes and 2.61% for rainy scenes on the nuScenes dataset in terms of the absRel metric. |
| Researcher Affiliation | Academia | ¹Harbin Institute of Technology, ²Dalian Maritime University, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical equations and textual descriptions, but does not include any clearly labeled pseudocode blocks or algorithms. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | In this study, the commonly used nuScenes [Caesar et al., 2020] and RobotCar [Maddern et al., 2017] datasets are used for training and comparison. |
| Dataset Splits | Yes | Following [Gasperini et al., 2023], we adopt 15,129 generated samples (day-clear, day-rain, night) for training and 6,019 samples (including 4,449 day-clear, 1,088 rain, and 602 night) for testing. RobotCar is a large outdoor dataset collected in Oxford, UK. Following [Gasperini et al., 2023], we adopt 16,563 generated samples (day, night) for training and 1,411 samples (including 702 day and 709 night) for testing. |
| Hardware Specification | Yes | All experiments are conducted on the same ResNet18 architecture [He et al., 2016]. We train the student model and teacher model on a single NVIDIA 3090 GPU with a batch size of 16, using the Adam optimizer. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and a ResNet18 architecture, but it does not specify any software names with version numbers (e.g., specific deep learning frameworks like PyTorch or TensorFlow, or their versions). |
| Experiment Setup | Yes | We train the student model and teacher model on a single NVIDIA 3090 GPU with a batch size of 16, using the Adam optimizer. We set the initial learning rate to 5e-4, reducing it by a factor of 0.1 every 15 epochs. The student model is trained for 25 epochs. Following the experimental protocol of [Gasperini et al., 2023], we maintain identical hyperparameter settings for self-supervised learning. Through experimental validation of different parameter combinations, the weights for the loss functions are set to λ1 = 0.01, λ2 = 0.02. |
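The experiment-setup row above can be summarized as a small configuration sketch. The paper does not name a software framework, so this is a framework-agnostic, hypothetical rendering of the reported hyperparameters: a config dict plus the step-decay learning-rate rule (5e-4, reduced by 0.1 every 15 epochs, over 25 epochs).

```python
# Hedged sketch of the reported experiment setup; all values are taken
# from the quoted text, but the dict keys and function are illustrative.
config = {
    "backbone": "ResNet18",
    "gpu": "NVIDIA 3090",
    "batch_size": 16,
    "optimizer": "Adam",
    "initial_lr": 5e-4,
    "lr_decay_factor": 0.1,
    "lr_decay_every": 15,   # epochs between learning-rate drops
    "epochs": 25,
    "lambda1": 0.01,        # loss weight λ1
    "lambda2": 0.02,        # loss weight λ2
}

def lr_at_epoch(epoch, cfg=config):
    """Step decay: lr = initial_lr * factor ** (epoch // interval)."""
    return cfg["initial_lr"] * cfg["lr_decay_factor"] ** (epoch // cfg["lr_decay_every"])

# Epochs 0-14 train at 5e-4; epochs 15-24 at roughly 5e-5.
print(lr_at_epoch(0), lr_at_epoch(15))
```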