Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
PromptHaze: Prompting Real-world Dehazing via Depth Anything Model
Authors: Tian Ye, Sixiang Chen, Haoyu Chen, Wenhao Chai, Jingjing Ren, Zhaohu Xing, Wenxue Li, Lei Zhu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on widely-used real-world dehazing benchmarks demonstrate the superiority of PromptHaze in recovering authentic backgrounds and fine details from various haze scenes, outperforming state-of-the-art methods across multiple quality metrics. We compare the performance of our proposed PromptHaze against several state-of-the-art dehazing methods, including classic image dehazing networks... Our experimental setup is designed from both quantitative and qualitative perspectives. Quantitative Comparison: we perform a quantitative comparison on publicly available real-world dehazing datasets using non-reference metrics. Ablation Study: in the ablation study (Table 2), we demonstrate the impact of different configurations on image quality assessment metrics on the RTTS dataset. |
| Researcher Affiliation | Academia | 1 The Hong Kong University of Science and Technology (Guangzhou); 2 University of Washington; 3 The Hong Kong University of Science and Technology |
| Pseudocode | No | The paper describes the method's steps through textual explanations and figures (Figure 3 and 4), but it does not include a distinct, structured pseudocode block or algorithm listing. |
| Open Source Code | No | No explicit statement about the release of source code or a link to a code repository is provided in the paper. |
| Open Datasets | Yes | We qualitatively and quantitatively evaluate our PromptHaze method on the RTTS dataset (Li et al. 2018), which comprises over 4,000 real hazy images with diverse scenes, times, resolutions, and degradations. Additionally, we also conduct some visual comparisons using Fattal's dataset (Fattal 2014), consisting of 31 classic real hazy cases. We utilize the same clean image dataset as RIDCP (Wu et al. 2023), which comprises 500 clean images paired with depth maps. |
| Dataset Splits | No | The paper mentions using the RTTS dataset for quantitative comparison, qualitative comparison, and a user study (selecting 80 images). It also uses an online haze data generation pipeline from 500 clean images for training. However, it does not specify explicit training, validation, or test splits for the RTTS dataset, nor does it detail the splits for the generated data beyond the training context. |
| Hardware Specification | Yes | We implement our PromptHaze using the PyTorch framework, harnessing four NVIDIA RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions implementing PromptHaze using the PyTorch framework but does not specify a version number for PyTorch or any other libraries or solvers used, which is required for a reproducible software description. |
| Experiment Setup | Yes | We utilize the AdamW optimizer with beta values set to 0.9 and 0.999. The batch size is set to 7, and the initial learning rate is set at 2×10⁻⁴, employing a cosine annealing strategy for gradual learning rate reduction. Data augmentation techniques, including horizontal flipping, random resizing and cropping, and random image rotation at 45° and 90°, are applied during training. Each paired data is cropped to a size of 256×256. We set the λ_cr and the λ_reg to 0.5 and 0.2, respectively. The proposed PromptHaze is trained with our Online Haze Data Generation Pipeline for 15K iterations. |
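The cosine annealing strategy named in the Experiment Setup row can be illustrated with a minimal sketch. This is not the authors' code; it assumes the standard cosine annealing formula (as implemented by schedulers such as PyTorch's `CosineAnnealingLR`) applied to the reported initial learning rate of 2×10⁻⁴ over the reported 15K training iterations. The function name and `lr_min` default are assumptions for illustration.

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=2e-4, lr_min=0.0):
    """Standard cosine annealing schedule: decays the learning rate
    from lr_max at step 0 to lr_min at step total_steps.

    lr_max=2e-4 and total_steps=15_000 match the values reported in
    the paper's experiment setup; lr_min=0.0 is an assumption.
    """
    progress = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# Learning rate at the start, midpoint, and end of the 15K-iteration run.
print(cosine_annealing_lr(0, 15_000))       # initial lr: 2e-4
print(cosine_annealing_lr(7_500, 15_000))   # midpoint: 1e-4
print(cosine_annealing_lr(15_000, 15_000))  # final lr: ~0
```

At the midpoint the cosine term is zero, so the learning rate sits exactly halfway between `lr_max` and `lr_min`, which is the "gradual reduction" behavior the paper describes.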