FFA-Net: Feature Fusion Attention Network for Single Image Dehazing
Authors: Xu Qin, Zhilin Wang, Yuanchao Bai, Xiaodong Xie, Huizhu Jia
AAAI 2020, pp. 11908–11915 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results demonstrate that our proposed FFA-Net surpasses previous state-of-the-art single image dehazing methods by a very large margin both quantitatively and qualitatively, boosting the best published PSNR metric from 30.23 dB to 36.39 dB on the SOTS indoor test dataset. |
| Researcher Affiliation | Academia | ¹School of Electronics Engineering and Computer Science, Peking University; ²School of Computer Science and Engineering, Beihang University |
| Pseudocode | No | The paper includes a network architecture diagram (Fig. 2) and descriptive text about the model components, but it does not provide any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code has been made available at GitHub. |
| Open Datasets | Yes | (Li et al. 2018) proposed an image dehazing benchmark, RESIDE, which contains synthetic hazy images in both indoor and outdoor scenarios drawn from a depth dataset (NYU Depth V2 (Silberman et al. 2012)) and stereo datasets (Middlebury Stereo datasets (Scharstein and Szeliski 2003)). The Indoor Training Set of RESIDE contains 1,399 clean images and 13,990 hazy images generated from the corresponding clean images. |
| Dataset Splits | No | The paper mentions a 'training dataset' and a 'testing set' (SOTS) but does not explicitly describe a separate validation dataset split or its size/composition. |
| Hardware Specification | Yes | PyTorch (Paszke et al. 2017) was used to implement our models with an RTX 2080Ti GPU. |
| Software Dependencies | No | PyTorch (Paszke et al. 2017) was used to implement our models. The paper mentions PyTorch but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | The number of Group Structures G is 3. In each Group Structure, we set the Basic Block Structure number as B = 19. Except for the Channel Attention module, whose kernel size is 1×1, we set all convolution layers' filter size to 3×3. All feature maps keep their size fixed except in the Channel Attention module. Every Group Structure outputs 64 filters. ... The whole network is trained for 5×10⁵ steps. We use the Adam optimizer, where β1 and β2 take the default values of 0.9 and 0.999, respectively. The initial learning rate is set to 1×10⁻⁴, and we adopt the cosine annealing strategy (He et al. 2019) to adjust the learning rate from the initial value to 0 by following the cosine function. |
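The cosine-annealing schedule quoted in the Experiment Setup row can be sketched in a few lines. This is an illustrative reconstruction under the hyperparameters the paper states (initial LR 1×10⁻⁴, decayed to 0 over 5×10⁵ steps); the function name is ours, not the authors', and it stands in for PyTorch's equivalent `CosineAnnealingLR` scheduler.

```python
import math

# Hyperparameters as reported in the paper's experiment setup.
TOTAL_STEPS = 500_000   # 5 x 10^5 training steps
LR_INIT = 1e-4          # initial learning rate for Adam (beta1=0.9, beta2=0.999)

def cosine_annealed_lr(step: int) -> float:
    """Learning rate at `step`, following a cosine curve from LR_INIT down to 0.

    Illustrative helper (not from the paper's code); equivalent in shape to
    torch.optim.lr_scheduler.CosineAnnealingLR with eta_min=0.
    """
    return 0.5 * LR_INIT * (1.0 + math.cos(math.pi * step / TOTAL_STEPS))

print(cosine_annealed_lr(0))            # LR_INIT at the start
print(cosine_annealed_lr(TOTAL_STEPS))  # 0 at the end
```

At the halfway point (step 2.5×10⁵) the schedule yields exactly half the initial rate, which is the defining property of a cosine decay to zero.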