FoSp: Focus and Separation Network for Early Smoke Segmentation

Authors: Lujian Yao, Haitao Zhao, Jingchao Peng, Zhongze Wang, Kaijie Zhao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our model achieves the best performance on three available smoke segmentation datasets: SYN70K (mIoU: 83.00%), SMOKE5K (Fβ: 81.6%) and SmokeSeg (Fβ: 72.05%)."
Researcher Affiliation | Academia | "Lujian Yao, Haitao Zhao*, Jingchao Peng, Zhongze Wang, Kaijie Zhao; East China University of Science and Technology; {lujianyao, zzwang, kjzhao}@mail.ecust.edu.cn, haitaozhao@ecust.edu.cn, starry-sky@outlook.com"
Pseudocode | No | The paper describes the architecture and modules with text and diagrams (Fig. 2, 3, 4) but does not provide pseudocode or an algorithm block.
Open Source Code | Yes | "The code can be found at https://github.com/LujianYao/FoSp."
Open Datasets | Yes | "We conduct experiments on three large smoke datasets: SYN70K (Yuan et al. 2019b), SMOKE5K (Yan, Zhang, and Barnes 2022) and our SmokeSeg. ... SmokeSeg consists of 6,144 real images (the raw smoke images are sourced from FigLib (Dewangan et al. 2022))."
Dataset Splits | No | The paper mentions training on the datasets and testing on their "respective test sets", but it does not specify the train/validation/test splits (percentages, sample counts, or validation-set usage for hyperparameter tuning) needed for precise reproduction.
Hardware Specification | Yes | "We implement our FoSp on MMSegmentation with a single NVIDIA RTX 3090Ti GPU."
Software Dependencies | No | The paper mentions using MMSegmentation and the AdamW optimizer but does not provide version numbers for these software dependencies, which are important for reproducibility.
Experiment Setup | Yes | "Each image is resized to 512×512. Random crop and random flip are adopted during the training. We use the AdamW (Loshchilov and Hutter 2017) optimizer and set the learning rate to 6e-5 with 0.01 weight decay. We train 40k iterations on SMOKE5K and SmokeSeg, and 80k iterations on SYN70K, with all batch sizes set to 6."
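The quoted setup maps naturally onto an MMSegmentation-style Python config. The sketch below is a hedged reconstruction from the numbers reported in the paper, not the authors' released config: field and transform names follow common MMSegmentation conventions, and anything beyond the quoted hyperparameters (e.g. flip probability) is an assumption.

```python
# Hedged sketch of the reported training setup in MMSegmentation-style
# config form. Only the numeric values are taken from the paper; the
# field/transform names are conventional placeholders and may differ
# from the authors' actual config.
crop_size = (512, 512)  # "Each image is resized to 512×512"

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=crop_size),
    dict(type='RandomCrop', crop_size=crop_size),  # random crop augmentation
    dict(type='RandomFlip', prob=0.5),             # random flip (prob assumed)
]

# "AdamW ... learning rate 6e-5 with 0.01 weight decay"
optimizer = dict(type='AdamW', lr=6e-5, weight_decay=0.01)

# 40k iterations for SMOKE5K / SmokeSeg; the paper uses 80k for SYN70K
runner = dict(type='IterBasedRunner', max_iters=40000)

data = dict(samples_per_gpu=6)  # "all batch sizes set to 6"
```

Swapping `max_iters` to 80000 would match the SYN70K schedule; everything else is shared across the three datasets.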