Fluid Dynamics-Inspired Network for Infrared Small Target Detection
Authors: Tianxiang Chen, Qi Chu, Bin Liu, Nenghai Yu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on IRSTD-1k and SIRST demonstrate that our method achieves SOTA performance in terms of evaluation metrics. |
| Researcher Affiliation | Academia | Tianxiang Chen, Qi Chu, Bin Liu and Nenghai Yu, School of Cyber Science and Technology, University of Science and Technology of China, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We choose IRSTD-1k and SIRST as benchmarks. IRSTD-1k consists of 1,000 real infrared images of 512×512 in size, containing various kinds of small targets and scenes. SIRST contains 427 infrared images. |
| Dataset Splits | No | The paper mentions 'Training on SIRST and IRSTD-1k takes 600 epochs and 400 epochs respectively.' but does not explicitly state how either dataset is split into training, validation, and test sets. |
| Hardware Specification | Yes | A Titan Xp GPU is used for training, with batch size set to 4. |
| Software Dependencies | No | The algorithm is implemented in PyTorch, with Adaptive Gradient (AdaGrad) as optimizer, initial learning rate set to 0.05 and weight decay coefficient to 0.0004. However, specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes (see the sketch after this table) | The algorithm is implemented in PyTorch, with Adaptive Gradient (AdaGrad) as optimizer, initial learning rate set to 0.05 and weight decay coefficient to 0.0004. A Titan Xp GPU is used for training, with batch size set to 4. Training on SIRST and IRSTD-1k takes 600 epochs and 400 epochs respectively. |
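
The reported setup is specific enough to sketch a training loop. Below is a minimal, hedged PyTorch sketch that wires together only the hyperparameters quoted above (AdaGrad, learning rate 0.05, weight decay 0.0004, batch size 4, 400 epochs for IRSTD-1k); the model and data here are placeholder stand-ins, since the paper's architecture and loss function are not quoted in this card, and no code was released.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model: the actual fluid-dynamics-inspired network is NOT
# reproduced here; any small segmentation head would fit this slot.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

# Stand-in data: random 512x512 single-channel images and sparse masks,
# matching the IRSTD-1k image size quoted above (512x512).
images = torch.rand(8, 1, 512, 512)
masks = (torch.rand(8, 1, 512, 512) > 0.99).float()
loader = DataLoader(TensorDataset(images, masks),
                    batch_size=4, shuffle=True)  # batch size 4 (stated)

# Stated optimizer settings: AdaGrad, initial lr 0.05, weight decay 0.0004.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.05, weight_decay=4e-4)
criterion = nn.BCEWithLogitsLoss()  # assumed loss; not quoted in this card

for epoch in range(400):  # 400 epochs reported for IRSTD-1k (600 for SIRST)
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

This sketch runs as-is on CPU; the paper trains on a single Titan Xp GPU, which would amount to moving the model and batches to CUDA. Dataset splits would still have to be chosen by the reproducer, since the paper does not state them.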