TCI-Former: Thermal Conduction-Inspired Transformer for Infrared Small Target Detection
Authors: Tianxiang Chen, Zhentao Tan, Qi Chu, Yue Wu, Bin Liu, Nenghai Yu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on IRSTD-1k and NUAA-SIRST demonstrate the superiority of our method. |
| Researcher Affiliation | Collaboration | 1School of Cyber Science and Technology, University of Science and Technology of China 2Alibaba Group 3Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences |
| Pseudocode | No | The paper describes the proposed modules and their components but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code. |
| Open Datasets | Yes | We choose NUAA-SIRST (Dai et al. 2021a) and IRSTD-1k (Zhang et al. 2022d) as our experimental datasets. |
| Dataset Splits | No | The paper states, "For each dataset, we use 80% of images as training set and 20% as test set." It does not explicitly mention a separate validation set. |
| Hardware Specification | Yes | A Titan XP GPU is used for training, with batch size set to 4. |
| Software Dependencies | No | The paper states, "The algorithm is implemented in Pytorch, with Adaptive Gradient (AdaGrad) as the optimizer...". While PyTorch is mentioned, no version number is given for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The algorithm is implemented in PyTorch, with Adaptive Gradient (AdaGrad) as the optimizer, the initial learning rate set to 0.05, and the weight decay coefficient set to 0.0004. A Titan XP GPU is used for training, with batch size set to 4. Training on NUAA-SIRST and IRSTD-1k takes 800 and 600 epochs respectively. |
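The optimizer and schedule settings reported above can be sketched in PyTorch. This is a minimal illustration only: the placeholder model stands in for the TCI-Former architecture (which is not open-sourced), and only the hyperparameter values are taken from the paper.

```python
import torch
from torch import nn

# Placeholder model: a stand-in for TCI-Former, which is not publicly released.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)

# AdaGrad optimizer with the learning rate and weight decay reported in the paper.
optimizer = torch.optim.Adagrad(
    model.parameters(),
    lr=0.05,             # initial learning rate (from the paper)
    weight_decay=0.0004, # weight decay coefficient (from the paper)
)

BATCH_SIZE = 4  # as reported for a single Titan XP GPU
EPOCHS = {"NUAA-SIRST": 800, "IRSTD-1k": 600}  # per-dataset epoch counts
```

Because no validation split is reported, any reproduction would presumably train for the fixed epoch counts above and evaluate directly on the 20% test split.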