MCA: Moment Channel Attention Networks
Authors: Yangbo Jiang, Zhiwei Jiang, Le Han, Zenan Huang, Nenggan Zheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on classical image classification, object detection, and instance segmentation tasks demonstrate that our proposed method achieves state-of-the-art results, outperforming existing channel attention methods. |
| Researcher Affiliation | Collaboration | Yangbo Jiang1,2, Zhiwei Jiang3, Le Han1,2, Zenan Huang1,2, Nenggan Zheng1,2,4,5* 1Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China 2College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China 3Guangzhou Electronic Technology Co., Ltd., Chinese Academy of Sciences, Guangzhou, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code for this project can be accessed on GitHub at https://github.com/CSDLLab/MCA. |
| Open Datasets | Yes | We evaluate the effectiveness of our approach in object detection and instance segmentation tasks using the COCO (Lin et al. 2014) dataset. Next, we further evaluate the MCA method for image classification tasks on the ImageNet dataset (Russakovsky et al. 2015). |
| Dataset Splits | Yes | Experimental results on the COCO val2017 set (Tables 1, 2, 3, and 5). |
| Hardware Specification | No | The paper mentions '8 GPUs' but does not specify the model or type of GPUs, CPUs, or other specific hardware components used for experiments. |
| Software Dependencies | Yes | Our research is conducted using PyTorch 1.8.2 and MindSpore 1.7.0 (Huawei Technologies Co. 2022). |
| Experiment Setup | Yes | In terms of optimization, stochastic gradient descent (SGD) was chosen as the optimizer, with a weight decay of 1e-4 and a momentum of 0.9. All models are trained for a total of 12 epochs. The learning rate is set to 0.02 initially and drops to 0.002 and 0.0002 at the 8th and 11th epochs, respectively. |
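The quoted experiment setup maps directly onto standard PyTorch primitives. The sketch below is a minimal, hedged illustration of that configuration, not the authors' training script: the ResNet-50 backbone and the `train_one_epoch` helper are placeholders, while the hyperparameters (SGD, weight decay 1e-4, momentum 0.9, 12 epochs, learning rate 0.02 dropping tenfold at epochs 8 and 11) come from the row above.

```python
import torch
import torchvision

# Placeholder backbone for illustration; the paper attaches MCA modules to
# backbones such as ResNet, but any nn.Module suffices to show the schedule.
model = torchvision.models.resnet50()

# SGD with the reported hyperparameters: initial LR 0.02,
# momentum 0.9, weight decay 1e-4.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.02, momentum=0.9, weight_decay=1e-4
)

# The reported schedule (0.02 -> 0.002 at epoch 8, -> 0.0002 at epoch 11)
# is a 10x decay at fixed milestones, i.e. MultiStepLR with gamma=0.1.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[8, 11], gamma=0.1
)

for epoch in range(12):
    # train_one_epoch(model, optimizer, data_loader)  # hypothetical helper
    scheduler.step()
```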