Class Guided Channel Weighting Network for Fine-Grained Semantic Segmentation
Authors: Xiang Zhang, Wanqing Zhao, Hangzai Luo, Jinye Peng, Jianping Fan | pp. 3344-3352
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results on PASCAL VOC 2012 and six fine-grained image sets show that our proposed CGCWNet has achieved state-of-the-art results." and the section "Experimental Results and Analysis" |
| Researcher Affiliation | Academia | Northwest University, Xi'an, China; {zhangxiang2015@stumail., zhaowq@, hzluo@, pjy@, jfan@}nwu.edu.cn |
| Pseudocode | No | The paper contains architectural diagrams (flowcharts) but no formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability. |
| Open Datasets | Yes | "We extend the fine-grained image classification datasets (i.e., FGVC Aircraft (Maji et al. 2013), CUB-200-2011 (Xiao et al. 2015), Stanford Cars (Krause et al. 2013), and Orchid Plant) to fine-grained segmentation datasets. In our experiments, the proposed CGCWNet has achieved state-of-the-art results on PASCAL VOC 2012 (Hariharan et al. 2011) and six expanded fine-grained image sets." and "The PASCAL VOC 2012 is a semantic segmentation benchmark with 20 foreground object classes and one background class. The dataset is augmented by the extra labellings provided by (Hariharan et al. 2011)" |
| Dataset Splits | Yes | "The dataset is augmented by the extra labellings provided by (Hariharan et al. 2011), which has 10,582, 1,449, and 1,456 images for network training, validation, and testing, respectively." and "Table 1: Statistics of fine-grained datasets used in this paper." |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) were mentioned for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). It mentions frameworks/models used but not their specific versions. |
| Experiment Setup | Yes | For network training, we use a mini-batch stochastic gradient descent (SGD) optimizer with batch size 6, initial learning rate 4e-3, weight decay 0.0002, and momentum 0.9 for the Stanford Cars, CUB-200-2011, FGVC Aircraft, Orchid Plant, and PASCAL VOC 2012 image sets. Following previous works (Chen et al. 2018a; Yu et al. 2018b), we use the poly learning rate policy, where the learning rate is multiplied by the factor (1 - iter/max_iter)^0.9. In the DEGF module, the values of r and ε are first determined by grid search on the validation set, and then we use the same parameters to train the CGCWNet. In our network, C and Ĉ are set to 2048 and 512 respectively. The loss weights λa, λc, and λf in Eq. (5) are set to 0.4, 0.6, and 1.0 respectively. |
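
The poly learning-rate policy quoted in the setup can be sketched as below. This is a minimal sketch, not the authors' code: `base_lr = 4e-3` and `power = 0.9` come from the quoted configuration, while `max_iter` is a placeholder since the paper's total iteration count is not reported in the excerpt.

```python
def poly_lr(base_lr: float, iteration: int, max_iter: int, power: float = 0.9) -> float:
    """Poly policy: the learning rate is base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

# base_lr per the quoted setup; max_iter is an assumed placeholder.
base_lr, max_iter = 4e-3, 90_000

lr_start = poly_lr(base_lr, 0, max_iter)              # equals base_lr
lr_half = poly_lr(base_lr, max_iter // 2, max_iter)   # base_lr * 0.5 ** 0.9
lr_end = poly_lr(base_lr, max_iter, max_iter)         # decays to 0.0
```

In a typical PyTorch-style loop, such a function would be called each iteration to overwrite the optimizer's learning rate before the gradient step.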