CGMGM: A Cross-Gaussian Mixture Generative Model for Few-Shot Semantic Segmentation
Authors: Junao Shen, Kun Kuang, Jiaheng Wang, Xinyu Wang, Tian Feng, Wei Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on PASCAL-5i and COCO-20i datasets demonstrate our CGMGM's effectiveness and superior performance compared to the state-of-the-art methods. |
| Researcher Affiliation | Academia | Junao Shen¹, Kun Kuang², Jiaheng Wang¹, Xinyu Wang¹, Tian Feng¹*, Wei Zhang¹,³; ¹School of Software Technology, Zhejiang University; ²College of Computer Science and Technology, Zhejiang University; ³Innovation Center of Yangtze River Delta, Zhejiang University |
| Pseudocode | No | The paper describes methods and algorithms in paragraph form but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We evaluated the performance of our CGMGM for FSS on two benchmark datasets: PASCAL-5i (Shaban et al. 2017) and COCO-20i (Nguyen and Todorovic 2019). |
| Dataset Splits | Yes | Following previous studies (Shaban et al. 2017; Tian et al. 2020; Yang et al. 2021), we grouped the categories in both datasets into four folds for cross-validation. During training, three folds were used for training and the remaining one for validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments, only mentioning the use of backbones like VGG-16 and ResNet-50. |
| Software Dependencies | No | The paper mentions using the 'SGD optimizer' but does not specify version numbers for any software dependencies or libraries required for replication. |
| Experiment Setup | Yes | For fine-tuning parameters, we used the SGD optimizer with cosine learning rate decay, where the learning rate, momentum, and weight decay were set to 0.05, 0.9, and 0.0001, respectively. Our method was trained for 200 epochs with the batch size of 8 and the image size of 473×473 on PASCAL-5i, and for 50 epochs with the batch size of 8 and the image size of 641×641 on COCO-20i. The number of Gaussian components M was set to 3 on PASCAL-5i, and to 6 on COCO-20i. |
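The four-fold category split quoted above can be sketched as follows. This is a minimal illustration assuming the conventional PASCAL-5i protocol of Shaban et al. 2017 (20 classes partitioned into four folds of 5, with one fold held out as validation categories); the function name and class ordering are illustrative, not from the paper.

```python
# Hedged sketch of the four-fold category split used for cross-validation
# on PASCAL-5i: 20 classes -> 4 folds of 5 contiguous class indices.
# The contiguous ordering follows the common Shaban et al. 2017 convention
# and is an assumption here.

NUM_CLASSES = 20
NUM_FOLDS = 4

def fold_classes(fold: int):
    """Return (val_classes, train_classes) for a given fold index."""
    per_fold = NUM_CLASSES // NUM_FOLDS
    val = list(range(fold * per_fold, (fold + 1) * per_fold))
    train = [c for c in range(NUM_CLASSES) if c not in val]
    return val, train

val, train = fold_classes(0)
print(val)         # fold 0 holds out the first 5 classes for validation
print(len(train))  # the remaining 15 classes are used for training
```

For COCO-20i the same scheme would apply with 80 classes split into four folds of 20.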
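The training schedule reported in the Experiment Setup row (SGD with cosine learning rate decay, base learning rate 0.05, 200 epochs on PASCAL-5i) can be sketched as a standalone schedule function. The paper does not specify warmup or a minimum learning rate, so the schedule below assumes decay to zero with no warmup; the function name is illustrative.

```python
import math

# Hedged sketch of cosine learning-rate decay with the reported settings:
# base lr = 0.05, 200 training epochs on PASCAL-5i. No warmup and a
# minimum lr of 0 are assumptions, as the paper does not state them.

def cosine_lr(epoch: int, total_epochs: int = 200,
              base_lr: float = 0.05, min_lr: float = 0.0) -> float:
    """Cosine-annealed learning rate at the start of a given epoch."""
    progress = epoch / total_epochs
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(cosine_lr(0))    # 0.05  -> base rate at the start of training
print(cosine_lr(100))  # 0.025 -> half the base rate at the midpoint
print(cosine_lr(200))  # 0.0   -> fully decayed at the final epoch
```

In a PyTorch reproduction this would correspond to `torch.optim.SGD(params, lr=0.05, momentum=0.9, weight_decay=1e-4)` paired with `torch.optim.lr_scheduler.CosineAnnealingLR`.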