FGNet: Towards Filling the Intra-class and Inter-class Gaps for Few-shot Segmentation
Authors: Yuxuan Zhang, Wei Yang, Shaowei Wang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that FGNet reduces both the gaps for FSS by SAM and IFSM respectively, and achieves state-of-the-art performance on both PASCAL-5i and COCO-20i datasets compared with previous top-performing approaches. |
| Researcher Affiliation | Academia | Yuxuan Zhang (1), Wei Yang (1,2,3), Shaowei Wang (4). 1: School of Computer Science and Technology, University of Science and Technology of China; 2: Suzhou Institute for Advanced Research, University of Science and Technology of China; 3: Hefei National Laboratory; 4: Institute of Artificial Intelligence and Blockchain, Guangzhou University. yxzhang123@mail.ustc.edu.cn, qubit@ustc.edu.cn, wangsw@gzhu.edu.cn |
| Pseudocode | No | The paper describes methods in text and uses diagrams (e.g., Figure 2 and 3) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: github.com/YXZhang979/FGNet |
| Open Datasets | Yes | To evaluate the performance of FGNet, we conduct experiments on two widely-used FSS datasets, i.e., PASCAL-5i [Shaban et al., 2017] and COCO-20i [Lin et al., 2014]. |
| Dataset Splits | Yes | The categories are partitioned into four equal splits for cross-validation. Specifically, three splits are selected for training, while the rest is for evaluation. |
| Hardware Specification | No | The paper mentions using a ResNet backbone but does not provide specific hardware details (e.g., GPU model, CPU type, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using SGD optimizer, but does not provide specific software names with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA) needed to replicate the experiment. |
| Experiment Setup | Yes | We use SGD optimizer to train FGNet, with 0.9 momentum and 5e-3 initial learning rate. To separate different classes, we set a large batch size of 16. All images are cropped to 473×473 resolution and augmented by random horizontal flipping. |
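The dataset-split and experiment-setup rows above can be sketched in code. This is a minimal illustration, assuming the standard PASCAL-5i convention of partitioning 20 classes into four folds of five (Shaban et al., 2017); the function and configuration names are illustrative, not taken from the FGNet codebase.

```python
def pascal5i_splits(num_classes=20, num_folds=4):
    """Partition class IDs into equal folds for cross-validation.

    For each fold, one group of classes is held out for evaluation
    and the remaining three groups are used for training.
    """
    per_fold = num_classes // num_folds
    folds = []
    for i in range(num_folds):
        test_classes = list(range(i * per_fold, (i + 1) * per_fold))
        train_classes = [c for c in range(num_classes)
                         if c not in test_classes]
        folds.append({"train": train_classes, "test": test_classes})
    return folds


# Training hyperparameters as reported in the paper's setup description.
TRAIN_CONFIG = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "initial_lr": 5e-3,
    "batch_size": 16,
    "crop_size": (473, 473),
    "augmentation": ["random_horizontal_flip"],
}

folds = pascal5i_splits()
print(len(folds), len(folds[0]["test"]))  # → 4 5
```

Each fold holds out 5 of PASCAL's 20 classes for evaluation and trains on the other 15, matching the three-splits-for-training protocol quoted above.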