A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation

Authors: Tomoya Sakai, Haoxiang Qiu, Takayuki Katsuki, Daiki Kimura, Takayuki Osogami, Tadanobu Inoue

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through numerical experiments, we demonstrated the effectiveness of our method. It improved novel-class segmentation performance in the 1-shot scenario by 6.1% on the PASCAL-5i dataset, 4.7% on the PASCAL-10i dataset, and 1.0% on the COCO-20i dataset.
Researcher Affiliation | Industry | Tomoya Sakai (IBM Research Tokyo, tomoya.sakai2@ibm.com); Haoxiang Qiu (IBM Research Tokyo, haoxiang.qiu@ibm.com); Takayuki Katsuki (IBM Research Tokyo, kats@jp.ibm.com); Daiki Kimura (IBM Research Tokyo, daiki@jp.ibm.com); Takayuki Osogami (IBM Research Tokyo, osogami@jp.ibm.com); Tadanobu Inoue (IBM, inouet@jp.ibm.com)
Pseudocode | No | The paper illustrates its method with flow diagrams (e.g., Figure 4) but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [No] Justification: However, we will release the code after we get our organization's permission.
Open Datasets | Yes | Datasets. We used three FSS datasets: PASCAL-5i [4, 36, 37], PASCAL-10i [20, 36, 37], and COCO-20i [6, 38].
Dataset Splits | Yes | The regularization parameter was determined by five-fold cross-validation over the ten candidates {10^-5, ..., 10^5} (see the cross-validation sketch after the table).
Hardware Specification | Yes | The computation time was measured on a machine equipped with an NVIDIA V100, 16 CPU cores, and 32 GB of memory.
Software Dependencies | No | The paper mentions using 'logistic regression in Scikit-learn' and 'L-BFGS-B' but does not provide version numbers for these software components (a version-logging sketch follows the table).
Experiment Setup | Yes | It was trained with labeled data for the base classes using the stochastic gradient descent optimizer with an initial learning rate of 2.5 × 10^-4, momentum of 0.9, and weight decay of 10^-4. The batch size was 12, and the number of epochs was 20 for COCO-20i and 100 for PASCAL-5i and PASCAL-10i (an optimizer-configuration sketch follows the table).
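
The cross-validation step quoted under Dataset Splits can be made concrete with a short sketch. The snippet below is not the authors' code: the data, variable names, and the use of scikit-learn's GridSearchCV are assumptions. It only illustrates selecting a regularization strength by five-fold cross-validation over ten log-spaced candidates spanning 10^-5 to 10^5; note that scikit-learn parameterizes this as C, the inverse regularization strength.

```python
# Hypothetical sketch of the five-fold cross-validation described in the
# paper; the features, labels, and estimator choice are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # stand-in for per-pixel/region features
y = rng.integers(0, 2, size=200)  # stand-in for binary class labels

# Ten log-spaced candidates covering 10^-5 ... 10^5.
candidates = np.logspace(-5, 5, num=10)

# C in scikit-learn is the inverse of the regularization strength.
search = GridSearchCV(
    LogisticRegression(solver="lbfgs", max_iter=1000),
    param_grid={"C": candidates},
    cv=5,
)
search.fit(X, y)
print("selected C:", search.best_params_["C"])
```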
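Since the Software Dependencies row flags missing version numbers, a minimal way to pin them down is to log them at run time. This sketch assumes the environment uses scikit-learn's logistic regression with the 'lbfgs' solver, which is backed by SciPy's L-BFGS-B implementation:

```python
# Minimal sketch: record the versions behind 'logistic regression in
# Scikit-learn' and L-BFGS-B (called through SciPy) for reproducibility.
import scipy
import sklearn

print("scikit-learn:", sklearn.__version__)
print("scipy:", scipy.__version__)
```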
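The training configuration in the Experiment Setup row maps directly to an optimizer definition. Below is a minimal sketch, assuming PyTorch (the quoted excerpt does not name the framework) and a placeholder model:

```python
# Hypothetical PyTorch setup mirroring the quoted hyperparameters:
# SGD with initial learning rate 2.5e-4, momentum 0.9, weight decay 1e-4.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 21, kernel_size=1)  # stand-in for the segmentation head

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=2.5e-4,        # initial learning rate
    momentum=0.9,
    weight_decay=1e-4,
)
# Batch size was 12; epochs: 20 for COCO-20i, 100 for PASCAL-5i/10i.
```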