Cross-Domain Few-Shot Semantic Segmentation via Doubly Matching Transformation
Authors: Jiayi Chen, Rong Quan, Jie Qin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four popular datasets show that DMTNet achieves superior performance over state-of-the-art approaches. |
| Researcher Affiliation | Academia | Nanjing University of Aeronautics and Astronautics; State Key Laboratory of Integrated Services Networks, Xidian University |
| Pseudocode | No | The paper describes its methods and components in detail but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ChenJiayi68/DMTNet. |
| Open Datasets | Yes | To fairly compare the cross-domain segmentation performance of DMTNet with PATNet [Lei et al., 2022], we choose PASCAL VOC 2012 with SBD augmentation as the source domain, and ISIC2018 [Codella et al., 2019], Chest X-ray [Candemir et al., 2014], Deepglobe [Demir et al., 2018], and FSS1000 [Wei et al., 2019] as the target domains. |
| Dataset Splits | No | The paper details a meta-training and meta-testing setup with support and query sets, but does not explicitly define a separate validation split for hyperparameter tuning. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU models, CPU models, memory) for running the experiments. |
| Software Dependencies | No | The paper mentions the use of Adam optimizer but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | In the meta-training stage, we use the Adam optimizer to train DMTNet for 19 epochs with a learning rate of 1e-3. In the self-finetuning of the meta-testing stage, we use the Adam optimizer with a learning rate of 1e-6 for ISIC2018, Deepglobe, and FSS-1000, and 1e-1 for Chest X-ray. All input images are resized to 400×400 resolution. |
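The reported hyperparameters can be collected into a small configuration helper; a minimal sketch below, where the constant and function names are illustrative and not taken from the released code:

```python
# Hyperparameters as reported in the paper (see Experiment Setup row above).
# All helper names here are illustrative, not from the DMTNet repository.
META_TRAIN = {
    "optimizer": "Adam",
    "lr": 1e-3,          # meta-training learning rate
    "epochs": 19,
    "input_size": (400, 400),  # all inputs resized to 400x400
}

# Per-target-domain learning rates for self-finetuning during meta-testing.
FINETUNE_LR = {
    "ISIC2018": 1e-6,
    "Deepglobe": 1e-6,
    "FSS-1000": 1e-6,
    "Chest X-ray": 1e-1,
}

def finetune_lr(dataset: str) -> float:
    """Return the meta-testing self-finetuning learning rate for a target dataset."""
    try:
        return FINETUNE_LR[dataset]
    except KeyError:
        raise ValueError(f"Unknown target dataset: {dataset!r}")
```

Keeping the per-dataset learning rates in one table makes the large gap between Chest X-ray (1e-1) and the other targets (1e-6) explicit at a glance.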