Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised Semantic Segmentation with Its Class Label
Authors: Xinliang Zhang, Lei Zhu, Hangzhou He, Lujia Jin, Yanye Lu
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the ScribbleSup dataset with different qualities of scribble annotations outperform all the previous methods, demonstrating the superiority and robustness of our method. Our experiments were carried out on the widely used ScribbleSup dataset (Lin et al. 2016) |
| Researcher Affiliation | Academia | Xinliang Zhang (1,3)*, Lei Zhu (1-4)*, Hangzhou He (1-3), Lujia Jin (1-4), Yanye Lu (1,3,4); 1: Institute of Medical University, Peking University, Beijing, China; 2: Department of Biomedical Engineering, Peking University, Beijing, China; 3: National Biomedical Imaging Center, Peking University, Beijing, China; 4: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Beijing, China; emails: zhangxinliang@tju.edu.cn, {zhulei, zhuang}@stu.pku.edu.cn, {jinlujia, yanye.lu}@pku.edu.cn |
| Pseudocode | No | The paper describes the methods in prose and with mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Zxl19990529/Class-driven-Scribble-Promotion-Network. |
| Open Datasets | Yes | Our experiments were carried out on the widely used ScribbleSup dataset (Lin et al. 2016), which combines PASCAL VOC2012 and SBD (Hariharan et al. 2011) datasets with scribble annotations. |
| Dataset Splits | Yes | The dataset includes 10,582 training images and 1,449 validation images. |
| Hardware Specification | Yes | All experiments were reported with the mIoU metric (%) and conducted on one NVIDIA RTX 4090 24G GPU with an Intel Xeon Gold 6330 CPU. |
| Software Dependencies | No | The paper mentions using 'deeplabV2 (Chen et al. 2017) and deeplabV3+ (Chen et al. 2018)' but does not provide specific version numbers for these or other software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We conducted a total of 50 epochs with a base learning rate of 1e-3 and batch size set to 16 for training. To ensure stable training, we adopted a learning rate warmup strategy, linearly increasing the learning rate to 1e-3 over the first 10 epochs, followed by a cosine decay to zero over the next 40 epochs. Validation results were reported using the last checkpoint. The stochastic gradient descent (SGD) optimizer was utilized with a momentum of 0.9 and weight decay of 5e-4. Data augmentation followed the same strategy used in URSS. (A minimal sketch of this schedule follows the table.) |
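
The reported setup (50 epochs, base LR 1e-3, 10-epoch linear warmup, 40-epoch cosine decay, SGD with momentum 0.9 and weight decay 5e-4) can be reproduced roughly as below. This is a minimal sketch assuming PyTorch, which the paper does not confirm as its framework; `model` and the training-loop body are placeholders, not the authors' code.

```python
import math
import torch

base_lr, warmup_epochs, total_epochs = 1e-3, 10, 50

# Placeholder network; the paper uses DeepLabV2 / DeepLabV3+ backbones.
model = torch.nn.Conv2d(3, 21, kernel_size=1)

# SGD settings quoted from the paper: momentum 0.9, weight decay 5e-4.
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)

def lr_factor(epoch: int) -> float:
    """Linear warmup to the base LR over the first 10 epochs,
    then cosine decay toward zero over the remaining 40."""
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(total_epochs):
    # ... one training epoch with batch size 16 would run here ...
    optimizer.step()   # placeholder step so the scheduler update is well-defined
    scheduler.step()   # advance the warmup/cosine schedule once per epoch
```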