Fully Data-Driven Pseudo Label Estimation for Pointly-Supervised Panoptic Segmentation
Authors: Jing Li, Junsong Fan, Yuran Yang, Shuqi Mei, Jun Xiao, Zhaoxiang Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on Pascal VOC and MS COCO demonstrate that our approach is effective and shows state-of-the-art performance compared with related works. |
| Researcher Affiliation | Collaboration | ¹University of Chinese Academy of Sciences (UCAS); ²Institute of Automation, Chinese Academy of Sciences (CASIA); ³Centre for Artificial Intelligence and Robotics, HKISI CAS; ⁴State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS); ⁵Tencent Maps, Tencent. {lijing2018, junsong.fan}@ia.ac.cn, {yuranyang, shawnmei}@tencent.com, xiaojun@ucas.ac.cn, zhaoxiang.zhang@ia.ac.cn |
| Pseudocode | No | The information is insufficient. The paper describes its modules and processes in text and diagrams but does not provide pseudocode or a formally structured algorithm block. |
| Open Source Code | Yes | Codes are available at https://github.com/BraveGroup/FDD. |
| Open Datasets | Yes | All experiments are carried out on PASCAL VOC 2012 (Everingham et al. 2009) and MS COCO 2017 (Lin et al. 2014). [...] Following (Fan, Zhang, and Tan 2022), we augment the VOC train set with SBD train set (Hariharan et al. 2011), getting a training set of 10,582 images, and refer to this set as train aug set. |
| Dataset Splits | Yes | COCO includes 80 thing classes and 53 stuff classes; it comprises 118,000 training images and 5,000 validation images. |
| Hardware Specification | No | The information is insufficient. The paper mentions using a 'resnet50' backbone but does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing specifications used for running experiments. |
| Software Dependencies | No | The information is insufficient. The paper mentions using the 'AdamW optimizer' but does not specify version numbers for any software dependencies such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | Our model adopts the same training recipe as (Fan, Zhang, and Tan 2022), namely the AdamW optimizer with weight decay 1e-4 and learning rate 1.4e-4. Besides, we apply a linear warmup schedule to the losses utilizing M^pse as supervision to reduce the influence of noisy M^pse at early training epochs. The color-prior loss adopts the same setting as in (Fan, Zhang, and Tan 2022). λ_sem is set to 1 and 0.1 for the P1 and P10 settings, respectively. (A minimal sketch of this recipe appears below the table.) |
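For concreteness, the following is a minimal PyTorch sketch of the training recipe quoted in the Experiment Setup row: AdamW with learning rate 1.4e-4 and weight decay 1e-4, a linear warm-up on the losses supervised by the pseudo mask M^pse, and λ_sem set per the P1/P10 setting. The function and variable names (`make_optimizer`, `pseudo_loss_weight`, `training_loss`, `warmup_epochs`) are hypothetical placeholders, not names from the paper, and the warm-up length is an assumed value since the paper does not report one.

```python
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.AdamW:
    # AdamW with the learning rate and weight decay reported in the paper.
    return torch.optim.AdamW(model.parameters(), lr=1.4e-4, weight_decay=1e-4)

def pseudo_loss_weight(epoch: int, warmup_epochs: int = 5) -> float:
    # Linear warm-up: ramp the weight on the M^pse-supervised losses from
    # 0 to 1 over the first `warmup_epochs` epochs to damp early pseudo-label
    # noise. (warmup_epochs=5 is an assumption; the paper does not state it.)
    return min(1.0, epoch / warmup_epochs)

def training_loss(point_loss: torch.Tensor,
                  pseudo_loss: torch.Tensor,
                  epoch: int,
                  lambda_sem: float = 1.0) -> torch.Tensor:
    # lambda_sem = 1 for the P1 setting and 0.1 for P10, per the paper.
    # Other terms (e.g., the color-prior loss) are omitted from this sketch.
    w = pseudo_loss_weight(epoch)
    return point_loss + w * lambda_sem * pseudo_loss
```

The warm-up simply scales the pseudo-label term by a factor growing linearly from 0 to 1, so early epochs rely mostly on the point annotations while M^pse is still noisy; this reflects the stated motivation, not a published implementation detail.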