Joint Learning of 2D-3D Weakly Supervised Semantic Segmentation
Authors: Hyeokjun Kweon, Kuk-Jin Yoon
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With extensive quantitative and qualitative experiments, we verify that the proposed joint WSSS framework effectively transfers the benefit of each domain to the other domain, and the resulting semantic segmentation performance is remarkably improved in both 2D and 3D domains. On the ScanNet v2 benchmark, our framework significantly outperforms the prior WSSS approaches, suggesting a new research direction for WSSS. |
| Researcher Affiliation | Academia | Hyeokjun Kweon KAIST 0327june@kaist.ac.kr Kuk-Jin Yoon KAIST kjyoon@kaist.ac.kr |
| Pseudocode | No | The paper describes its algorithms through mathematical equations and textual explanations, but it does not contain a dedicated 'Pseudocode' or 'Algorithm' block or figure. |
| Open Source Code | No | But all the code and instructions will be publicly available soon. |
| Open Datasets | Yes | Following the existing 3D WSSS studies [29, 22], we conduct experiments on the ScanNet v2 [5] dataset (MIT license, we agreed to the terms of use) |
| Dataset Splits | Yes | We follow the official split, where there exist 1201 train scans and 312 val scans. |
| Hardware Specification | Yes | The model is trained on two Tesla V100 GPUs with batch size 16 for 200 epochs. |
| Software Dependencies | No | The proposed framework is implemented with PyTorch. ResNet38 [30] and PointNet++ [21] are employed as backbones for the image classifier and point cloud classifier, respectively. While software is mentioned, specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes | The model is trained on two Tesla V100 GPUs with batch size 16 for 200 epochs. The initial learning rate is set to 0.003 and is decayed by 0.1 at epochs 120, 160, and 180 as in [22]. We set λ = 1 in Eq. 12. (A hedged configuration sketch follows the table.) |
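
The snippet below is a minimal sketch of the reported training configuration, written in PyTorch since the paper states the framework is implemented with PyTorch. Only the learning rate, the 0.1 decay at epochs 120/160/180, the batch size of 16, the 200 epochs, and λ = 1 come from the paper; the optimizer choice (SGD with momentum), the placeholder model and data, and the exact form of Eq. 12 (written here as `loss_2d + λ · loss_3d`) are assumptions for illustration.

```python
# Minimal sketch of the reported schedule (assumptions noted inline).
import torch
from torch import nn

model = nn.Linear(8, 2)  # placeholder for the 2D (ResNet38) / 3D (PointNet++) branches
# Optimizer type is an assumption; the paper only reports the learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)
# Reported schedule: initial lr 0.003, decayed by 0.1 at epochs 120, 160, 180.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[120, 160, 180], gamma=0.1)

lam = 1.0  # lambda = 1 in Eq. 12
# Dummy loader standing in for the ScanNet v2 train split, batch size 16.
dummy_loader = [(torch.randn(16, 8), torch.randint(0, 2, (16,)))]

for epoch in range(200):  # 200 epochs reported
    for x, y in dummy_loader:
        logits = model(x)
        loss_2d = nn.functional.cross_entropy(logits, y)  # stand-in 2D loss term
        loss_3d = nn.functional.cross_entropy(logits, y)  # stand-in 3D loss term
        loss = loss_2d + lam * loss_3d  # assumed combination of Eq. 12
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```

`MultiStepLR` applies the multiplicative factor at the listed milestone epochs, which matches the reported 0.1 decay at epochs 120, 160, and 180.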