SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation
Authors: Mengcheng Lan, Xinjiang Wang, Yiping Ke, Jiaxing Xu, Litong Feng, Wayne Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our SmooSeg significantly outperforms STEGO in terms of pixel accuracy on three datasets: COCOStuff (+14.9%), Cityscapes (+13.0%), and Potsdam-3 (+5.7%). |
| Researcher Affiliation | Collaboration | 1 S-Lab, Nanyang Technological University 2 SCSE, Nanyang Technological University 3 SenseTime Research |
| Pseudocode | Yes | Algorithm 1 SmooSeg: PyTorch-like Pseudocode |
| Open Source Code | Yes | https://github.com/mc-lan/SmooSeg |
| Open Datasets | Yes | We test on three datasets. COCOStuff [35] is a scene-centric dataset... Cityscapes [36] is a collection of street scene images... Potsdam-3 [3] is a remote sensing dataset... |
| Dataset Splits | No | The paper states training and testing image counts for Potsdam-3, but does not explicitly provide details about a separate validation set split for any of the datasets. |
| Hardware Specification | Yes | Our experiments were conducted using PyTorch [37] on an RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch [37]' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The exponential moving average (EMA) hyper-parameter is set to α = 0.998. The dimension of the embedding space is D = 64. The temperature is set to τ = 0.1. We use the Adam optimizer [38] with a learning rate of 1×10⁻⁴ and 5×10⁻⁴ for the projector and predictor, respectively. We set a batch size of 32 for all datasets. We train our model with a total of 3000 iterations for the Cityscapes and Potsdam-3 datasets, and 8000 iterations for the COCOStuff dataset. (See the configuration sketch below this table.) |
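To make the Experiment Setup row concrete, the sketch below wires the reported hyper-parameters (D = 64, τ = 0.1, α = 0.998, Adam with learning rates 1×10⁻⁴ and 5×10⁻⁴, batch size 32, 3000 or 8000 iterations) into a minimal PyTorch skeleton. The `projector` and `predictor` modules, the 384-dimensional backbone feature size, and the EMA helper are hypothetical stand-ins; the paper's actual architectures, loss, and Algorithm 1 are defined in the released code, not reproduced here.

```python
# Hedged sketch: only the hyper-parameter values are quoted from the paper's
# Experiment Setup row; the modules below are stand-ins, not the authors' code.
import torch

EMBED_DIM   = 64       # embedding dimension D
TEMPERATURE = 0.1      # temperature tau
EMA_ALPHA   = 0.998    # EMA coefficient alpha
BATCH_SIZE  = 32       # batch size used for all datasets
ITERATIONS  = {"cityscapes": 3000, "potsdam3": 3000, "cocostuff": 8000}

# Stand-in modules; 384 is an assumed backbone feature size, not from the paper.
projector = torch.nn.Linear(384, EMBED_DIM)
predictor = torch.nn.Linear(EMBED_DIM, EMBED_DIM)
ema_projector = torch.nn.Linear(384, EMBED_DIM)  # EMA (teacher) copy of the projector

# Adam optimizer with the two learning rates reported in the paper.
optimizer = torch.optim.Adam([
    {"params": projector.parameters(), "lr": 1e-4},
    {"params": predictor.parameters(), "lr": 5e-4},
])

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               alpha: float = EMA_ALPHA) -> None:
    """Exponential moving average update of the teacher from the student."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```

This skeleton only shows how the quoted values fit together (two parameter groups with separate learning rates, plus an EMA update per iteration); the smoothness-prior loss itself is not sketched because the review does not describe it.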