Label-Efficient Semantic Segmentation with Diffusion Models
Authors: Dmitry Baranchuk, Andrey Voynov, Ivan Rubachev, Valentin Khrulkov, Artem Babenko
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our approach significantly outperforms the existing alternatives on several datasets for the same amount of human supervision. The comparison of the methods in terms of the mean IoU measure is presented in Table 2. |
| Researcher Affiliation | Industry | Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, Artem Babenko; Yandex Research |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of the project is publicly available. |
| Open Datasets | Yes | In our evaluation, we mainly work with the bedroom, cat and horse categories from LSUN (Yu et al., 2015) and FFHQ-256 (Karras et al., 2019)... ADE-Bedroom-30 is a subset of the ADE20K dataset (Zhou et al., 2018)... CelebA-19 is a subset of the CelebAMask-HQ dataset (Lee et al., 2020). |
| Dataset Splits | Yes | For each dataset, a professional assessor was hired to annotate train and test samples. For each dataset, we increase the number of synthetic images until the performance on the validation set saturates. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU models, or cloud computing instances with detailed specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using specific models and algorithms (e.g., 'Adam optimizer', 'DeepLabV3'), but it does not provide specific version numbers for software libraries or dependencies (e.g., 'PyTorch 1.9', 'Python 3.8') that would be needed to reproduce the experiment. |
| Experiment Setup | Yes | The ensemble of MLPs consists of 10 independent models. Each MLP is trained for 4 epochs using the Adam optimizer (Kingma & Ba, 2015) with 0.001 learning rate. The batch size is 64. This setting is used for all methods and datasets. Specifically, we use MLPs with two hidden layers with ReLU nonlinearity and batch normalization. The sizes of hidden layers are 128 and 32 for datasets with a number of classes less than 30, and 256 and 128 for others. |
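
The quoted setup is concrete enough to sketch in code. Below is a minimal PyTorch sketch of one such per-pixel MLP classifier and the ensemble training loop, assuming the hyperparameters quoted above (10 models, 4 epochs, Adam with lr 0.001, batch size 64, hidden sizes 128/32 or 256/128 depending on the class count). The feature dimensionality `feature_dim`, the exact ordering of ReLU and batch normalization, and the training-loop structure are assumptions not specified in the quote, not the authors' released implementation.

```python
import torch
import torch.nn as nn


def make_pixel_classifier(feature_dim: int, num_classes: int) -> nn.Module:
    # Hidden sizes per the quoted setup: 128/32 for <30 classes, 256/128 otherwise.
    h1, h2 = (128, 32) if num_classes < 30 else (256, 128)
    return nn.Sequential(
        nn.Linear(feature_dim, h1),
        nn.ReLU(),
        nn.BatchNorm1d(h1),
        nn.Linear(h1, h2),
        nn.ReLU(),
        nn.BatchNorm1d(h2),
        nn.Linear(h2, num_classes),
    )


def train_ensemble(features: torch.Tensor, labels: torch.Tensor, num_classes: int,
                   n_models: int = 10, epochs: int = 4,
                   batch_size: int = 64, lr: float = 1e-3) -> list[nn.Module]:
    # features: (n_pixels, feature_dim) per-pixel diffusion features; labels: (n_pixels,)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(features, labels),
        batch_size=batch_size, shuffle=True, drop_last=True,
    )
    criterion = nn.CrossEntropyLoss()
    ensemble = []
    for _ in range(n_models):  # 10 independently initialized MLPs
        model = make_pixel_classifier(features.shape[1], num_classes)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                optimizer.zero_grad()
                criterion(model(x), y).backward()
                optimizer.step()
        ensemble.append(model)
    return ensemble
```

At test time, the per-pixel predictions of the 10 MLPs would be aggregated (e.g., by averaging softmax outputs or majority vote); the aggregation rule is not stated in the quoted excerpt.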