Reinforced active learning for image segmentation

Authors: Arantxa Casanova, Pedro O. Pinheiro, Negar Rostamzadeh, Christopher J. Pal

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the proof of concept in CamVid and provide results in the large-scale dataset Cityscapes. On Cityscapes, our deep RL region-based DQN approach requires roughly 30% less additional labeled data than our most competitive baseline to reach the same performance.
Researcher Affiliation | Collaboration | Arantxa Casanova: École Polytechnique de Montréal, Mila (Quebec Artificial Intelligence Institute), Element AI; Pedro O. Pinheiro: Element AI; Negar Rostamzadeh: Element AI; Christopher J. Pal: École Polytechnique de Montréal, Mila (Quebec Artificial Intelligence Institute), Element AI
Pseudocode | No | The paper describes the training steps in Section 3.1 as a numbered list (1-6) and refers to Figure 2 for illustration, but it does not contain a formal 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not contain any explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | We test the proof of concept in CamVid and provide results in the large-scale dataset Cityscapes. On Cityscapes, our deep RL region-based DQN approach requires roughly 30% less additional labeled data than our most competitive baseline to reach the same performance. Moreover, we find that our method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.
Dataset Splits | Yes | CamVid (Brostow et al., 2008). This dataset consists of street scene view images... We split the train set with uniform sampling into 110 labeled images (from which we take 10 images to represent the state set D_S and keep the rest for D_T), and 260 images to build D_V. We use the dataset's validation set for D_R. We report the final segmentation results on the test set. Cityscapes (Cordts et al., 2016). The train set with fine-grained segmentation labels has 2975 images and the validation set has 500 images. We uniformly sampled 360 labeled images from the train set. Out of these, 10 images represent D_S, 150 build D_T and 200, D_R. The remaining 2615 images of the train set are used for D_V, as if they were unlabeled. We report the results on the validation set (test set not available). (A split-construction sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU models, or specific cloud resources used for running the experiments.
Software Dependencies | No | The paper mentions 'Python' and 'PyTorch' in the Figure B.1 caption, but does not specify their version numbers or list any other software dependencies with the explicit versions required for reproducibility.
Experiment Setup | Yes | As data augmentation, we use random horizontal flips and random crops of 224×224. ... We used a training batch size of 32 for CamVid and 16 for Cityscapes. ... We use the same learning rate for both the segmentation and query networks: 10^-4 and 10^-3 for Cityscapes and CamVid, respectively. Weight decay is set to 10^-4 for the segmentation network and 10^-3 for the query network. (A configuration sketch follows the table.)
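
The split sizes quoted in the Dataset Splits row can be written out as a small helper. The sketch below is a minimal illustration, assuming uniform random sampling over image identifiers with a fixed seed; the actual file lists, seed, and sampling code are not given in the paper, so the `make_pools` helper and its structure are hypothetical.

```python
import random

def make_pools(train_ids, val_ids, dataset, seed=0):
    """Partition image ids into the pools named in the paper:
    D_S (state set), D_T (labeled training pool), D_R (reward set),
    D_V (pool treated as unlabeled for active learning)."""
    rng = random.Random(seed)
    ids = list(train_ids)
    rng.shuffle(ids)  # stands in for "uniform sampling"

    if dataset == "camvid":
        labeled = ids[:110]                    # 110 labeled train images
        d_s, d_t = labeled[:10], labeled[10:]  # 10 state images, 100 for D_T
        d_v = ids[110:]                        # remaining images (260 per the paper) form D_V
        d_r = list(val_ids)                    # the CamVid validation set is D_R
    elif dataset == "cityscapes":
        labeled = ids[:360]                    # 360 of the 2975 fine-annotated train images
        d_s, d_t, d_r = labeled[:10], labeled[10:160], labeled[160:360]
        d_v = ids[360:]                        # remaining 2615 images form D_V
    else:
        raise ValueError(f"unknown dataset: {dataset}")
    return {"D_S": d_s, "D_T": d_t, "D_R": d_r, "D_V": d_v}
```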
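The augmentation and hyperparameters quoted in the Experiment Setup row can likewise be collected into a small configuration. This is a hedged sketch rather than the authors' code: the excerpt does not name the optimizer, so the use of Adam (and the `build_optimizers` helper) is an assumption, and a real segmentation pipeline would apply the crop and flip jointly to the image and its label mask.

```python
import torch
from torchvision import transforms

# Augmentation quoted above: random horizontal flips and 224x224 random crops.
# (Applied to the image only for brevity; masks need the same geometric transforms.)
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])

# Quoted hyperparameters, keyed by dataset.
HPARAMS = {
    "camvid":     {"batch_size": 32, "lr": 1e-3},
    "cityscapes": {"batch_size": 16, "lr": 1e-4},
}
WEIGHT_DECAY = {"segmentation": 1e-4, "query": 1e-3}

def build_optimizers(seg_net, query_net, dataset):
    """Both networks use the same learning rate; Adam is an assumption,
    since the optimizer is not specified in the excerpt."""
    lr = HPARAMS[dataset]["lr"]
    seg_opt = torch.optim.Adam(seg_net.parameters(), lr=lr,
                               weight_decay=WEIGHT_DECAY["segmentation"])
    query_opt = torch.optim.Adam(query_net.parameters(), lr=lr,
                                 weight_decay=WEIGHT_DECAY["query"])
    return seg_opt, query_opt
```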