Cross-Dataset Collaborative Learning for Semantic Segmentation in Autonomous Driving

Authors: Li Wang, Dong Li, Han Liu, JinZhang Peng, Lu Tian, Yi Shan (pp. 2487–2494)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive evaluations on diverse semantic segmentation datasets for autonomous driving. Experiments demonstrate that our method consistently achieves notable improvements over prior single-dataset and cross-dataset training methods without introducing extra FLOPs. Particularly, with the same architecture of PSPNet (ResNet-18), our method outperforms the single-dataset baseline by 5.65%, 6.57%, and 5.79% mIoU on the validation sets of Cityscapes, BDD100K, and CamVid, respectively.
Researcher Affiliation | Industry | Xilinx Inc. {liwa, dongl, hanl, jinzhang, lutian, yishan}@xilinx.com
Pseudocode | No | The paper describes the method and provides a conceptual overview in Figure 2, but it does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | "Code and models will be released."
Open Datasets | Yes | We apply our CDCL method on three semantic segmentation datasets for autonomous driving: Cityscapes, BDD100K, and CamVid. Dataset details are provided in the supplementary material. ... (e.g., Cityscapes (Cordts et al. 2016), BDD100K (Yu et al. 2018)), which provide rich labeled data for network training.
Dataset Splits | No | The paper refers to 'validation sets' and reports mIoU on them (e.g., 'mIoU on the validation sets of Cityscapes, BDD100K, CamVid'), but it does not provide specific details on the dataset split sizes, percentages, or the methodology used to create these validation splits.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or cloud configurations) used for running the experiments.
Software Dependencies | No | The paper mentions various components and frameworks such as 'PSPNet', 'ResNet-18', 'ResNet-101', and 'SGD', but it does not specify version numbers for any software dependencies such as Python, PyTorch, TensorFlow, or CUDA.
Experiment Setup | Yes | The networks are trained using stochastic gradient descent (SGD) with momentum of 0.9, weight decay of 0.0001, and batch size of 8. The initial learning rate is set to 0.01 and multiplied by (1 − iter/max_iter)^0.9 following a polynomial decay policy. Unless specified otherwise, we randomly crop the images into 512×512 for training, and use random scaling (0.5–2.1) and random flipping for data augmentation.
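The polynomial learning-rate decay quoted above can be sketched as follows. This is a minimal illustration of the schedule only; `poly_lr` is a hypothetical helper (the paper's code is unreleased), using the paper's stated base LR of 0.01 and power 0.9.

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float = 0.9) -> float:
    """Polynomial decay: lr = base_lr * (1 - cur_iter / max_iter) ** power."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# Paper's settings: initial learning rate 0.01, power 0.9.
lr_start = poly_lr(0.01, 0, 90000)      # equals the base LR at iteration 0
lr_mid = poly_lr(0.01, 45000, 90000)    # decayed partway through training
lr_end = poly_lr(0.01, 90000, 90000)    # reaches 0 at the final iteration
```

Note that `max_iter = 90000` here is an arbitrary placeholder; the paper does not report the total iteration count.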