Artificial Dummies for Urban Dataset Augmentation

Authors: Antonín Vobecký, David Hurych, Michal Uřičář, Patrick Pérez, Josef Sivic

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present a series of experiments on controlled dataset augmentation with the aim to improve the accuracy of a person classifier/detector in the context of autonomous driving. ...We consider four experimental set-ups. The first experiment (Sec. 4.1) focuses on augmenting the daytime Cityscapes dataset. ... In the second experiment (Sec. 4.2), we use DummyNet to generate night-time person images and show significant improvements in classifier performance on the NightOwls dataset (Neumann et al. 2018). In the next experiment (Sec. 4.3) we use DummyNet to improve performance of the state-of-the-art person detection network CSP (Liu et al. 2019b) in the standard (full) data regime on the Cityscapes and Caltech datasets. Finally, we demonstrate the benefits of our approach in set-ups (Sec. 4.4)...
Researcher Affiliation | Collaboration | 1 Czech Institute of Informatics, Robotics and Cybernetics at the Czech Technical University in Prague; 2 valeo.ai
Pseudocode | No | The paper describes the architecture and loss functions in text and diagrams but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/vobecant/DummyNet
Open Datasets | Yes | Cityscapes dataset, NightOwls dataset (Neumann et al. 2018), Caltech (Dollar et al. 2011), MS COCO dataset (Lin et al. 2014), YouTube-BB (Real et al. 2017), CityPersons dataset (Zhang, Benenson, and Schiele 2017), SURREAL (Varol et al. 2017), Human3.6M dataset (Ionescu et al. 2014; Catalin Ionescu 2011)
Dataset Splits | Yes | For both experiments, the classifier is trained for 1,000 epochs and the classifier with the best validation error is kept. ... We report log-average miss rate for multiple setups of the detector with the best validation performance.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software tools like OpenPose and model architectures like Faster R-CNN, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or specific library versions).
Experiment Setup | Yes | The classifier consists of 4 convolutional layers with a 3×3 kernel, stride 2, ReLU activations, max-pooling, and one fully connected layer with sigmoid activation. For both experiments, the classifier is trained for 1,000 epochs and the classifier with the best validation error is kept. ... Following (Liu et al. 2019b), we train the detection network for 150 epochs...
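
To make the quoted classifier setup concrete, below is a minimal PyTorch sketch of a 4-layer convolutional binary person classifier matching the description (3×3 kernels, stride 2, ReLU activations, max-pooling, and one fully connected layer with sigmoid). The channel widths, input resolution, padding, and the per-layer placement of max-pooling are assumptions not stated in the quoted text, and `PersonClassifier` is a hypothetical name, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PersonClassifier(nn.Module):
    """Binary person/background classifier: 4 conv blocks + 1 FC layer with sigmoid.

    Assumptions (not specified in the paper): channel widths, 128x128 input crops,
    padding=1, and one max-pooling after every convolution.
    """

    def __init__(self, in_channels=3, widths=(32, 64, 128, 256), input_size=128):
        super().__init__()
        blocks = []
        c_in = in_channels
        for c_out in widths:
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, ceil_mode=True),
            ]
            c_in = c_out
        self.features = nn.Sequential(*blocks)
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            dummy = torch.zeros(1, in_channels, input_size, input_size)
            flat = self.features(dummy).flatten(1).shape[1]
        self.head = nn.Sequential(nn.Linear(flat, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


# Usage: probability that each 128x128 crop contains a person.
model = PersonClassifier()
probs = model(torch.randn(8, 3, 128, 128))  # shape (8, 1), values in (0, 1)
```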
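
The "Dataset Splits" row likewise quotes training the classifier for 1,000 epochs and keeping the model with the best validation error. The sketch below illustrates such a selection loop under stated assumptions; the Adam optimizer, learning rate, BCE loss, and 0.5 decision threshold are choices made here for illustration, and `fit` is a hypothetical helper rather than code from the repository.

```python
import copy

import torch
import torch.nn as nn


def fit(model, train_loader, val_loader, epochs=1000, lr=1e-4, device="cpu"):
    """Train a binary classifier and keep the weights with the lowest validation error."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # assumed optimizer/lr
    criterion = nn.BCELoss()                                  # assumed loss
    best_state, best_val_error = None, float("inf")
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.float().to(device)
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(1), labels)
            loss.backward()
            optimizer.step()
        # Validation error = fraction of misclassified crops at a 0.5 threshold.
        model.eval()
        errors, total = 0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = (model(images.to(device)).squeeze(1) > 0.5).long().cpu()
                errors += (preds != labels.long()).sum().item()
                total += labels.numel()
        val_error = errors / total
        if val_error < best_val_error:
            best_val_error, best_state = val_error, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)  # restore the best-validation checkpoint
    return model, best_val_error
```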