Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks

Authors: Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy (pp. 8433-8440)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our approach achieves a new state-of-the-art on Udacity and Comma.ai, outperforming the previous best by a large margin of 12.8% and 52.1%, respectively. Encouraging results are also shown on Berkeley Deep Drive (BDD) dataset. ... Extensive experiments are conducted on two public datasets, namely, Udacity (Udacity 2018) and Comma.ai (Santana and Hotz 2016)."
Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong; 2 SenseTime Group Limited; 3 Nanyang Technological University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "In this study, we contribute a 50-layer 3D ResNet model for steering angle prediction and make it available to the research community." Code is available at https://cardwing.github.io/projects/FMNet
Open Datasets | Yes | "We perform evaluations on two standard benchmarks widely used in the community, namely Udacity (Udacity 2018) and Comma.ai (Santana and Hotz 2016). They are the largest steering angle prediction datasets by far. Note that the Berkeley Deep Drive (BDD) dataset (Yu et al. 2018) provides vehicle turning directions (i.e., go straight, stop, turn left / right) instead of steering wheel angles."
Dataset Splits | Yes | "The Udacity dataset ... provides a total number of 404,916 video frames for training and 5,614 video frames for testing. ... For fair comparisons, we benchmark our method and variants using a common setting. Specifically, we use 5% of each of the 11 clips for validation and testing, chosen randomly as a continuous chunk. The remaining frames are used for training."
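The quoted split protocol (a randomly placed contiguous 5% chunk of each clip held out, the rest used for training) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name, seed handling, and 5% default are assumptions.

```python
import random

def split_clip(frames, holdout_frac=0.05, seed=0):
    """Hold out one contiguous chunk of a single clip, as in the
    "5% of each clip, chosen randomly as a continuous chunk" protocol.

    `frames` is the ordered list of frame indices for one clip.
    Returns (train_frames, heldout_frames).
    """
    rng = random.Random(seed)
    n_hold = max(1, int(len(frames) * holdout_frac))
    # Pick a random start so the held-out chunk stays contiguous.
    start = rng.randrange(0, len(frames) - n_hold + 1)
    heldout = frames[start:start + n_hold]
    train = frames[:start] + frames[start + n_hold:]
    return train, heldout
```

Applied independently to each of the 11 clips, this yields per-clip validation/testing chunks while the remaining frames form the training set.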
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper describes the software components used (e.g., mentioning deep learning frameworks and models like ResNet), but does not provide specific version numbers for these software dependencies (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "The balancing parameter αl is set as 1.0 for both the speed and torque prediction tasks, while βk is set as 0.2 for all auxiliary networks. In Udacity, three vehicle states, i.e., steering angle, torque and speed, are used as targets, while we only use steering angle and speed in Comma.ai, since it does not provide steering torque. A training batch for our network contains 16 video clips. The learning rate is set as 10^-4 in the first 30 training episodes and reduced to 10^-6 thereafter."
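The reported hyperparameters and the two-stage learning-rate schedule can be collected into a small configuration sketch. This is a minimal reconstruction for reference; the variable names and dictionary layout are assumptions, not taken from the authors' code.

```python
# Hyperparameters quoted from the paper; names are illustrative.
config = {
    "alpha_l": 1.0,    # loss weight for the speed and torque prediction tasks
    "beta_k": 0.2,     # loss weight for each auxiliary (feature-mimicking) network
    "batch_size": 16,  # video clips per training batch
}

def learning_rate(episode):
    """10^-4 for the first 30 training episodes, 10^-6 thereafter."""
    return 1e-4 if episode < 30 else 1e-6
```

Note that Comma.ai omits the torque target, so only the steering-angle and speed heads (and their weights) apply there.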