Risk-Driven Design of Perception Systems

Authors: Anthony Corso, Sydney Katz, Craig Innes, Xin Du, Subramanian Ramamoorthy, Mykel J. Kochenderfer

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
Researcher Affiliation | Academia | Anthony L. Corso, Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, acorso@stanford.edu; Sydney M. Katz, Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, smkatz@stanford.edu; Craig Innes, School of Informatics, University of Edinburgh, Edinburgh, UK, craig.innes@ed.ac.uk; Xin Du, School of Informatics, University of Edinburgh, Edinburgh, UK, x.du@ed.ac.uk; Subramanian Ramamoorthy, School of Informatics, University of Edinburgh, Edinburgh, UK, s.ramamoorthy@ed.ac.uk; Mykel J. Kochenderfer, Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, mykel@stanford.edu
Pseudocode | No | The paper refers to an algorithm from external work ('alg. 5.3 in Bellemare et al. [13]') but does not include its own pseudocode or algorithm blocks.
Open Source Code | Yes | The code for this work can be found at https://github.com/sisl/RiskDrivenPerception.
Open Datasets | No | We train a baseline network using the YOLOv5 algorithm [21], [22] on 10,000 simulated images in which the intruder location is sampled uniformly within the ownship field of view. To test H1, we train a baseline perception system using a mean squared error loss function and a risk-driven perception system using the loss function described in section 3.3 on 10,000 uniformly sampled data points.
Dataset Splits | No | The paper mentions training data sizes and evaluation metrics but does not specify explicit training, validation, and test splits with percentages or counts.
Hardware Specification | Yes | All experiments were run on an NVIDIA RTX 2080 Ti GPU.
Software Dependencies | Yes | We train a baseline network using the YOLOv5 algorithm [21], [22]... [22] G. Jocher et al., ultralytics/yolov5: v6.1 - TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference, version v6.1, Feb. 2022.
Experiment Setup | Yes | For additional details on the controller, network architecture, computational resources, and training, see appendix B. For both the pendulum and DAA problems, we train for 200 epochs using the Adam optimizer with a learning rate of 0.001. (A minimal sketch of this training configuration appears after the table.)
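
The Open Datasets and Experiment Setup rows quote only high-level training details: 10,000 uniformly sampled training points, the Adam optimizer with a learning rate of 0.001, 200 epochs, and an MSE baseline compared against the risk-driven loss of the paper's section 3.3. The sketch below illustrates what such a configuration could look like in PyTorch. The IntruderRegressor architecture, the per-sample risk_weights, and the placeholder tensors are illustrative assumptions rather than the authors' released code (see the repository linked above for that), and the simple weighted MSE merely stands in for the risk-driven loss, which is not reproduced in this table.

# Minimal PyTorch sketch of the quoted training configuration: Adam, lr=0.001,
# 200 epochs, with an MSE baseline loss and a hypothetical risk-weighted variant
# standing in for the paper's risk-driven loss (section 3.3). The network, the
# risk weights, and the data below are placeholders, not the authors' code.
import torch
import torch.nn as nn

class IntruderRegressor(nn.Module):
    """Placeholder perception network regressing intruder position from an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
            nn.Linear(128, 2),  # e.g. (bearing, range) of the intruder
        )

    def forward(self, x):
        return self.net(x)

def train(model, images, targets, risk_weights=None, epochs=200, lr=1e-3):
    """Train with plain MSE (baseline) or a risk-weighted MSE (hypothetical
    stand-in for the risk-driven loss); risk_weights is a per-sample weight."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        preds = model(images)
        errors = (preds - targets) ** 2                       # per-sample squared error
        if risk_weights is not None:
            errors = risk_weights.unsqueeze(-1) * errors      # weight errors by state risk
        loss = errors.mean()
        loss.backward()
        optimizer.step()
    return model

if __name__ == "__main__":
    # The paper uses 10,000 uniformly sampled points; a small random placeholder
    # dataset is used here so the sketch runs quickly on CPU.
    images = torch.rand(256, 3, 64, 64)
    targets = torch.rand(256, 2)
    risk_weights = torch.rand(256)   # hypothetical risk of each encounter state

    baseline = train(IntruderRegressor(), images, targets)                   # MSE baseline
    risk_driven = train(IntruderRegressor(), images, targets, risk_weights)  # risk-weighted

Per-sample weighting is simply the most direct way to bias a regression loss toward high-risk states; the paper's actual formulation in section 3.3 may differ.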