Chanakya: Learning Runtime Decisions for Adaptive Real-Time Perception

Authors: Anurag Ghosh, Vaibhav Balloli, Akshay Nambi, Aditya Singh, Tanuja Ganu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We analyse and evaluate Chanakya through the following experiments. We employ the sAP metric [6] that coherently evaluates real-time perception, combining latency and accuracy into a single metric.
Researcher Affiliation | Collaboration | Anurag Ghosh (Carnegie Mellon University), Vaibhav Balloli (University of Michigan), Akshay Nambi, Aditya Singh, Tanuja Ganu (Microsoft Research India)
Pseudocode | Yes | Algorithm 1: Obtaining Observations From Streaming Perception System
Open Source Code | Yes | Code can be viewed at https://github.com/microsoft/chanakya.
Open Datasets | Yes | All the results are reported on Argoverse-HD, unless stated otherwise.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | Yes | For a real-time perception system on a given hardware (P40 GPU), ... Consider the task of migrating a streaming perception system from a device employing a P40 GPU to a newer V100 GPU.
Software Dependencies | No | The paper mentions various models and frameworks but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We optimize configurations across two decision dimensions D = {Ds : {360, 480, 540, 640, 720}, Dnp : {100, 300, 500, 1000}}, i.e., detector scale and number of proposals.
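As a minimal sketch of the decision space quoted above, the two dimensions (detector scale and number of proposals) form a Cartesian product of candidate configurations. The variable names below are illustrative assumptions, not taken from the paper's released code:

```python
from itertools import product

# Decision dimensions quoted from the paper's experiment setup:
# detector input scale and number of region proposals.
D_scale = [360, 480, 540, 640, 720]
D_num_proposals = [100, 300, 500, 1000]

# The configuration space is the Cartesian product of the two dimensions.
configurations = list(product(D_scale, D_num_proposals))

print(len(configurations))  # 5 scales x 4 proposal counts = 20 configurations
```

A runtime scheduler such as Chanakya would pick one of these 20 (scale, proposals) pairs per decision step, trading accuracy against latency.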