Towards Distraction-Robust Active Visual Tracking

Authors: Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, Yizhou Wang

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our tracker performs distraction-robust active visual tracking as desired and generalizes well to unseen environments. Our tracker significantly outperforms state-of-the-art methods in a room with clean backgrounds and a number of moving distractors. The effectiveness of the introduced components is validated in an ablation study.
Researcher Affiliation | Collaboration | (1) Center on Frontiers of Computing Studies, Dept. of Computer Science, Peking University, Beijing, P.R. China; (2) Tencent Robotics X, Shenzhen, P.R. China; (3) Tencent, Shenzhen, P.R. China; (4) Adv. Inst. of Info. Tech., Peking University, Hangzhou, P.R. China.
Pseudocode | No | The paper describes the proposed methods and algorithms in text but does not include any formal pseudocode blocks or sections explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | The code and demo videos are available at https://sites.google.com/view/distraction-robust-avt.
Open Datasets | No | The paper states that experiments are conducted in 'UnrealCV environments' and that models are 'trained in Simple Room with environment augmentation'. These are simulated environments used for training, not publicly available datasets with specific access information (links, DOIs, or formal citations to a dataset repository).
Dataset Splits | No | The paper does not specify explicit training, validation, and test dataset splits with percentages or sample counts. It refers to a number of 'testing episodes' for evaluation, but this is an evaluation procedure, not a data split.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper mentions software components and algorithms such as 'UnrealCV', 'A3C', and 'ConvLSTM', but it does not provide specific version numbers for these or other software dependencies required for reproducibility.
Experiment Setup | No | The paper describes general experimental settings, such as the action space and observation type, and states that 'More implementation details are introduced in Appendix C'. However, Appendix C is not provided in the given text, and the main body does not contain specific hyperparameter values or detailed system-level training configurations.