Pose-Assisted Multi-Camera Collaboration for Active Object Tracking
Authors: Jing Li, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, Yizhou Wang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. |
| Researcher Affiliation | Collaboration | 1Center for Data Science, Peking University 2Computer Science Dept., School of EECS, Peking University 3Advanced Innovation Center for Future Visual Entertainment (AICFVE), Beijing Film Academy 4Key Lab. of System Control and Information Processing (MoE), Shanghai; Automation Dept., Shanghai Jiao Tong University 5Center on Frontiers of Computing Studies, Peking University 6Deepwise AI Lab ... This work was supported by MOST-2018AAA0102004, NSFC-61625201, NSFC-61527804, Qualcomm University Research Grant. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | The code and demo videos are available on our website https://sites.google.com/view/pose-assistedcollaboration. |
| Open Datasets | Yes | Specifically, we choose pictures from a texture dataset (Kylberg 2011) and place them on the surface of walls, floor, obstacles etc. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components and algorithms (e.g., A3C algorithm, Conv-LSTM network, GRU, CNNs, LSTM Network) but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The action space is discrete and contains eleven candidate actions (turn left, turn right, turn up, turn down, turn top-left, turn top-right, turn bottom-left, turn bottom-right, zoom in, zoom out and keep still). We take a two-phase training strategy for learning. Specifically, we choose pictures from a texture dataset (Kylberg 2011) and place them on the surface of walls, floor, obstacles etc. And we apply the A3C algorithm to update the network architecture of the Pose-Assisted Multi-Camera Collaboration System. |
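The eleven-action discrete space quoted above can be sketched as a simple enumeration. This is an illustrative reconstruction, not the authors' code; the class and member names (`CameraAction`, `TURN_LEFT`, etc.) are assumptions chosen to mirror the action list in the paper.

```python
from enum import Enum

class CameraAction(Enum):
    """Illustrative sketch of the paper's 11-action discrete space (names assumed)."""
    TURN_LEFT = 0
    TURN_RIGHT = 1
    TURN_UP = 2
    TURN_DOWN = 3
    TURN_TOP_LEFT = 4
    TURN_TOP_RIGHT = 5
    TURN_BOTTOM_LEFT = 6
    TURN_BOTTOM_RIGHT = 7
    ZOOM_IN = 8
    ZOOM_OUT = 9
    KEEP_STILL = 10

# A policy head over this space would output 11 logits,
# one per candidate action.
NUM_ACTIONS = len(CameraAction)
```

In an A3C setup such as the one the paper describes, each worker's policy network would emit a categorical distribution over these `NUM_ACTIONS` logits and sample one action per step.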