Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
Authors: Peter Ondruska, Ingmar Posner
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data as commonly encountered in robotics applications and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise. |
| Researcher Affiliation | Academia | Peter Ondruška and Ingmar Posner, Mobile Robotics Group, University of Oxford, United Kingdom {ondruska, ingmar}@robots.ox.ac.uk |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The source code of our experiments is available at: http://mrg.robots.ox.ac.uk/mrg_people/peter-ondruska/ |
| Open Datasets | No | We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data as commonly encountered in robotics applications and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise. The paper describes the creation of a synthetic dataset but does not provide access information (link, DOI, or specific citation to a public resource) for it. |
| Dataset Splits | No | We generated 10,000 sequences of length 200 time steps and trained the network for a total of 50,000 iterations using stochastic gradient descent with learning rate 0.9. No specific percentages or sample counts for training, validation, or test splits are provided. |
| Hardware Specification | No | One pass through the network takes 10ms on a standard laptop, making the method suitable for real-time data filtering. This identifies a general device type, but no specific hardware details (e.g., CPU/GPU model, memory, or cloud instance type) are given. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, TensorFlow 2.x) are provided. |
| Experiment Setup | Yes | We generated 10,000 sequences of length 200 time steps and trained the network for a total of 50,000 iterations using stochastic gradient descent with learning rate 0.9. The network has in total 11k parameters and its hyperparameters such as number of channels in each layer and size of the kernels were set by cross-validation. |
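The reported setup (10,000 synthetic sequences of 200 time steps, 50,000 SGD iterations, learning rate 0.9, an 11k-parameter recurrent network) can be illustrated at toy scale. The sketch below is not the paper's recurrent convolutional architecture; it is a hypothetical, heavily simplified stand-in that shows the same training pattern: generate synthetic noisy sequences and fit a recurrent filter end-to-end with plain SGD. The sequence model, filter form, learning rate, and iteration count are all illustrative choices, not values from the paper (except the 200-step sequence length).

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200  # sequence length, as in the paper's synthetic dataset

def make_sequence():
    """Latent random walk observed under additive sensor noise."""
    latent = np.cumsum(rng.normal(0.0, 0.1, T))
    obs = latent + rng.normal(0.0, 0.5, T)
    return obs, latent

def filter_seq(obs, a, b):
    """Scalar recurrent filter: h_t = a * h_{t-1} + b * x_t."""
    h, out = 0.0, np.empty_like(obs)
    for t, x in enumerate(obs):
        h = a * h + b * x
        out[t] = h
    return out

def loss(params, obs, latent):
    a, b = params
    return np.mean((filter_seq(obs, a, b) - latent) ** 2)

params = np.array([0.5, 0.5])
lr, eps = 0.05, 1e-5  # toy learning rate; the paper reports 0.9
for it in range(2000):
    obs, latent = make_sequence()  # fresh synthetic sequence per step
    grad = np.zeros(2)             # finite-difference gradient for brevity
    for i in range(2):
        up, dn = params.copy(), params.copy()
        up[i] += eps
        dn[i] -= eps
        grad[i] = (loss(up, obs, latent) - loss(dn, obs, latent)) / (2 * eps)
    grad = np.clip(grad, -1.0, 1.0)          # tame noisy per-sequence gradients
    params = np.clip(params - lr * grad, -0.99, 0.99)  # keep the recurrence stable

obs, latent = make_sequence()
mse_filtered = np.mean((filter_seq(obs, *params) - latent) ** 2)
mse_raw = np.mean((obs - latent) ** 2)
print(f"filtered MSE {mse_filtered:.3f} vs raw MSE {mse_raw:.3f}")
```

The same pattern scales up to the paper's setting by swapping the scalar filter for a recurrent network and the finite-difference gradient for backpropagation through time.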