Tracking Without Re-recognition in Humans and Machines

Authors: Drew Linsley, Girik Malik, Junkyung Kim, Lakshmi Narasimhan Govindarajan, Ennio Mingolla, Thomas Serre

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We measure the ability of models to learn PathTracker and systematically generalize to novel versions of the challenge when trained on 20K samples. We trained models using a similar approach as in our human psychophysics.
Researcher Affiliation | Collaboration | 1 Carney Institute for Brain Science, Brown University, Providence, RI; 2 Northeastern University, Boston, MA; 3 DeepMind, London, UK
Pseudocode | No | The paper describes methods and models but does not contain a structured pseudocode or algorithm block.
Open Source Code | Yes | We release all PathTracker data, code, and human psychophysics at http://bit.ly/InTcircuit to spur interest in the challenge of tracking without re-recognition.
Open Datasets | Yes | We release all PathTracker data, code, and human psychophysics at http://bit.ly/InTcircuit to spur interest in the challenge of tracking without re-recognition. InT+TransT training and evaluation hews close to the TransT procedure. This includes training on the latest object tracking challenges in computer vision: TrackingNet [13], LaSOT [42], and GOT-10K [43].
Dataset Splits | Yes | Models were trained to detect if the target dot reached the blue goal marker using binary cross-entropy and the Adam optimizer [44] until performance on a test set of 20K videos with 14 distractors decreased for 200 straight epochs. In each experiment, we selected model weights that performed best on the 14-distractor dataset. We selected the weights that performed best on GOT-10K validation.
Hardware Specification | Yes | We used four NVIDIA GTX GPUs and a batch size of 180 for training. The complete model was trained with batches of 24 videos on 8 NVIDIA GTX GPUs for 150 epochs (2 days).
Software Dependencies | No | The paper mentions software like the Adam optimizer and a MATLAB toolbox, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Models were trained to detect if the target dot reached the blue goal marker using binary cross-entropy and the Adam optimizer [44] until performance on a test set of 20K videos with 14 distractors decreased for 200 straight epochs. Models were retrained three times on learning rates {1e-2, 1e-3, 1e-4, 3e-4, 1e-5} to optimize performance. We used four NVIDIA GTX GPUs and a batch size of 180 for training.
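
The Dataset Splits and Experiment Setup rows quote the paper's PathTracker training recipe: binary cross-entropy with Adam, a learning-rate sweep over {1e-2, 1e-3, 1e-4, 3e-4, 1e-5} with three retrainings per rate, batch size 180, and early stopping once accuracy on the 20K-video, 14-distractor test set has decreased for 200 straight epochs. The PyTorch sketch below illustrates that recipe only; it is not the authors' released code, and `PathTrackerModel`, `build_loaders`, `train_loader`, and `test_loader` are hypothetical placeholders.

```python
# Minimal sketch of the training recipe quoted above (not the authors' released code).
# Only the loss, optimizer, learning-rate sweep, and stopping rule come from the paper;
# the model and data-loading names are assumed placeholders.
import torch
import torch.nn as nn


def train_one_setting(model, train_loader, test_loader, lr, patience=200, device="cuda"):
    """Adam + binary cross-entropy; stop after test accuracy has decreased
    for `patience` straight epochs, keeping the best 14-distractor weights."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()  # "did the target dot reach the blue goal?" labels

    best_acc, best_state = 0.0, None
    prev_acc, bad_epochs = 0.0, 0
    while bad_epochs < patience:
        model.train()
        for videos, labels in train_loader:  # batch size 180 in the reported setup
            videos, labels = videos.to(device), labels.float().to(device)
            optimizer.zero_grad()
            loss = criterion(model(videos).squeeze(-1), labels)
            loss.backward()
            optimizer.step()

        # Evaluate on the 20K-video test set with 14 distractors.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for videos, labels in test_loader:
                preds = (model(videos.to(device)).squeeze(-1) > 0).cpu()
                correct += (preds == labels.bool()).sum().item()
                total += labels.numel()
        acc = correct / total

        if acc > best_acc:  # keep the weights that do best on the 14-distractor set
            best_acc = acc
            best_state = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}
        bad_epochs = bad_epochs + 1 if acc < prev_acc else 0
        prev_acc = acc
    return best_acc, best_state


# Learning-rate sweep from the paper; each setting was retrained three times.
if __name__ == "__main__":
    for lr in (1e-2, 1e-3, 1e-4, 3e-4, 1e-5):
        for run in range(3):
            torch.manual_seed(run)
            # model, train_loader, test_loader = build_loaders(...)  # hypothetical setup code
            # train_one_setting(model, train_loader, test_loader, lr=lr)
            pass
```

The stopping rule here counts consecutive epochs in which test accuracy drops relative to the previous epoch, which is one straightforward reading of "decreased for 200 straight epochs"; the released code may implement the criterion differently.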