Multi-View 3D Human Tracking in Crowded Scenes

Authors: Xiaobai Liu

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on both public video benchmarks and a newly built multi-view video dataset. Results with comparisons showed that our method could achieve state-of-the-art tracking results and meter-level 3D localization on challenging videos."
Researcher Affiliation | Academia | "Xiaobai Liu, Department of Computer Science, San Diego State University, GMCS Building, Campanile Drive, San Diego, CA 92182"
Pseudocode | Yes | "Algorithm 1 summarizes the sketch of our method."
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "Dataset-2 is collected by Berclaz et al. (2011), known as the EPFL dataset, which includes five scenes."
Dataset Splits | No | The paper mentions training a model and evaluating on datasets, but does not provide specific train/validation/test splits (e.g., percentages or sample counts), nor does it reference predefined splits with citations for reproducibility.
Hardware Specification | Yes | "On a DELL workstation (with 64GB memory, i7 CPU @ 2.80GHz, and NVIDIA Tesla K40 GPU), our algorithm can process on average 15 frames per second."
Software Dependencies | No | The paper mentions using a toolkit developed by Vondrick, Patterson, and Ramanan (2012) for annotation and "the same features as Hoiem, Efros, and Hebert (2005)", but does not provide specific version numbers for any software, libraries, or dependencies used in their implementation.
Experiment Setup | Yes | "We set the tuning parameters (i.e., the λs) empirically for each scene and fix them throughout the evaluations. For spline fitting, we fix the number of knot points to be 15, and the random noise level to be 2σ = 0.6 meters."
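To make the quoted spline-fitting setup concrete, the following is a minimal, hypothetical sketch of fitting a cubic spline with 15 fixed knot points to one coordinate of a noisy trajectory, using the paper's stated noise level 2σ = 0.6 meters. The trajectory, time stamps, and truncated-power spline basis are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical setup: a synthetic 1D trajectory x(t) in meters,
# corrupted with Gaussian noise at the paper's level 2*sigma = 0.6 m.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 300)            # time stamps (seconds)
truth = 0.5 * t + np.sin(t)                # assumed ground-truth x(t), meters
sigma = 0.3                                # 2*sigma = 0.6 m
noisy = truth + rng.normal(0.0, sigma, t.size)

# Cubic truncated-power spline basis with 15 interior knots
# (the paper fixes the number of knot points to 15), fit by least squares.
knots = np.linspace(t[0], t[-1], 17)[1:-1]             # 15 interior knots
basis = [t**p for p in range(4)]                       # 1, t, t^2, t^3
basis += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
A = np.column_stack(basis)
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
smoothed = A @ coef                                    # spline-smoothed track

rmse_noisy = float(np.sqrt(np.mean((noisy - truth) ** 2)))
rmse_fit = float(np.sqrt(np.mean((smoothed - truth) ** 2)))
```

With ~19 basis functions against 300 samples, the least-squares spline averages out most of the per-frame noise, which is the role this step plays in producing smooth 3D trajectories.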