Unsupervised Learning based Jump-Diffusion Process for Object Tracking in Video Surveillance

Authors: Xiaobai Liu, Donovan Lo, Chau Thuan

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed method on challenging video sequences and compare it to other methods. Significant improvements are obtained for the videos including frequent interactions.
Researcher Affiliation | Collaboration | Xiaobai Liu (1,2), Donovan Lo (1), Chau Thuan (1); 1 San Diego State University, San Diego, CA; 2 Xre Lab Inc., San Diego, CA; xiaobai.liu@sdsu.edu
Pseudocode | Yes | Algorithm 1 summarizes the sketch of the proposed training algorithm.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper.
Open Datasets | Yes | We pre-train the policy networks on the ImageNet dataset [Deng et al., 2009], and then fine-tune the network parameters on the generated training episodes.
Dataset Splits | No | The paper does not explicitly state a validation dataset split (e.g., percentages, sample counts, or explicit mention of a validation set).
Hardware Specification | Yes | We implement the proposed tracker using MatConvNet toolbox and run all experiments on a workstation with CPU: Intel Core i7-7700K, GPU: Nvidia GeForce GTX 1050, and Memory: 8 GB.
Software Dependencies | No | The paper mentions 'MatConvNet toolbox' but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | We resize all images to be 112 × 112 pixels before feeding into the policy networks. We use dropout regularization for fully-connected layers, with drop rate 0.7. Each convolution layer is followed by the rectified linear unit (ReLU) activation function. To train the network, we set the learning rate to be 0.0001, γ = 0.95... We use experience replay method [Schaul et al., 2015] during training and retain in the replay memory 5000 successful samples and 5000 failure samples. To update the network parameters, we sample 50 samples from the memory and accumulate the gradients.
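The Experiment Setup row gives concrete hyperparameters: 112 × 112 inputs, ReLU after each convolution layer, dropout 0.7 on the fully-connected layers, learning rate 0.0001, γ = 0.95, a replay memory of 5000 successful and 5000 failure samples, and gradient accumulation over minibatches of 50. The authors implemented the tracker in the MatConvNet toolbox and no source code is released, so the sketch below is only a minimal PyTorch approximation of that configuration, not the authors' code; the layer sizes, the transition format, the number of actions, the Adam optimizer, the one-step Q-learning target with Huber loss, and the uniform sampling from the replay buffers are all assumptions introduced to make the reported numbers concrete.

```python
# Hedged sketch of the reported training configuration (not the authors' code).
# Paper-reported values: 112x112 inputs, ReLU conv layers, dropout 0.7 on FC layers,
# lr = 1e-4, gamma = 0.95, 5000 successful + 5000 failure replay samples, batches of 50.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNetwork(nn.Module):
    """Small CNN policy; the exact layer sizes are NOT given in the paper (assumed)."""

    def __init__(self, num_actions: int):
        super().__init__()
        # Input assumed to be a 3 x 112 x 112 image (paper resizes frames to 112 x 112).
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, stride=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, stride=2)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=2)
        self.fc1 = nn.Linear(64 * 12 * 12, 512)
        self.drop = nn.Dropout(p=0.7)          # drop rate 0.7 on FC layers (from the paper)
        self.fc2 = nn.Linear(512, num_actions)

    def forward(self, x):
        # Each convolution layer is followed by ReLU, as stated in the paper.
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = x.flatten(1)
        x = self.drop(F.relu(self.fc1(x)))
        return self.fc2(x)


class ReplayMemory:
    """Retains 5000 successful and 5000 failure transitions, mirroring the reported counts."""

    def __init__(self, capacity_per_class: int = 5000):
        self.success = deque(maxlen=capacity_per_class)
        self.failure = deque(maxlen=capacity_per_class)

    def push(self, transition, successful: bool):
        (self.success if successful else self.failure).append(transition)

    def sample(self, batch_size: int = 50):
        pool = list(self.success) + list(self.failure)
        return random.sample(pool, min(batch_size, len(pool)))


def train_step(policy, optimizer, memory, gamma: float = 0.95):
    """One update: sample 50 transitions and accumulate their gradients (paper);
    the one-step Q-learning target and Huber loss are assumptions."""
    batch = memory.sample(50)
    if not batch:
        return
    optimizer.zero_grad()
    for state, action, reward, next_state, done in batch:
        q = policy(state.unsqueeze(0))[0, action]
        with torch.no_grad():
            bootstrap = 0.0 if done else gamma * policy(next_state.unsqueeze(0)).max().item()
            target = torch.tensor(reward + bootstrap, dtype=torch.float32)
        loss = F.smooth_l1_loss(q, target)
        loss.backward()  # gradients from all 50 sampled transitions accumulate before the step
    optimizer.step()


policy = PolicyNetwork(num_actions=4)                       # number of actions assumed
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)  # learning rate 0.0001 (paper); optimizer assumed
memory = ReplayMemory()
```

Note that the paper cites [Schaul et al., 2015] for experience replay; the sketch above samples uniformly from the two buffers and only mirrors the reported buffer sizes and batch size, not a prioritized sampling scheme.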