Temporally Adaptive Restricted Boltzmann Machine for Background Modeling
Authors: Linli Xu, Yitan Li, Yubo Wang, Enhong Chen
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on two datasets: Wall Flower Dataset (Toyama et al. 1999) and the dataset used in (Li et al. 2003), which is denoted as I2R here. ... We compare our model with three representative methods for background subtraction including both parametric and non-parametric... To evaluate the performance, we employ the traditional pixel-level measurement F1-measure. ... Table 4 reports the results of foreground detection on the sequences in the Wall Flower dataset... |
| Researcher Affiliation | Academia | Linli Xu, Yitan Li, Yubo Wang and Enhong Chen School of Computer Science and Technology University of Science and Technology of China linlixu@ustc.edu.cn, {etali, wybang}@mail.ustc.edu.cn, cheneh@ustc.edu.cn |
| Pseudocode | Yes | Algorithm 1: Framework of background modeling with TARBM |
| Open Source Code | No | The paper does not include an unambiguous statement or a direct link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conduct experiments on two datasets: Wall Flower Dataset (Toyama et al. 1999) and the dataset used in (Li et al. 2003), which is denoted as I2R here. The Wall Flower dataset (http://research.microsoft.com/enus/um/people/jckrumm/wallflower/testimages.htm)... On the other hand, I2R (http://perception.i2r.a-star.edu.sg/bkmodel/bkindex.html) consists of 9 video sequences... |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific details on training/validation/test dataset splits, such as percentages, sample counts, or explicit validation sets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as programming languages, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | In order to model the sharp illumination changes in most sequences in Wall Flower, the size of the hidden layer both in RBM and TARBM is set to 400, while in I2R the parameter is set to 50 considering the relatively smooth variations in the sequences. The size of the visible layer is equal to the number of pixels, and λ in TARBM is set to 1. When training TARBM, the parameters W_l, W_r and b are initialized randomly, while c_l and c_r are initialized with the mean of the training frames. The learning rate ϵ is fixed at 1e-3, and the maximum number of epochs is 150. We also follow the tricks of momentum and weight-decay for increasing the speed of learning as advised in (Hinton 2010), which are set to 0.9 and 2e-4 respectively. When testing new frames, the update rate of the parameters is set to 1e-2. |
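The evaluation row above cites the traditional pixel-level F1-measure. A minimal sketch of that metric (generic foreground/background F1, not the authors' evaluation script; the toy masks are invented for illustration):

```python
import numpy as np

def pixel_f1(pred_mask, gt_mask):
    """Pixel-level F1-measure: harmonic mean of precision and recall
    over foreground pixels, computed from binary masks."""
    pred = np.asarray(pred_mask, dtype=bool).ravel()
    gt = np.asarray(gt_mask, dtype=bool).ravel()
    tp = np.sum(pred & gt)    # foreground pixels correctly detected
    fp = np.sum(pred & ~gt)   # background pixels marked as foreground
    fn = np.sum(~pred & gt)   # foreground pixels missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical 2x3 masks: 3 of 4 predicted foreground pixels are correct,
# and 1 true foreground pixel is missed.
pred = np.array([[1, 1, 0], [1, 1, 0]])
gt = np.array([[1, 1, 0], [1, 0, 1]])
print(pixel_f1(pred, gt))  # precision = recall = 3/4, so F1 = 0.75
```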
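The setup row pins down enough hyperparameters that a training loop can be sketched. The following is a generic CD-1 RBM in numpy wired with the quoted values (hidden size 400, ϵ = 1e-3, 150 epochs, momentum 0.9, weight decay 2e-4, random weight init, visible bias initialized to the mean training frame); it is not the authors' TARBM (which has paired W_l/W_r parameters), and the frame resolution and synthetic frames are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible = 32 * 24   # hypothetical frame size; paper: one unit per pixel
n_hidden = 400        # 400 for Wall Flower, 50 for I2R
lr = 1e-3             # learning rate epsilon
momentum = 0.9        # momentum coefficient (Hinton 2010)
weight_decay = 2e-4   # L2 weight decay (Hinton 2010)
n_epochs = 150        # maximum number of epochs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random weight/bias init; visible bias set to the mean training frame.
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
b = np.zeros(n_hidden)                     # hidden bias
frames = rng.random((100, n_visible))      # stand-in training frames in [0, 1]
c = frames.mean(axis=0)                    # visible bias = mean frame

vel_W = np.zeros_like(W)                   # momentum velocity for W
for epoch in range(n_epochs):
    v0 = frames
    h0 = sigmoid(v0 @ W + b)                                  # positive phase
    h0_sample = (h0 > rng.random(h0.shape)).astype(float)     # sample hidden
    v1 = sigmoid(h0_sample @ W.T + c)                         # reconstruction
    h1 = sigmoid(v1 @ W + b)                                  # negative phase
    grad_W = (v0.T @ h0 - v1.T @ h1) / len(frames)
    vel_W = momentum * vel_W + lr * (grad_W - weight_decay * W)
    W += vel_W
    b += lr * (h0.mean(axis=0) - h1.mean(axis=0))
    c += lr * (v0.mean(axis=0) - v1.mean(axis=0))
```

At test time the paper updates the parameters online with a larger rate (1e-2), which in this sketch would amount to rerunning the same update on each incoming frame with `lr = 1e-2`.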