Online Robust Low-Rank Tensor Learning

Authors: Ping Li, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, Shuicheng Yan

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies have validated the effectiveness of the proposed method on both synthetic data and one practical task, i.e., video background subtraction.
Researcher Affiliation | Collaboration | Ping Li (1,2), Jiashi Feng (2), Xiaojie Jin (2), Luming Zhang (3), Xianghua Xu (1), Shuicheng Yan (4,2). (1) School of Computer Science and Technology, Hangzhou Dianzi University; (2) Department of Electrical and Computer Engineering, National University of Singapore; (3) Department of Computer and Information, Hefei University of Technology; (4) Qihoo 360 Artificial Intelligence Institute.
Pseudocode | Yes | Algorithm 1: Online Robust Low-Rank Tensor Modeling
Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets | Yes | To generate synthetic data with low-rank structure, we construct an authentic tensor S of size 50 × 50 × 30 by rank-3 factor matrices as in [Sobral et al., 2015]. ... We test on the I2R database [Li et al., 2004] which involves various indoor and outdoor environments. (A construction sketch follows the table.)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for a validation set. It mentions "all frames are passed two epochs" for online methods, but this is not a validation split.
Hardware Specification | Yes | All tests were run with a 3.06 GHz X5675 processor and 24 GB RAM.
Software Dependencies | No | The paper mentions comparing with other methods and their default parameters, but does not provide specific software dependencies with version numbers for its own implementation.
Experiment Setup | Yes | For ORLTM, λ1 = 0.5/√(log(n1·n2)), λ2 = 1, λ3 = λ1·√(log t). ... For our method, parameters are fixed as λ1 = α/√(log(n1·n2)) (α is 0.02 for Curtain, Lobby, Watersurface; and 0.1 for the rest), λ2 = 1, λ3 = λ1·√(log t) (t is the iteration number), and each sub-tensor size is set to 1. For all methods, the target rank p is empirically defined as 10... For online methods, all frames are passed two epochs. (A parameter-computation sketch follows the table.)
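
The Open Datasets row quotes a synthetic tensor of size 50 × 50 × 30 built from rank-3 factor matrices. Below is a minimal sketch of such a CP-style construction; the Gaussian factor entries, the sparse-corruption fraction, and the function name are assumptions for illustration, not details stated in the paper.

```python
import numpy as np

def make_lowrank_tensor(dims=(50, 50, 30), rank=3, corrupt_frac=0.1, seed=0):
    """Sketch of a synthetic low-rank tensor with sparse corruptions.

    The clean tensor is a CP (rank-3) product of factor matrices;
    Gaussian factors and corrupt_frac are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((dims[0], rank))
    V = rng.standard_normal((dims[1], rank))
    W = rng.standard_normal((dims[2], rank))

    # CP construction: S[i, j, k] = sum_r U[i, r] * V[j, r] * W[k, r]
    S = np.einsum('ir,jr,kr->ijk', U, V, W)

    # Corrupt a random subset of entries with gross errors.
    E = np.zeros(dims)
    mask = rng.random(dims) < corrupt_frac
    E[mask] = rng.uniform(-np.abs(S).max(), np.abs(S).max(), size=mask.sum())
    return S, S + E

S_clean, X_observed = make_lowrank_tensor()
print(S_clean.shape)  # (50, 50, 30)
```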
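
The Experiment Setup row quotes closed-form regularization weights. The sketch below computes them as written, assuming n1 and n2 are the two frontal-slice dimensions and t is the 1-based streaming iteration index; the default α = 0.5 matches the synthetic setting, while 0.02 or 0.1 is used for the video sequences. The function name is hypothetical.

```python
import math

def orltm_params(n1, n2, t, alpha=0.5):
    """Regularization weights as quoted in the experiment setup:
    lambda1 = alpha / sqrt(log(n1 * n2)), lambda2 = 1,
    lambda3 = lambda1 * sqrt(log(t)).
    """
    lam1 = alpha / math.sqrt(math.log(n1 * n2))
    lam2 = 1.0
    lam3 = lam1 * math.sqrt(math.log(t))
    return lam1, lam2, lam3

# Example: 50 x 50 frontal slices, 10th streamed sub-tensor
print(orltm_params(50, 50, t=10))
```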