Unsupervised Human Action Categorization with Consensus Information Bottleneck Method
Authors: Xiaoqiang Yan, Yangdong Ye, Xueying Qiu
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five realistic human action data sets show that CIB can consistently and significantly beat other state-of-the-art consensus and multi-view clustering methods. |
| Researcher Affiliation | Academia | Xiaoqiang Yan, Yangdong Ye, Xueying Qiu; School of Information Engineering, Zhengzhou University, China |
| Pseudocode | Yes | Algorithm 1 The Consensus Information Bottleneck: CIB |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | The Weizmann data set... The KTH data set... UCF Sports [Rodriguez et al., 2008] data set... UCF50 [Reddy and Shah, 2013] is an action recognition data set... HMDB [Kuehne et al., 2011] data set is a recently released large video database... |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits, such as percentages or sample counts for each split. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions methods like Bag-of-Words (BoW) model but does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | The size of the vocabulary in the BoW model is set to 1000, which results in a 1000-dimensional frequency histogram of motion features. We choose one feature from STIP, HOG, HOF randomly as our feature variable, the remaining two features are naturally treated as clustering variables to construct auxiliary clusterings... The number of categories M is set to be identical with the number of real categories on each data set. As all algorithms are stochastic, all experiments are run 10 times, and we report the average clustering results. (A rough sketch of this protocol follows the table.) |
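To make the reported setup concrete, the sketch below shows one plausible reading of the protocol: a 1000-word bag-of-words vocabulary, per-video frequency histograms, and 10 stochastic clustering runs whose scores are averaged. All variable names and the synthetic data are hypothetical, and scikit-learn's KMeans is only a stand-in for the paper's CIB clustering step, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# --- Hypothetical inputs (placeholders, not from the paper) ---
# descriptors_per_video: list of (n_i, d) arrays of local motion descriptors
#                        (e.g. STIP/HOG/HOF features) extracted from each video
# true_labels:           ground-truth action category per video
rng = np.random.default_rng(0)
descriptors_per_video = [rng.normal(size=(rng.integers(50, 200), 72)) for _ in range(100)]
true_labels = rng.integers(0, 5, size=100)

VOCAB_SIZE = 1000                     # BoW vocabulary size reported in the paper
M = len(np.unique(true_labels))       # number of clusters = number of real categories
N_RUNS = 10                           # algorithms are stochastic; average over 10 runs

# 1) Learn a visual vocabulary by clustering all local descriptors.
all_descriptors = np.vstack(descriptors_per_video)
codebook = KMeans(n_clusters=VOCAB_SIZE, n_init=1, random_state=0).fit(all_descriptors)

# 2) Quantize each video into a 1000-dimensional frequency histogram.
histograms = np.array([
    np.bincount(codebook.predict(d), minlength=VOCAB_SIZE).astype(float)
    for d in descriptors_per_video
])
histograms /= histograms.sum(axis=1, keepdims=True)  # normalize counts to frequencies

# 3) Run the (stochastic) clustering step N_RUNS times and average the score.
#    KMeans here is only a placeholder for the CIB algorithm of the paper.
scores = []
for run in range(N_RUNS):
    pred = KMeans(n_clusters=M, n_init=1, random_state=run).fit_predict(histograms)
    scores.append(normalized_mutual_info_score(true_labels, pred))

print(f"mean NMI over {N_RUNS} runs: {np.mean(scores):.3f}")
```

This only illustrates the evaluation protocol (fixed vocabulary size, M equal to the true number of categories, 10-run averaging); it does not reflect how CIB combines the auxiliary clusterings built from the remaining two feature types.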