Submodular Attribute Selection for Action Recognition in Video
Authors: Jingjing Zheng, Zhuolin Jiang, Rama Chellappa, P. Jonathon Phillips
NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed attribute-based representation can significantly boost the performance of action recognition algorithms and outperform most recently proposed recognition approaches. |
| Researcher Affiliation | Collaboration | Jingjing Zheng, UMIACS, University of Maryland, College Park, MD, USA, zjngjng@umiacs.umd.edu; Zhuolin Jiang, Noah's Ark Lab, Huawei Technologies, zhuolin.jiang@huawei.com; Rama Chellappa, UMIACS, University of Maryland, College Park, MD, USA, rama@umiacs.umd.edu; P. Jonathon Phillips, National Institute of Standards and Technology, Gaithersburg, MD, USA, jonathon.phillips@nist.gov |
| Pseudocode | Yes | Algorithm 1 Submodular Attribute Selection |
| Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | In this section, we validate our method for action recognition on two public datasets: the Olympic Sports dataset [25] and the UCF101 dataset [20]. |
| Dataset Splits | Yes | Following the training and testing dataset partitions proposed in [30], we train a linear SVM and report classification accuracies of different attribute-based representations in Table 1. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions using an SVM and other techniques like KSVD and PCA, but it does not provide specific version numbers for any software, libraries, or frameworks used. |
| Experiment Setup | Yes | Figure 4d shows the performance curves for a range of λ. We observe that the combination of entropy rate term and maximum coverage term obtains a higher classification accuracy than when only one of them is used. In addition, our approach is insensitive to the selection of λ. Hence we use λ = 0.1 throughout the experiments. |
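
The Pseudocode and Experiment Setup rows describe the method only at a high level: Algorithm 1 greedily maximizes a submodular objective that combines an entropy rate term with a maximum coverage term weighted by λ = 0.1. Since no source code is released, the following is a minimal, hypothetical sketch of such a greedy selection scheme; the `entropy_rate` and `coverage` functions below are simplified stand-ins for illustration only, not the paper's exact definitions.

```python
"""Hedged sketch of greedy submodular attribute selection.

This is NOT the authors' code (none is released). It only illustrates the
generic greedy scheme suggested by Algorithm 1: repeatedly add the attribute
with the largest marginal gain of F(A) = entropy_rate(A) + lam * coverage(A).
Both term implementations are simplified placeholders.
"""
import numpy as np


def coverage(selected, class_attribute_matrix):
    """Toy coverage term: number of action classes covered by at least one
    selected attribute (class_attribute_matrix[c, a] = 1 if attribute a is
    associated with class c)."""
    if not selected:
        return 0.0
    return float(np.any(class_attribute_matrix[:, list(selected)], axis=1).sum())


def entropy_rate(selected, similarity):
    """Toy diversity term: negative mean pairwise similarity among selected
    attributes, a crude proxy for the paper's entropy rate on the attribute graph."""
    if len(selected) < 2:
        return 0.0
    idx = list(selected)
    sub = similarity[np.ix_(idx, idx)]
    off_diag = sub[~np.eye(len(idx), dtype=bool)]
    return -float(off_diag.mean())


def greedy_attribute_selection(similarity, class_attribute_matrix, budget, lam=0.1):
    """Greedily pick `budget` attributes maximizing the combined objective."""
    n_attributes = similarity.shape[0]
    selected = set()
    for _ in range(budget):
        current = entropy_rate(selected, similarity) + lam * coverage(selected, class_attribute_matrix)
        best_gain, best_a = -np.inf, None
        for a in range(n_attributes):
            if a in selected:
                continue
            candidate = selected | {a}
            value = entropy_rate(candidate, similarity) + lam * coverage(candidate, class_attribute_matrix)
            if value - current > best_gain:
                best_gain, best_a = value - current, a
        selected.add(best_a)
    return sorted(selected)
```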
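
The Dataset Splits row indicates that the selected attributes are evaluated by training a linear SVM on the attribute-based representation and reporting classification accuracy. Below is a minimal sketch of that evaluation step, assuming scikit-learn and pre-computed per-video attribute scores; `X_train`, `X_test`, `y_train`, `y_test`, and `selected` are hypothetical placeholders, and the paper does not specify its SVM implementation or hyperparameters.

```python
"""Hedged sketch: linear SVM on the selected-attribute representation.
Feature extraction and the actual Olympic Sports / UCF101 splits are not
reproduced here; the arrays are assumed to already hold per-video attribute
scores."""
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score


def evaluate_attribute_representation(X_train, y_train, X_test, y_test, selected):
    clf = LinearSVC(C=1.0)                  # C=1.0 is an assumed default, not from the paper
    clf.fit(X_train[:, selected], y_train)  # restrict features to the selected attributes
    y_pred = clf.predict(X_test[:, selected])
    return accuracy_score(y_test, y_pred)
```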