Subset Selection and Summarization in Sequential Data
Authors: Ehsan Elhamifar, M. Clara De Paolis Kaluza
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real data, including instructional video summarization, show that our sequential subset selection framework not only achieves better encoding and diversity than the state of the art, but also successfully incorporates dynamics of data, leading to compatible representatives. |
| Researcher Affiliation | Academia | Ehsan Elhamifar, College of Computer and Information Science, Northeastern University, Boston, MA 02115, eelhami@ccs.neu.edu; M. Clara De Paolis Kaluza, College of Computer and Information Science, Northeastern University, Boston, MA 02115, clara@ccs.neu.edu |
| Pseudocode | No | The paper describes the message passing algorithm using mathematical equations (14-20) and textual explanations, but it does not present a structured pseudocode block or algorithm listing. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that its source code is open or publicly available. |
| Open Datasets | Yes | We use videos from the instructional video dataset [45], which consists of 30 instructional videos for each of five activities. |
| Dataset Splits | Yes | We use 60% of the videos from each task as the training set to build an HMM model whose states form the source set, X. For each of the 40% remaining videos, we set Y to be the sequence of features extracted from the superframes of the video. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'deep neural network' for feature extraction and an 'HMM model' but does not specify any software names with version numbers for implementation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In the experiments we set M = 50, T = 100. For a fixed β, we run Seq FL for different values of λ to select different number of representatives. ... for Seq FL we have set β = 0.02. ... We use 60% of the videos from each task as the training set to build an HMM model whose states form the source set, X. ... We preprocess the videos by segmenting each video into superframes [46] and obtain features using a deep neural network that we have constructed for feature extraction for summarization tasks. ... For each method, we choose the number of HMM states and the number of slots for alignment that achieve the best performance. |
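The setup quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name `split_videos`, the random seed, and the λ grid are all hypothetical; only the 60%/40% per-task split, the 30-videos-per-task dataset size, and the fixed β = 0.02 come from the paper.

```python
import random

def split_videos(videos, train_frac=0.6, seed=0):
    """Shuffle a list of video IDs and split it into train/test sets,
    mirroring the paper's 60%/40% per-task split (seed is an assumption)."""
    rng = random.Random(seed)
    vids = list(videos)
    rng.shuffle(vids)
    cut = int(round(train_frac * len(vids)))
    return vids[:cut], vids[cut:]

# The instructional video dataset [45] has 30 videos per activity,
# so a 60/40 split yields 18 training and 12 test videos per task.
train, test = split_videos(range(30))

# For a fixed beta, the paper runs Seq-FL over a range of lambda values
# to vary the number of selected representatives; this grid is illustrative.
beta = 0.02
lambdas = [0.01 * 2 ** k for k in range(6)]
```

The training videos would feed the HMM whose states form the source set X, while each test video supplies the target sequence Y of superframe features; those steps depend on the unreleased feature extractor and are not reproduced here.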