Learning Low-Dimensional Temporal Representations
Authors: Bing Su, Ying Wu
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we evaluate the proposed LT-LDA in comparison with several supervised DR methods for sequences on three real-world datasets. Evaluations on another dataset are presented in the supplementary material. |
| Researcher Affiliation | Academia | 1Science & Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China 2Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, USA. |
| Pseudocode | Yes | Algorithm 1 Abstract template learning; Algorithm 2 LT-LDA |
| Open Source Code | No | The paper does not provide concrete access to its own source code (e.g., a specific repository link or an explicit code release statement for the methodology described). |
| Open Datasets | Yes | ChaLearn Gesture dataset (Escalera et al., 2013b;a); MSR Sports Action3D dataset (Li et al., 2010); Olympic Sports dataset (Niebles et al., 2010) |
| Dataset Splits | Yes | ChaLearn Gesture dataset (...) has been split into training, validation and test sets. MSR Sports Action3D dataset (...) split the dataset into training and test set. Olympic Sports dataset (...) The dataset has been split into training and test sets, where 649 videos are used for training and 134 videos are used for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'We use the drtoolbox (van der Maaten & Hinton, 2008) to perform LDA and kLDA' but does not specify version numbers for this or any other software dependencies. |
| Experiment Setup | Yes | We fix a to 2 in the following experiments, and fix L to 8 except on the Olympic Sports dataset, where we set L to 20 such that LT-LDA can preserve 20C − 1 = 319 dimensions at most. The reduced dimension is fixed to 20. For the HMM classifier, a left-to-right HMM with 4 states and self-loops is trained for each sequence class. For the SVM classifier (...) the parameter C of SVM is selected by cross-validation. (Illustrative sketches of this setup follow the table.) |
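
The reported setup reduces sequences to 20 dimensions and, for one of the two classifiers, trains an SVM whose C parameter is chosen by cross-validation. The sketch below illustrates that downstream step only, assuming sequence-level feature vectors are already available; scikit-learn's LDA and SVC and the synthetic data are stand-ins for the paper's LT-LDA projection and the MATLAB drtoolbox, not the authors' code.

```python
# Hedged sketch of the downstream evaluation described in the setup row:
# reduce to 20 dimensions, then classify with an SVM whose C parameter is
# selected by cross-validation. scikit-learn's LDA/SVC and the synthetic
# data are stand-ins for LT-LDA, the MATLAB drtoolbox, and the real features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_features = 21, 100          # hypothetical sizes, not from the paper
y_train = np.repeat(np.arange(n_classes), 20)
X_train = rng.normal(size=(len(y_train), n_features))
X_test = rng.normal(size=(50, n_features))

pipeline = Pipeline([
    # "The reduced dimension is fixed to 20."
    ("lda", LinearDiscriminantAnalysis(n_components=20)),
    ("svm", SVC(kernel="linear")),
])

# "The parameter C of SVM is selected by cross-validation."
# The grid of candidate C values below is a guess; the paper does not list one.
grid = GridSearchCV(pipeline, {"svm__C": [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
predictions = grid.predict(X_test)
```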
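
The other reported classifier is a left-to-right HMM with 4 states and self-loops, trained per sequence class. A minimal sketch of that topology, assuming hmmlearn Gaussian HMMs and placeholder frame features (the paper does not name its HMM implementation):

```python
# Hedged sketch of the HMM classifier described in the setup row: one
# left-to-right Gaussian HMM with 4 states and self-loops per class; a test
# sequence is assigned to the class whose HMM gives the highest log-likelihood.
import numpy as np
from hmmlearn import hmm

N_STATES = 4

def make_left_to_right_hmm():
    """Gaussian HMM whose transitions allow only self-loops and forward moves."""
    model = hmm.GaussianHMM(
        n_components=N_STATES,
        covariance_type="diag",
        n_iter=20,
        init_params="mc",   # let fit() initialize means/covariances only
        params="mc",        # keep the fixed left-to-right topology during training
    )
    model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
    model.transmat_ = np.array([
        [0.5, 0.5, 0.0, 0.0],
        [0.0, 0.5, 0.5, 0.0],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0],
    ])
    return model

# Placeholder training data: a few (T, d) frame-feature sequences per class.
rng = np.random.default_rng(0)
train_seqs = {c: [rng.normal(size=(30, 20)) for _ in range(5)] for c in range(3)}

class_models = {}
for c, seqs in train_seqs.items():
    X = np.concatenate(seqs)             # hmmlearn expects stacked frames
    lengths = [len(s) for s in seqs]     # plus per-sequence lengths
    class_models[c] = make_left_to_right_hmm().fit(X, lengths)

# Classify a test sequence by maximum log-likelihood over the class HMMs.
test_seq = rng.normal(size=(30, 20))
predicted = max(class_models, key=lambda c: class_models[c].score(test_seq))
```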