Extreme Low Resolution Activity Recognition With Multi-Siamese Embedding Learning

Authors: Michael S. Ryoo, Kiyoon Kim, Hyun Jong Yang

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally confirm that our approach of jointly learning such transform robust LR video representation and the classifier outperforms the previous state-of-the-art low resolution recognition approaches on two public standard datasets by a meaningful margin.
Researcher Affiliation | Collaboration | (1) EgoVid Inc., Daejeon, South Korea; (2) Indiana University, Bloomington, IN, USA; (3) Ulsan National Institute of Science and Technology, Ulsan, South Korea
Pseudocode | No | The paper provides mathematical equations to describe its loss functions and architecture components, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about making its source code publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | HMDB dataset (Kuehne et al. 2011) is one of the most widely used public video datasets... DogCentric dataset (Iwashita et al. 2014) is a smaller scale dataset...
Dataset Splits | Yes | The standard evaluation setting of the dataset using 3 provided training/testing splits was followed... We followed the standard evaluation setting of the dataset, using 10 random half-training/half-testing splits... a standard early stopping strategy using validation errors was used to check the convergence, avoiding overfitting.
Hardware Specification | Yes | Our approach runs in real-time (~50 fps) on an Nvidia Jetson TX2 mobile GPU card with the TensorFlow library...
Software Dependencies | No | The paper mentions using the TensorFlow library and the Farneback algorithm, but it does not specify version numbers for these or for other software components such as the TV-L1 optical flow extraction algorithm.
Experiment Setup | No | The paper describes the model architecture, input dimensions (e.g., 16x12), the number of transforms used (n=75), and that a "standard early stopping strategy" was employed. However, it does not provide specific hyperparameters such as learning rate, batch size, optimizer details, or the exact number of training epochs.
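The "standard early stopping strategy using validation errors" cited under Dataset Splits is not spelled out in the paper. A minimal sketch of the generic technique, for illustration only — the function name and the patience value are assumptions, not details from the paper:

```python
def early_stop_epoch(val_errors, patience=3):
    """Return the index of the epoch at which training halts.

    Training stops once the validation error has failed to improve for
    `patience` consecutive epochs; in practice the model weights from the
    best epoch would then be restored. `patience=3` is an illustrative
    default, not a value reported in the paper.
    """
    best_err = float("inf")
    best_epoch = 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs: stop early
    return len(val_errors) - 1  # training ran to completion


# Example: validation error stops improving after epoch 2,
# so training halts three epochs later, at epoch 5.
print(early_stop_epoch([1.0, 0.8, 0.7, 0.75, 0.76, 0.77]))  # 5
```

Because the paper omits the patience window and the validation-split sizes, this sketch cannot reproduce the authors' exact stopping points; it only shows the mechanism they name.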