From Shared Subspaces to Shared Landmarks: A Robust Multi-Source Classification Approach
Authors: Sarah Erfani, Mahsa Baktashmotlagh, Masud Moshtaghi, Vinh Nguyen, Christopher Leckie, James Bailey, Kotagiri Ramamohanarao
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through comprehensive analysis, we show that our proposed approach outperforms state-of-the-art results on several benchmark datasets, while keeping the computational complexity low. In this section, we compare the performance and efficiency of the proposed algorithm with state-of-the-art methods through classification tasks on multiple sensor and image benchmark datasets. |
| Researcher Affiliation | Academia | Sarah M. Erfani, Mahsa Baktashmotlagh, Masud Moshtaghi, Vinh Nguyen, Christopher Leckie, James Bailey, Kotagiri Ramamohanarao; Department of Computing and Information Systems, The University of Melbourne, Australia; Department of Science and Engineering, Queensland University of Technology, Australia; sarah.erfani@unimelb.edu.au |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Sensor Datasets: We use four real-life activity recognition datasets from the UCI Machine Learning Repository: (i) Daily and Sport Activity (DSA), (ii) Heterogeneity Activity Recognition (HAR), (iii) Opportunity Activity Recognition (OAR), (iv) PAMAP2 Physical Activity Monitoring, with 19, 6, 5, and 13 activities collected from 8, 9, 4, and 8 subjects, respectively. Image Datasets: We use four sets of images from the Caltech-101 (C), LabelMe (L), SUN09 (S), and PASCAL VOC2007 (P) datasets. Each dataset represents a data source, and they share five object categories: bird, car, chair, dog, and person. Instead of using the raw features as inputs to the algorithms, we used the DeCAF6 extracted features with a dimensionality of 4,096. Available at: http://www.cs.dartmouth.edu/~chenfang/proj_page/FXR_iccv13/index.php |
| Dataset Splits | No | The hyper-parameters of all the algorithms are adjusted using grid search based on their best performance on a validation set. For each dataset, we take one subject (i.e., source) as the test set and the remaining subjects as the training set, i.e., leave-one-source-out, and repeat this for all the sources. While a validation set is mentioned for hyperparameter tuning, the specific split percentages or methodology for this validation set are not provided. |
| Hardware Specification | Yes | The reported training times are in seconds using MATLAB on an Intel Core i7 CPU at 3.60 GHz with 16 GB RAM. |
| Software Dependencies | No | The paper mentions 'MATLAB' and 'LIBSVM' but does not specify their version numbers or the versions of any other software dependencies. |
| Experiment Setup | No | The paper states that 'The hyper-parameters of all the algorithms are adjusted using grid search based on their best performance on a validation set,' but it does not specify the actual hyperparameter values (e.g., learning rates, batch sizes, epochs) used for training. |
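The leave-one-source-out protocol quoted in the Dataset Splits row can be sketched as follows. This is an illustrative stand-in, not the paper's code: the synthetic per-subject data and the nearest-centroid classifier are assumptions introduced only to make the hold-one-subject-out loop concrete.

```python
import numpy as np

# Synthetic stand-in for a multi-subject sensor dataset: each subject is a
# data source with its own feature offset, and labels are activity classes.
rng = np.random.default_rng(0)
n_subjects, n_per_class, n_features = 4, 25, 8

X_parts, y_parts, s_parts = [], [], []
for s in range(n_subjects):
    offset = rng.normal(scale=0.5, size=n_features)   # subject-specific shift
    for label in (0, 1):
        mean = offset + label * 3.0                   # well-separated classes
        X_parts.append(rng.normal(loc=mean, size=(n_per_class, n_features)))
        y_parts.append(np.full(n_per_class, label))
        s_parts.append(np.full(n_per_class, s))
X = np.concatenate(X_parts)
y = np.concatenate(y_parts)
subjects = np.concatenate(s_parts)

def fit_nearest_centroid(X, y):
    """Compute one centroid per class (illustrative classifier)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(X, classes, centroids):
    """Assign each sample to the class of its nearest centroid."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

def leave_one_source_out(X, y, subjects):
    """Hold out each subject (source) in turn as the test set and train
    on the remaining subjects, as in the paper's evaluation protocol."""
    accuracies = {}
    for s in np.unique(subjects):
        test = subjects == s
        classes, centroids = fit_nearest_centroid(X[~test], y[~test])
        preds = predict_nearest_centroid(X[test], classes, centroids)
        accuracies[int(s)] = float((preds == y[test]).mean())
    return accuracies

accuracies = leave_one_source_out(X, y, subjects)
```

In a real reproduction, the grid search mentioned in the Experiment Setup row would sit inside the training fold of this loop, tuning hyper-parameters on a validation split carved out of the training subjects.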