Robust Domain Generalisation by Enforcing Distribution Invariance

Authors: Sarah M. Erfani, Mahsa Baktashmotlagh, Masud Moshtaghi, Vinh Nguyen, Christopher Leckie, James Bailey, Kotagiri Ramamohanarao

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5 (Empirical Analysis): "In this section, we illustrate the effectiveness of ESRand via a visualisation of a toy dataset. Furthermore, we compare the performance and efficiency of the proposed algorithm with state-of-the-art algorithms through classification tasks on multiple benchmark datasets."
Researcher Affiliation | Academia | Department of Computing and Information Systems, The University of Melbourne, Australia ({sarah.erfani, masud.moshtaghi, vinh.nguyen, caleckie, baileyj, kotagiri}@unimelb.edu.au); Department of Science and Engineering, Queensland University of Technology, Australia (m.baktashmotlagh@qut.edu.au).
Pseudocode | No | The paper describes the ESRand procedure in narrative text in Section 3.3 but does not provide a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code or links to a code repository for the described methodology.
Open Datasets | Yes | "The experiments are conducted on four real life datasets from the UCI Machine Learning Repository: (i) Daily and Sport Activity (DSA), (ii) Heterogeneity Activity Recognition (HAR), (iii) Opportunity Activity Recognition (OAR), (iv) PAMAP2 Physical Activity Monitoring..."
Dataset Splits | Yes | "The hyper-parameters of all the algorithms are adjusted using grid search based on their best performance on a validation set. The reported AUC values of each algorithm are the average accuracies of leave-one-domain-out test (domain), i.e., taking one domain as the test set and the remaining domains as the training set." (See the evaluation sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions "For SVM based methods LIBSVM was used" but does not specify a version number for LIBSVM or any other software dependencies.
Experiment Setup | Yes | "kNN: k Nearest Neighbour, we use k = 1"
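
The Dataset Splits and Experiment Setup rows above describe a leave-one-domain-out protocol and a k = 1 nearest-neighbour baseline. Below is a minimal sketch of that protocol, assuming per-sample domain labels are available as an array; the function name, the use of scikit-learn, and the choice of predicted probabilities as AUC scores are illustrative assumptions, not the authors' implementation (ESRand and the LIBSVM-based baselines are not reproduced here).

```python
# Minimal sketch: leave-one-domain-out evaluation around a 1-NN baseline.
# Names and scoring choices are illustrative, not the authors' code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score


def leave_one_domain_out_auc(X, y, domains):
    """Average AUC over leave-one-domain-out splits.

    X: (n_samples, n_features) feature matrix
    y: (n_samples,) binary labels
    domains: (n_samples,) domain identifier for each sample
    """
    aucs = []
    for held_out in np.unique(domains):
        test_mask = domains == held_out
        # One domain is the test set; all remaining domains form the training set.
        X_train, y_train = X[~test_mask], y[~test_mask]
        X_test, y_test = X[test_mask], y[test_mask]

        # k = 1 nearest-neighbour baseline, matching the quoted experiment setup.
        clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

        # Score test samples by the predicted probability of the positive class.
        scores = clf.predict_proba(X_test)[:, 1]
        aucs.append(roc_auc_score(y_test, scores))
    return float(np.mean(aucs))
```

With this loop in place, swapping the 1-NN estimator for any other classifier (for example, an SVM trained via LIBSVM, as the paper's baselines do) keeps the same leave-one-domain-out evaluation structure.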