Multi-Source Iterative Adaptation for Cross-Domain Classification

Authors: Himanshu S. Bhatt, Arun Rajkumar, Shourya Roy

IJCAI 2016

Reproducibility Variable: Result (with supporting evidence from the paper)
Research Type: Experimental. "Experimental results justify the theory as MSIDA significantly outperforms existing cross-domain classification approaches on the real-world and benchmark datasets." Section 6 (Experiments) confirms: "The efficacy of the proposed algorithm is evaluated for the cross-domain sentiment classification task and the performance is reported in terms of classification accuracy."
Researcher Affiliation: Industry. "Himanshu S. Bhatt, Arun Rajkumar and Shourya Roy, Xerox Research Centre India, Bengaluru, INDIA, {Firstname.Lastname}@xerox.com"
Pseudocode: Yes. The paper gives Algorithm 1 (Greedy Algorithm for Selecting Sources) and Algorithm 2 (Multi-source Iterative Learning Algorithm).
Open Source Code: No. The paper does not provide any explicit statement or link indicating that source code for the methodology is openly available.
Open Datasets: Yes. "The first dataset comprises the widely used Amazon review dataset [Blitzer et al., 2007] appended with the Amazon product dataset [McAuley et al., 2015b; 2015a] for evaluating the challenges of multi-source adaptation."
Dataset Splits: No. The paper specifies training data ("1600 reviews from each source-target domain pair are used to learn shared representations"; "for target domains, 1600 reviews serve as unlabeled pool of instances") and test data ("performance is reported on the non-overlapping 400 reviews"), but a distinct validation split is not mentioned.
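The partition described above (1600 reviews for training or as the unlabeled target pool, plus a non-overlapping 400 for testing, with no validation split) can be sketched as follows. This is a hypothetical illustration, not the authors' preprocessing code; the function and variable names are invented.

```python
# Hypothetical sketch of the 1600/400 per-domain split reported in the paper.
# Note there is no validation split, matching the "No" verdict above.
import random

def split_domain(reviews, n_train=1600, n_test=400, seed=0):
    """Shuffle one domain's reviews; return non-overlapping train/test lists."""
    assert len(reviews) >= n_train + n_test
    rng = random.Random(seed)
    shuffled = reviews[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:n_train + n_test]

reviews = [f"review_{i}" for i in range(2000)]  # placeholder data
train, test = split_domain(reviews)
print(len(train), len(test))   # 1600 400
print(set(train) & set(test))  # set() -- the splits do not overlap
```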
Hardware Specification: No. The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models or cloud computing specifications.
Software Dependencies: No. The paper mentions "SVM classifiers with radial basis function (RBF) kernels" but does not specify the software library or version used for the SVM implementation (e.g., scikit-learn or LIBSVM).
Experiment Setup: Yes. "We used SVM classifiers with radial basis function (RBF) kernels as individual classifiers combined with uniformly initialized weights in the ensemble and the maximum number of iterations (iterMax) set to 30. [The two hyper-parameters] are set empirically on a held-out set."
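The reported setup can be sketched as below: one RBF-kernel SVM per source domain, combined in an ensemble with uniformly initialized weights, and iterMax set to 30. The choice of scikit-learn and the toy data are assumptions for illustration; the paper does not name its SVM implementation, and this sketch omits the iterative weight-update loop of the actual MSIDA algorithm.

```python
# Hedged sketch of the experiment setup: RBF-kernel SVM base classifiers with
# uniform initial ensemble weights and iterMax = 30. scikit-learn is assumed;
# the paper does not specify its SVM library.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

ITER_MAX = 30  # maximum number of iterations, as stated in the paper

# Toy stand-ins for the per-source training sets (one classifier per source).
sources = [make_classification(n_samples=200, n_features=20, random_state=s)
           for s in range(3)]

# One RBF-kernel SVM per source domain.
classifiers = [SVC(kernel="rbf", probability=True).fit(X, y) for X, y in sources]
# Uniformly initialized ensemble weights.
weights = np.full(len(classifiers), 1.0 / len(classifiers))

def ensemble_predict(X):
    """Weighted combination of the per-source classifiers' class probabilities."""
    probs = sum(w * clf.predict_proba(X) for w, clf in zip(weights, classifiers))
    return probs.argmax(axis=1)

X_target, _ = make_classification(n_samples=50, n_features=20, random_state=99)
preds = ensemble_predict(X_target)
print(len(preds))  # 50
```

In the full algorithm these ensemble weights would be updated over up to iterMax iterations as confidently pseudo-labeled target instances are added; here only the initialization is shown.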