Linking Heterogeneous Input Features with Pivots for Domain Adaptation
Authors: Guangyou Zhou, Tingting He, Wensheng Wu, Xiaohua Tony Hu
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on a benchmark composed of reviews of 4 types of Amazon products. Experimental results show that our proposed approach significantly outperforms the baseline method, and achieves an accuracy which is competitive with the state-of-the-art methods for sentiment classification adaptation. |
| Researcher Affiliation | Academia | (1) School of Computer, Central China Normal University, Wuhan 430079, China; (2) Computer Science Department, University of Southern California, Los Angeles, CA |
| Pseudocode | No | The paper provides mathematical derivations and update equations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not state that open-source code for the methodology is provided or include a link to a code repository. |
| Open Datasets | Yes | A large majority of experiments are performed on the benchmark made of reviews of Amazon products gathered by Blitzer et al. [2007]. |
| Dataset Splits | Yes | All hyper-parameters are set by 5-fold cross validation on the source training set (a minimal sketch of this setup appears after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'linear SVM', notes that 'implementations provided by the authors' were used for some comparison methods while others were 're-implement[ed] based on the original papers', but it does not provide version numbers for any software or libraries. |
| Experiment Setup | No | The paper states 'All hyper-parameters are set by 5-fold cross validation on the source training set' but does not provide specific hyperparameter values, optimizer settings, or a detailed description of the training configuration. |
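The paper states only that hyper-parameters are chosen by 5-fold cross validation on the source training set and that a linear SVM is used as the classifier; it does not specify the feature extraction, the parameter grid, or the toolkit. The sketch below is therefore an assumption of how such a setup could look, not the authors' implementation: the `CountVectorizer` features, the `C` grid, and the accuracy scoring are all placeholders.

```python
# Hedged sketch (not the authors' code): 5-fold cross-validation on the
# source-domain training set to select the regularization strength C of a
# linear SVM, as loosely described in the paper's experimental setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC


def tune_on_source(source_texts, source_labels):
    """Pick C by 5-fold CV using only the source training data."""
    pipeline = Pipeline([
        ("vectorizer", CountVectorizer(binary=True)),  # assumed bag-of-words features
        ("svm", LinearSVC()),                          # linear SVM, as stated in the paper
    ])
    search = GridSearchCV(
        pipeline,
        param_grid={"svm__C": [0.01, 0.1, 1, 10, 100]},  # assumed grid; not given in the paper
        cv=5,                                            # 5-fold CV per the paper
        scoring="accuracy",
    )
    search.fit(source_texts, source_labels)
    return search.best_estimator_
```

The returned estimator would then be applied to the target-domain reviews for adaptation evaluation; the paper does not report the selected hyper-parameter values.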