Implicit Semantic Response Alignment for Partial Domain Adaptation

Authors: Wenxiao Xiao, Zhengming Ding, Hongfu Liu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on several cross-domain benchmark datasets demonstrate the effectiveness of our method over the state-of-the-art PDA methods. Moreover, we elaborate in-depth analyses to further explore implicit semantic alignment."
Researcher Affiliation | Academia | Wenxiao Xiao, Department of Computer Science, Brandeis University, Waltham, MA 02451, wenxiaoxiao@brandeis.edu; Zhengming Ding, Department of Computer Science, Tulane University, New Orleans, LA 70118, zding1@tulane.edu; Hongfu Liu, Department of Computer Science, Brandeis University, Waltham, MA 02451, hongfuliu@brandeis.edu
Pseudocode | No | The paper describes the model's components and objective function verbally and with equations, but no pseudocode or algorithm blocks are provided.
Open Source Code | Yes | "Our code will be publicly available at: https://github.com/implicitseman-align/Implicit-Semantic-Response-Alignment."
Open Datasets | Yes | "Datasets. We include three domain adaptation benchmark datasets for performance evaluation. (1) Office-Home dataset [44] is a challenging benchmark... (2) Office31 dataset [36] contains images of 31 object categories... (3) ImageNet-Caltech [4] is a large-scale object classification dataset that consists of ImageNet-1K (I) [35] and Caltech256 (C)..."
Dataset Splits | Yes | "For task I→C, the training set of ImageNet-1K is used as the source domain and a subset of 84 classes from Caltech256 is used as the target domain, while for task C→I, Caltech256 is used as the source domain and we choose the same 84 classes from the validation set of ImageNet-1K as the target domain." (A hedged sketch of this split construction appears below the table.)
Hardware Specification | Yes | "We implement our model in PyTorch [32] using one NVIDIA Titan V GPU card."
Software Dependencies | No | "We implement our model in PyTorch [32] using one NVIDIA Titan V GPU card." The paper mentions PyTorch but does not specify its version number or any other software dependencies with specific versions.
Experiment Setup | Yes | "We adopt the pre-trained ResNet-50 [11] network as the backbone feature extractor. An auto-encoder with one hidden layer... We train the network with the standard stochastic gradient descent optimizer and the learning rate is set to 1e-3 initially and decays exponentially during training. The learning rate of the backbone feature extractor is 0.1 of other layers... α and β are both set to 1 for Office-Home and ImageNet-Caltech, while for Office31 we set α and β to 0.1 and 0.5. λ_reg is set to 0.5 in all experiments. For Office-Home, Office31 and ImageNet-Caltech, the maximum number of training iterations is set to 8,000, 4,000 and 40,000, respectively. The numbers of implicit semantic topics are set to 256, 64 and 16 for Office-Home, Office31 and ImageNet-Caltech, respectively." (A hedged optimizer/schedule sketch appears below the table.)
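
For the Dataset Splits row, the following is a minimal sketch of how the ImageNet-Caltech partial split could be assembled in PyTorch. The directory paths are placeholders, and the list of 84 shared class names is left empty because the paper does not enumerate it; only the filtering pattern is shown.

    # Hedged sketch: building the I -> C partial domain adaptation split.
    # Paths are placeholders; `shared_classes` must be filled with the 84
    # class names common to ImageNet-1K and Caltech256 (not listed in the paper).
    import torch
    from torchvision import datasets, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    # Source: full ImageNet-1K training set (1,000 classes).
    source = datasets.ImageFolder("/data/imagenet/train", transform=tfm)

    # Target: Caltech256 restricted to the 84 classes shared with ImageNet-1K.
    caltech = datasets.ImageFolder("/data/caltech256", transform=tfm)
    shared_classes = []  # fill with the 84 shared class names
    keep = {caltech.class_to_idx[c] for c in shared_classes}
    target_indices = [i for i, (_, y) in enumerate(caltech.samples) if y in keep]
    target = torch.utils.data.Subset(caltech, target_indices)

For task C→I the roles are swapped: the full Caltech256 becomes the source and the same 84 classes are filtered from the ImageNet-1K validation set as the target.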
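
For the Experiment Setup row, below is a minimal sketch of the described optimizer configuration, assuming PyTorch. The momentum value, the exact decay factor, and the classifier head are illustrative assumptions; the paper specifies only standard SGD, an initial learning rate of 1e-3 with exponential decay, and a backbone learning rate at 0.1x that of the other layers.

    # Hedged sketch: SGD with a 0.1x backbone learning rate and exponential decay.
    # Momentum, gamma, and the classifier head are assumptions, not from the paper.
    import torch
    from torchvision import models

    backbone = models.resnet50(pretrained=True)  # pre-trained ResNet-50 backbone
    backbone.fc = torch.nn.Identity()            # expose 2048-d features
    head = torch.nn.Linear(2048, 65)             # e.g., 65 classes for Office-Home

    base_lr = 1e-3
    optimizer = torch.optim.SGD(
        [
            {"params": backbone.parameters(), "lr": 0.1 * base_lr},  # backbone at 0.1x
            {"params": head.parameters(), "lr": base_lr},
        ],
        lr=base_lr,
        momentum=0.9,  # assumed; the paper says only "standard SGD"
    )
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9999)  # decay factor assumed

    alpha, beta, lambda_reg = 1.0, 1.0, 0.5  # Office-Home / ImageNet-Caltech; Office31 uses 0.1 and 0.5
    max_iters = 8000  # Office-Home; 4,000 for Office31, 40,000 for ImageNet-Caltech

    for step in range(max_iters):
        # ... compute the losses weighted by alpha, beta, and lambda_reg,
        # call loss.backward(), then update:
        optimizer.step()
        scheduler.step()

The per-parameter-group learning rates implement the quoted "0.1 of other layers" rule for the backbone, while the scheduler realizes the exponential decay; the loss computation itself is elided because the objective's terms are only described verbally in the paper.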