Multi-Source Domain Adaptation for Visual Sentiment Classification

Authors: Chuang Lin, Sicheng Zhao, Lei Meng, Tat-Seng Chua

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification. Experiments: This section presents the experimental analysis of MSGAN. First, the detailed experimental setups are introduced, including the datasets, baselines, and evaluation metrics. Second, the performance of MSGAN and the state-of-the-art algorithms in MDA is reported. Finally, an in-depth analysis of MSGAN, including a parameter sensitivity analysis and an ablation study, is presented.
Researcher Affiliation | Academia | NExT++, National University of Singapore; University of California, Berkeley, USA. clin@u.nus.edu, schzhao@gmail.com, {lmeng, dcscts}@nus.edu.sg
Pseudocode | Yes | Algorithm 1: Adversarial training procedure of the proposed MSGAN model (see the illustrative training-step sketch after this table).
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate our framework on four public datasets including the Flickr and Instagram (FI) (You et al. 2016), Artistic (Art Photo) dataset (Machajdik 2010), Twitter I (You et al. 2015) and Twitter II (You et al. 2016) datasets.
Dataset Splits | No | The paper mentions using source and target domains, but it does not specify concrete training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit references to predefined splits).
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or solvers used in the experiments.
Experiment Setup | Yes | The hyper-parameters λ0 and λ1 control the weights of the objective terms, and the KL divergence terms penalize the deviation of the distribution of the latent code from the prior distribution. The hyper-parameters λ2 and λ3 control the weights of the two different objective terms, as do λ4 and λ5. We hence set λ0 = λ2 = 10 and λ1 = λ3 = 0.1 (see the loss-weighting sketch after this table).
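
For orientation, the following is a minimal, hypothetical sketch of one adversarial multi-source training step, in the generic spirit of the adversarial training procedure named as Algorithm 1 above. All module names, layer sizes, optimizers, and the domain-confusion loss are assumptions made for illustration; this is not the authors' MSGAN implementation, whose full objective the sketch does not attempt to reproduce.

```python
# Hypothetical sketch of an adversarial multi-source domain adaptation step.
# Everything here (module sizes, optimizers, loss form) is assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU())            # shared feature extractor (assumed)
classifier = nn.Linear(128, 2)                                      # binary sentiment head (assumed)
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                              nn.Linear(64, 3))                     # predicts domain: source-1 / source-2 / target

opt_model = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(source_batches, target_x):
    """source_batches: list of (features, sentiment_labels); target_x: unlabeled target features."""
    # Encode every domain and record which domain each sample came from.
    feats, domains = [], []
    for d, (x, _) in enumerate(source_batches):
        feats.append(encoder(x))
        domains.append(torch.full((x.size(0),), d, dtype=torch.long))
    feats.append(encoder(target_x))
    domains.append(torch.full((target_x.size(0),), len(source_batches), dtype=torch.long))
    feats, domains = torch.cat(feats), torch.cat(domains)

    # (1) Discriminator update: learn to identify the domain of each feature.
    d_loss = F.cross_entropy(discriminator(feats.detach()), domains)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # (2) Encoder/classifier update: classify source sentiment while confusing the discriminator.
    cls_loss = sum(F.cross_entropy(classifier(encoder(x)), y) for x, y in source_batches)
    conf_loss = -F.cross_entropy(discriminator(feats), domains)     # push features toward domain-invariance
    opt_model.zero_grad(); (cls_loss + conf_loss).backward(); opt_model.step()
    return cls_loss.item(), d_loss.item()
```

A real run would wrap this step in an epoch loop over mini-batches drawn from each labeled source dataset and the unlabeled target set.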
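
The quoted hyper-parameters can be read as weights in a combined objective. Below is a hedged illustration of how such a weighted sum might be assembled; the term names are placeholders rather than the paper's notation, and only the stated values λ0 = λ2 = 10 and λ1 = λ3 = 0.1 are taken from the row above.

```python
# Hypothetical illustration of the weighted objective implied by the quoted setup.
# Term names are placeholders; only the lambda values come from the row above.
LAMBDA_0, LAMBDA_1 = 10.0, 0.1   # objective-term weight and KL-penalty weight (as quoted)
LAMBDA_2, LAMBDA_3 = 10.0, 0.1   # weights of the second pair of objective terms (as quoted)

def total_loss(task_loss, obj_term_0, kl_term_1, obj_term_2, kl_term_3):
    """Combine a task loss with weighted auxiliary terms, mirroring the quoted roles of the lambdas."""
    return (task_loss
            + LAMBDA_0 * obj_term_0 + LAMBDA_1 * kl_term_1
            + LAMBDA_2 * obj_term_2 + LAMBDA_3 * kl_term_3)

# Example: total_loss(0.7, 0.05, 0.2, 0.03, 0.1) -> 0.7 + 0.5 + 0.02 + 0.3 + 0.01 = 1.53
```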