Learning Disentangled Semantic Representation for Domain Adaptation

Authors: Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, Zhifeng Hao

IJCAI 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental studies testify that our model yields state-of-the-art performance on several domain adaptation benchmark datasets. |
| Researcher Affiliation | Academia | (1) School of Computers, Guangdong University of Technology, China; (2) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (3) Department of Philosophy, Carnegie Mellon University, USA; (4) School of Mathematics and Big Data, Foshan University, China |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Office-31 is a standard benchmark for visual domain adaptation, which contains 4,652 images and 31 categories from three distinct domains: Amazon (A), Webcam (W), and DSLR (D). Office-Home is a more challenging domain adaptation dataset than Office-31, which consists of around 15,500 images from 65 categories of everyday objects. |
| Dataset Splits | No | The paper describes using labeled samples from a source domain for training and unlabeled samples from a target domain for classification, which is typical for domain adaptation. However, it does not provide specific train/validation/test splits (e.g., percentages or sample counts) for any of the datasets used, nor does it reference predefined splits with explicit citations. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | The total loss of the proposed disentangled semantic representation learning model for domain adaptation is formulated as $\mathcal{L}(\phi_y, \theta_{y,d}, \theta_{y,y}, \phi_d, \theta_{d,d}, \theta_{d,y}, \theta_r) = \mathcal{L}_{\mathrm{ELBO}} + \beta \mathcal{L}_{\mathrm{sem}} + \gamma \mathcal{L}_{\mathrm{dom}}$ (Eq. 5), where $\beta$ and $\gamma$ are hyper-parameters to which the model is not very sensitive; the authors set $\beta = 1$ and $\gamma = 1$. The default value of $\delta$ is 1, and different values are tried to validate the individual contribution of the domain adversarial learning module in Section 4. |
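
For concreteness, below is a minimal sketch of how the combined objective in Eq. (5) could be assembled, assuming the three terms $\mathcal{L}_{\mathrm{ELBO}}$, $\mathcal{L}_{\mathrm{sem}}$, and $\mathcal{L}_{\mathrm{dom}}$ are already available as scalars. The function and variable names are hypothetical, and the paper's actual definitions of the individual loss terms are not reproduced here.

```python
# Hedged sketch of the weighted total objective from Eq. (5):
#     L_total = L_ELBO + beta * L_sem + gamma * L_dom
# The three input losses are placeholders standing in for the paper's
# ELBO, semantic, and domain-adversarial terms.

def total_loss(l_elbo: float, l_sem: float, l_dom: float,
               beta: float = 1.0, gamma: float = 1.0) -> float:
    """Combine the three loss terms with the paper's reported defaults beta = gamma = 1."""
    return l_elbo + beta * l_sem + gamma * l_dom


if __name__ == "__main__":
    # Dummy scalar values, purely for illustration.
    print(total_loss(l_elbo=2.0, l_sem=0.5, l_dom=0.25))  # 2.75
```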