Transferable Calibration with Lower Bias and Variance in Domain Adaptation

Authors: Ximei Wang, Mingsheng Long, Jianmin Wang, Michael Jordan

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We fully verify our methods on six DA datasets: (1) Office-Home [52]: a dataset with 65 categories, consisting of 4 domains: Artistic (A), Clipart (C), Product (P) and Real-World (R). (2) VisDA-2017 [41], a Simulation-to-Real dataset with 12 categories. (3) ImageNet-Sketch [53], a large-scale dataset transferring from ImageNet (I) to Sketch (S) with 1000 categories. (4) Multi-Domain Sentiment [4], an NLP dataset comprising product reviews from amazon.com in four product domains: books (B), dvds (D), electronics (E), and kitchen appliances (K). (5) DomainNet [40]: a dataset with 345 categories, including 6 domains: Infograph (I), Quickdraw (Q), Real (R), Sketch (S), Clipart (C) and Painting (P). (6) Office-31 [46] contains 31 categories from 3 domains: Amazon (A), Webcam (W), DSLR (D). We run each experiment 10 times.
Researcher Affiliation | Academia | Ximei Wang, Mingsheng Long, Jianmin Wang, and Michael I. Jordan; School of Software, KLiss, BNRist, Tsinghua University; University of California, Berkeley. wxm17@mails.tsinghua.edu.cn, {mingsheng,jimwang}@tsinghua.edu.cn, jordan@cs.berkeley.edu
Pseudocode | Yes | Algorithm 1: Transferable Calibration in Domain Adaptation
Open Source Code | No | The paper does not provide a specific link to an open-source code repository or an explicit statement about releasing the code for the described methodology.
Open Datasets | Yes | We fully verify our methods on six DA datasets: (1) Office-Home [52]: a dataset with 65 categories... (2) VisDA-2017 [41]... (3) ImageNet-Sketch [53]... (4) Multi-Domain Sentiment [4]... (5) DomainNet [40]... (6) Office-31 [46]...
Dataset Splits | No | The paper states: 'Similar to IID calibration, S is first partitioned into S_tr = {(x_i^tr, y_i^tr)}_{i=1}^{n_tr} and S_v = {(x_i^v, y_i^v)}_{i=1}^{n_v}.' However, it does not specify the exact percentages or sample counts for these splits, which are needed for precise reproducibility.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, such as programming languages, libraries, or frameworks used in the experiments.
Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings used for training their models or the baseline DA methods.
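For context on the items assessed above: the paper calibrates a classifier on a held-out validation split of the source data (the S_tr / S_v partition noted under "Dataset Splits"). The sketch below shows the standard building blocks such work rests on, namely expected calibration error (ECE) and temperature scaling fit on a validation split. This is not the paper's Algorithm 1 (which is not reproduced in this report); the function names and the grid-search range are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: bin predictions by confidence, then take the
    bin-size-weighted gap between mean confidence and accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def fit_temperature(logits, labels, temps=np.linspace(0.5, 5.0, 46)):
    """Grid-search a single temperature T that minimizes the negative
    log-likelihood of logits/T on a validation split (temperature scaling).
    The grid range here is an assumption, not taken from the paper."""
    best_t, best_nll = 1.0, np.inf
    for t in temps:
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        nll = -log_probs[np.arange(len(labels)), labels].mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t

# Usage on synthetic validation data (stand-in for S_v).
rng = np.random.default_rng(0)
val_logits = 3.0 * rng.normal(size=(200, 5))   # deliberately sharp logits
val_labels = rng.integers(0, 5, size=200)
t = fit_temperature(val_logits, val_labels)
probs = np.exp(val_logits / t)
probs /= probs.sum(axis=1, keepdims=True)
ece = expected_calibration_error(probs.max(axis=1),
                                 probs.argmax(axis=1) == val_labels)
```

Temperature scaling leaves the argmax prediction unchanged and only rescales confidence, which is why calibration papers report ECE before and after fitting T on the validation split.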