Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Representation Subspace Distance for Domain Adaptation Regression

Authors: Xinyang Chen, Sinan Wang, Jianmin Wang, Mingsheng Long

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method is evaluated on three domain adaptation regression benchmarks, two of which are constructed in this paper. Our method outperforms the state-of-the-art methods significantly, forming early positive results in the deep regime.
Researcher Affiliation | Academia | School of Software, BNRist, Tsinghua University. Xinyang Chen <EMAIL>. Correspondence to: Mingsheng Long <EMAIL>.
Pseudocode | No | No section or figure explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | Yes | The code is available at github.com/thuml/Domain-Adaptation-Regression.
Open Datasets | Yes | dSprites (Higgins et al., 2017) is a standard 2D synthetic dataset for deep representation learning. It is composed of three domains, each with 737,280 images: Color (C), Noisy (N) and Scream (S). The example images are shown in Figure 4. In every image, there are five factors of variation, with details illustrated in Table 1. https://github.com/deepmind/dsprites-dataset. MPI3D (Gondal et al., 2019) is a simulation-to-real dataset of 3D objects. It has three domains: Toy (T), RealistiC (RC) and ReaL (RL). Each domain contains 1,036,800 images... https://github.com/rr-learning/disentanglement_dataset. Biwi Kinect (Fanelli et al., 2013) is a real-world dataset for head pose estimation.
Dataset Splits | Yes | We employ IWCV (Sugiyama et al., 2007), a model selection method for domain adaptation, to determine the hyper-parameters and the number of iterations for all methods.
Hardware Specification | Yes | We use PyTorch with a Titan V to implement our methods and fine-tune ResNet-18.
Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number (e.g., PyTorch 1.x or 2.x). The footnote link provided is to the general PyTorch website.
Experiment Setup | Yes | The learning rates of layers trained from scratch are set to 10 times those of fine-tuned layers. The batch size is b = 36. We use mini-batch SGD with a momentum of 0.95, a learning rate of 0.1, and the progressive training strategies of DANN (Ganin et al., 2016). Labels are all normalized to [0, 1] to eliminate the effects of diverse scales in regression values, and the activation of the regressor is Sigmoid.
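The training configuration quoted in the Experiment Setup row can be sketched as a minimal PyTorch snippet. This is an illustrative assumption, not the authors' code: a tiny stand-in backbone replaces the actual fine-tuned ResNet-18, and the DANN progressive learning-rate schedule is omitted. The 10x learning rate for scratch-trained layers, momentum 0.95, batch size 36, and the Sigmoid regressor over [0, 1]-normalized labels follow the quoted text.

```python
import torch
import torch.nn as nn

# Stand-in backbone for the fine-tuned feature extractor (the paper
# fine-tunes ResNet-18; a small conv net is used here for brevity).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
# Regressor head trained from scratch; Sigmoid keeps predictions in
# [0, 1], matching the normalized regression labels.
regressor = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

base_lr = 0.1
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": base_lr},        # fine-tuned layers
        {"params": regressor.parameters(), "lr": 10 * base_lr},  # scratch layers: 10x lr
    ],
    momentum=0.95,
)

batch = torch.randn(36, 3, 32, 32)  # batch size b = 36
labels = torch.rand(36, 1)          # labels already normalized to [0, 1]

pred = regressor(backbone(batch))
loss = nn.functional.mse_loss(pred, labels)
loss.backward()
optimizer.step()
```

The per-parameter-group dictionaries passed to `torch.optim.SGD` are the standard way to give different layers different learning rates in PyTorch.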
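The IWCV criterion quoted in the Dataset Splits row selects hyper-parameters by reweighting source-validation losses with target/source density ratios. A toy sketch of the scoring rule follows; the loss values and weights are hypothetical, not taken from the paper, and estimating the density ratios themselves is a separate problem.

```python
def iwcv_score(losses, weights):
    """Importance-Weighted Cross-Validation criterion (Sugiyama et al., 2007):
    each source-validation loss is reweighted by the target/source density
    ratio w(x), so the average estimates risk on the target domain. Lower
    scores indicate better hyper-parameter choices."""
    assert len(losses) == len(weights)
    return sum(l * w for l, w in zip(losses, weights)) / len(losses)

# Hypothetical per-sample validation losses and density-ratio weights:
score = iwcv_score([0.2, 0.4, 0.1], [1.5, 0.5, 1.0])
print(score)  # -> 0.2
```

In model selection, one would compute this score for each candidate configuration and keep the one with the lowest weighted validation risk.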