Adversarial Bi-Regressor Network for Domain Adaptive Regression

Authors: Haifeng Xia, Pu Wang, Toshiaki Koike-Akino, Ye Wang, Philip Orlik, Zhengming Ding

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical studies on two cross-domain regressive benchmarks illustrate the power of our method on solving the domain adaptive regression (DAR) problem.
Researcher Affiliation | Collaboration | Haifeng Xia1, Pu Wang2, Toshiaki Koike-Akino2, Ye Wang2, Philip Orlik2, Zhengming Ding1; 1Department of Computer Science, Tulane University, New Orleans, LA; 2Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA; {hxia, zding1}@tulane.edu, {pwang, koike, yewang, porlik}@merl.com
Pseudocode | No | The paper describes the steps of the method in text and mathematical formulations but does not provide any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for its methodology is publicly available.
Open Datasets | Yes | SPAWC2021 [Arnold and Schaich, 2021] is a large-scale indoor localization dataset... dSprites [Higgins et al., 2017] is a popular 2D synthetic image dataset...
Dataset Splits | No | The paper mentions 'Dataset1 (source domain) with 750k samples' and 'Dataset2 (target domain) with 650k instances' but does not explicitly state training, validation, and test splits with specific percentages or counts; no validation set is mentioned.
Hardware Specification | No | The paper mentions using 'LSTM' and 'ResNet-18' as backbones but does not specify any hardware components (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions 'Adam optimizer' and 'SGD' but does not provide specific version numbers for any software dependencies, libraries, or programming languages used.
Experiment Setup | Yes | In the implementation, the Adam optimizer with a learning rate of 0.001 is used to update all network components. For dSprites, following the training strategy of RSD [Chen et al., 2021], we utilize the pre-trained ResNet-18 as backbone to extract features and SGD with a momentum of 0.95 to optimize the network architecture. ...we adopt a fixed ratio λ to linearly combine the source and target samples to synthesize source-similar and target-similar instances as: $\hat{X}^{s} = \lambda X_i^{s} + (1-\lambda) X_j^{t}, \quad \hat{X}^{t} = (1-\lambda) X_i^{s} + \lambda X_j^{t}$ (3), where λ is set to 0.7 in all experiments.
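Eq. (3) is a simple convex combination of paired source and target samples with a fixed ratio λ = 0.7. Below is a minimal PyTorch sketch of that mixing step together with the reported optimizer settings; the function name synthesize_mixed_samples and the stand-in model are hypothetical, and the SGD learning rate is an assumption since the excerpt does not state it.

```python
import torch

def synthesize_mixed_samples(x_src, x_tgt, lam=0.7):
    """Linearly combine paired source/target batches as in Eq. (3).

    x_src, x_tgt: tensors of matching shape (batch, ...);
    lam: fixed mixing ratio (0.7 in all experiments reported by the paper).
    Returns source-similar and target-similar instances.
    """
    x_src_like = lam * x_src + (1.0 - lam) * x_tgt      # source-similar instances
    x_tgt_like = (1.0 - lam) * x_src + lam * x_tgt      # target-similar instances
    return x_src_like, x_tgt_like

# Illustrative optimizer setup following the reported settings.
# `model` is a stand-in module; the paper uses an LSTM (SPAWC2021)
# or a pre-trained ResNet-18 (dSprites) as the backbone.
model = torch.nn.Linear(64, 1)
adam_opt = torch.optim.Adam(model.parameters(), lr=1e-3)               # lr = 0.001 as reported
sgd_opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.95)  # momentum 0.95 as reported; lr assumed
```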