HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks

Authors: Aibek Alanov, Vadim Titov, Dmitry P. Vetrov

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide extensive experiments to empirically confirm the effectiveness of the proposed parameterization and the regularization loss on a wide range of domains. We illustrate that our parameterization can achieve quality comparable with the full parameterization (i.e., when we optimize all weights). The proposed regularization loss significantly improves the diversity of the fine-tuned generator, which is validated qualitatively and quantitatively. Further, we conduct experiments with the HyperDomainNet and show that it can be successfully trained on a number of target domains simultaneously. We also show that it can generalize to a number of diverse unseen domains. |
| Researcher Affiliation | Collaboration | Aibek Alanov (HSE University, AIRI; Moscow, Russia) alanov.aibek@gmail.com; Vadim Titov (MIPT, AIRI; Moscow, Russia) titov.vn@phystech.edu; Dmitry Vetrov (HSE University, AIRI; Moscow, Russia) vetrovd@yandex.ru |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code can be found at the linked GitHub repository. |
| Open Datasets | Yes | As a target domain we take the common benchmark dataset of face sketches [36], which has approximately 300 samples. |
| Dataset Splits | No | The paper mentions a 'common benchmark dataset of face sketches' and a 'one-shot adaptation setting' but does not specify explicit train/validation/test splits for reproducing the experiments. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | No | For the detailed information about setup of the experiments please refer to Appendix A.1. |