TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation
Authors: Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William Cheung, James Kwok
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on 8 FHA tasks on 5 datasets (MNIST, SVHN, USPS, CIFAR-10 and STL-10). We compare TOHAN with 5 competitive baselines. Experiments show that TOHAN effectively transfers knowledge of the source hypothesis to train a target-domain classifier when we only have few labeled target data. In other words, our paper opens a new door for the domain adaptation field by addressing private-data leakage and data shortage simultaneously. |
| Researcher Affiliation | Academia | (1) State Key Laboratory of High Performance Computing, College of CS, NUDT; (2) CS Department, HKBU; (3) DeSI Lab, AAII, Faculty of Engineering and IT, UTS; (4) TML Lab, School of CS, Faculty of Engineering, USYD; (5) CSE Department, HKUST; (6) Peng Cheng Laboratory, Shenzhen. Emails: haoangchi618@gmail.com, feng.liu@uts.edu.au, {wenjing.yang, long.lan}@nudt.edu.cn, {bhanml, william}@comp.hkbu.edu.hk, jamesk@cse.ust.hk |
| Pseudocode | Yes | Algorithm 1 Target-oriented hypothesis adaptation network (TOHAN) |
| Open Source Code | Yes | The code is available at github.com/Haoang97/TOHAN. |
| Open Datasets | Yes | We conduct experiments on 8 FHA tasks on 5 datasets (MNIST (M), SVHN (S), USPS (U), CIFAR-10 (CF), STL-10 (SL)). We follow the standard domain-adaptation protocols [39] and compare the average accuracy of 5 independent repeated experiments. (A hedged loading sketch for these datasets follows the table.) |
| Dataset Splits | No | The paper mentions using "few labeled data" for training the target-domain classifier and describes the total number of target data used (e.g., 1 to 7 per class for the digit datasets, 10 for the object datasets), but it does not provide explicit details about how these data are split into training, validation, and test sets. It defines $D_t$ as the labeled target data but does not specify its role in a train/val/test split. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | Input: encoder $g_s$, classifier $h_s$, target data $D_t = \{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, learning rates $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$, total epoch $T_{\max}$, discriminator-pretraining epoch $T_d$, adaptation epoch $T_f$, network parameters $\{\theta_{G_n}\}_{n=1}^{N}$, $\theta_h$, $\theta_g$, $\theta_D$. ... where $\lambda$ is a hyper-parameter that trades off the two losses, weighting source-domain knowledge against target-domain knowledge (a sketch of this tradeoff follows the table). |
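
The $\lambda$ in the Experiment Setup row weights two loss terms against each other. Below is a minimal PyTorch sketch of that mechanism; the distillation-style source term, the convex-combination form, and the name `tradeoff_loss` are illustrative assumptions, not TOHAN's exact losses (those are defined in the paper's Algorithm 1).

```python
import torch
import torch.nn.functional as F

# lam plays the role of the paper's lambda: it weights source-domain
# knowledge against few-shot target supervision. The distillation form of
# the source term is an assumption for illustration only.
def tradeoff_loss(model, source_model, x_target, y_target, x_intermediate, lam=0.5):
    # Supervised term on the handful of labeled target examples.
    loss_target = F.cross_entropy(model(x_target), y_target)
    # Source-knowledge term: keep the new classifier consistent with the
    # frozen source hypothesis on intermediate-domain inputs.
    with torch.no_grad():
        teacher_probs = F.softmax(source_model(x_intermediate), dim=1)
    loss_source = F.kl_div(
        F.log_softmax(model(x_intermediate), dim=1),
        teacher_probs,
        reduction="batchmean",
    )
    return lam * loss_source + (1.0 - lam) * loss_target
```

With `lam` near 1 the classifier stays close to the source hypothesis; near 0 it fits only the few labeled target points.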
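
All five benchmarks in the Open Datasets row are distributed with `torchvision`. A minimal loading sketch follows, assuming the standard `torchvision` loaders and a common 32×32 resize; the paper's actual preprocessing and few-shot sampling are not reproduced here.

```python
import torchvision
import torchvision.transforms as T

# The 32x32 resize is an assumption, not the paper's documented preprocessing.
# MNIST and USPS are grayscale; SVHN, CIFAR-10 and STL-10 are RGB.
root = "./data"
to_tensor = T.Compose([T.Resize((32, 32)), T.ToTensor()])

mnist = torchvision.datasets.MNIST(root, train=True, download=True, transform=to_tensor)
svhn = torchvision.datasets.SVHN(root, split="train", download=True, transform=to_tensor)
usps = torchvision.datasets.USPS(root, train=True, download=True, transform=to_tensor)
cifar10 = torchvision.datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
stl10 = torchvision.datasets.STL10(root, split="train", download=True, transform=to_tensor)
```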