Provable Adaptation across Multiway Domains via Representation Learning
Authors: Zhili Feng, Shaobo Han, Simon Shaolei Du
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In addition, we provide experiments on a two-way MNIST, a four-way fiber sensing dataset, and also the GTOS dataset to demonstrate the effectiveness of our proposed model. |
| Researcher Affiliation | Collaboration | Zhili Feng (Carnegie Mellon University, zhilif@andrew.cmu.edu); Shaobo Han (NEC Laboratories America, Inc., shaobo@nec-labs.com); Simon S. Du (University of Washington, ssdu@cs.washington.edu) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes the methods using mathematical equations and textual explanations but not formal algorithm listings. |
| Open Source Code | No | The paper does not provide a concrete statement or link to access the source code for the methodology described. It only mentions using existing architectures and other algorithms (e.g., Fish algorithm) but not their own implementation code. |
| Open Datasets | Yes | We test our proposed method on a two-way MNIST dataset and four-way fiber sensing datasets. The GTOS dataset (Xue et al., 2017). |
| Dataset Splits | No | The paper mentions 'training data' and 'test data' but does not provide specific details on training/validation/test dataset splits, such as exact percentages, sample counts, or explicit references to how their combined domain data was split for these purposes. For example, 'During training time, we collect training data from (i, i) entries for all i ∈ [5], and leave data in any other domains for test only.' |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It only discusses training parameters and dataset usage. |
| Software Dependencies | No | The paper mentions software components like the 'Adam optimizer' and the 'LeNet architecture' but does not provide specific version numbers for any libraries, frameworks, or programming languages used in the experiments. |
| Experiment Setup | Yes | All models are trained using the cross-entropy loss. Throughout the experiments, the Adam optimizer with default learning rate 10^-3 is used... To prevent overfitting, we stop training of all models on the two-way MNIST dataset as soon as the last 50 iterations have average loss less than 0.05, and the training of all models on GTOS and the four-way fiber sensing dataset is stopped once the last 100 iterations have average loss less than 0.05. ...we set it to 0.05 for all the rest of the experiments. ...our proposed model and the ERM models exhibit better generalization when trained with a larger batch size of 200. (A hedged sketch of this setup appears below the table.) |
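
The experiment-setup row above describes a concrete training recipe, but no code is released (see the Open Source Code row), so the following is a minimal sketch under stated assumptions: `model`, `train_loader`, and `max_iters` are placeholders, while the Adam optimizer with learning rate 1e-3, the cross-entropy loss, and the trailing-window early-stopping rule (average loss over the last 50 or 100 iterations below 0.05) follow the quoted description.

```python
# Hedged sketch of the quoted training setup -- not the authors' released code.
# `model`, `train_loader`, and `max_iters` are placeholders; the optimizer, loss,
# learning rate, and early-stopping rule mirror the settings quoted in the table.
from collections import deque

import torch
import torch.nn as nn


def train(model, train_loader, window=50, loss_threshold=0.05, max_iters=10_000):
    """Train with Adam (lr=1e-3) and cross-entropy, stopping once the average
    loss over the last `window` iterations drops below `loss_threshold`
    (window=50 for two-way MNIST, 100 for GTOS / fiber sensing, per the paper)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # default learning rate 1e-3
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss
    recent_losses = deque(maxlen=window)                       # trailing loss window

    it = 0
    while it < max_iters:
        for x, y in train_loader:  # a batch size of 200 is reported to generalize better
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

            recent_losses.append(loss.item())
            it += 1
            # Early stopping: trailing average training loss below the threshold.
            if len(recent_losses) == window and sum(recent_losses) / window < loss_threshold:
                return model
            if it >= max_iters:
                break
    return model
```

Under these assumptions, the MNIST runs would use `window=50` and the GTOS and four-way fiber sensing runs `window=100`, matching the stopping criteria quoted in the table.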