Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation
Authors: Alexander H. Liu, Yen-Cheng Liu, Yu-Ying Yeh, Yu-Chiang Frank Wang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section heading: "4 Experiment Results" |
| Researcher Affiliation | Academia | National Taiwan University, Taiwan; Georgia Institute of Technology, USA; University of California, San Diego, USA; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taiwan |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Implementation of our proposed method and the datasets are now available." (https://github.com/Alexander-H-Liu/UFDN) |
| Open Datasets | Yes | The MNIST, USPS, and Street View House Number (SVHN) datasets are treated as three different domains and used as benchmark datasets in unsupervised domain adaptation (UDA) tasks. |
| Dataset Splits | No | The paper specifies training and testing image counts for MNIST and USPS (e.g., "MNIST contains 60000/10000 training/testing images"), but does not explicitly detail validation set splits or percentages across all datasets. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions) that are needed to replicate the experiments. |
| Experiment Setup | No | The paper describes the model and its objectives but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |