Integrating Categorical Semantics into Unsupervised Domain Translation
Authors: Samuel Lavoie-Marchildon, Faruk Ahmed, Aaron Courville
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare CatS-UDT with other unsupervised domain translation methods and demonstrate that it shows significant improvements on the SPUDT and SHDT problems. We then perform ablation and comparative studies to investigate the cause of the improvements on both setups. We demonstrate SPUDT using the MNIST (LeCun & Cortes, 2010) and SVHN (Netzer et al., 2011) datasets and SHDT using Sketches and Reals samples from the DomainNet dataset (Peng et al., 2019). |
| Researcher Affiliation | Academia | Samuel Lavoie, Faruk Ahmed & Aaron Courville, Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Mila |
| Pseudocode | No | The paper does not include a clearly labeled 'Pseudocode' or 'Algorithm' block, nor does it present structured steps formatted like code or an algorithm. |
| Open Source Code | Yes | The public code can be found: https://github.com/lavoiems/Cats-UDT. |
| Open Datasets | Yes | We demonstrate SPUDT using the MNIST (LeCun & Cortes, 2010) and SVHN (Netzer et al., 2011) datasets and SHDT using Sketches and Reals samples from the DomainNet dataset (Peng et al., 2019). |
| Dataset Splits | Yes | The classifiers obtain an accuracy of 99.6% and 98.0% on the test set of MNIST and SVHN respectively as reported in the last column of Table 1. and This process yields an accuracy of 75.47% on the test set of sketches and 90.32% on the test set of real images. |
| Hardware Specification | No | The paper mentions 'Compute-Canada and Mila for providing the computing ressources used for this work', but does not specify any particular hardware models like specific GPUs, CPUs, or TPU versions. |
| Software Dependencies | Yes | Our results on MNIST↔SVHN and Sketches→Reals datasets were obtained using our Pytorch (Paszke et al., 2019) implementation. and This includes: Python, Pytorch (Paszke et al., 2019), Numpy (Harris et al., 2020) and Matplotlib (Hunter, 2007). |
| Experiment Setup | No | The paper describes some data preprocessing (e.g., 'upsample to 32×32 and triple the number of channels', 'feature values in the range [-1, 1]') but does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings in the main text. |