Unsupervised Adversarial Invariance
Authors: Ayush Jaiswal, Rex Yue Wu, Wael Abd-Almageed, Prem Natarajan
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental results on three tasks relevant to invariant feature learning for improved prediction of target variables: (1) invariance to inherent nuisance factors, (2) effective use of synthetic data augmentation for learning invariance, and (3) domain adaptation. |
| Researcher Affiliation | Academia | Ayush Jaiswal, Yue Wu, Wael Abd Almageed, Premkumar Natarajan USC Information Sciences Institute Marina del Rey, CA, USA {ajaiswal, yue_wu, wamageed, pnataraj}@isi.edu |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We provide experimental results on three tasks relevant to invariant feature learning for improved prediction of target variables: (1) invariance to inherent nuisance factors, (2) effective use of synthetic data augmentation for learning invariance, and (3) domain adaptation. We evaluate the performance of our model and prior works on two metrics: accuracy of predicting y from e1 (A_y) and accuracy of predicting z from e1 (A_z). We provide results of our framework at the task of learning invariance to inherent nuisance factors on two datasets: Extended Yale-B [7] and Chairs [2]... We use two variants of the MNIST [12] dataset... We evaluate the performance of our model at domain adaptation on the Amazon Reviews dataset [4]. |
| Dataset Splits | Yes | We use the prior works' version of the dataset, which has lighting conditions classified into five groups: front, upper-left, upper-right, lower-left and lower-right, with the same split of 38 × 5 = 190 samples used for training and the rest used for testing [13, 14, 19]. ... We split the data into training and testing sets by picking alternate yaw angles. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., Python, PyTorch, TensorFlow versions) needed to replicate the experiment. |
| Experiment Setup | Yes | We update M1 and M2 in the frequency ratio of 1:k. We found k = 5 to perform well in our experiments. ... We found α = 100, β = 0.1 and γ = 1 to work well for all datasets on which we evaluated the proposed model. ... We use the same architecture for the predictor and the encoder as CAI (as presented in [19]), i.e., single-layer neural networks, except that our encoder produces two encodings instead of one. We also model the decoder and the disentanglers as single-layer neural networks. |
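The quoted setup describes an adversarial training scheme: a main model M1 (encoder, predictor, decoder) and adversarial disentanglers M2 are updated in a 1:k frequency ratio with k = 5, single-layer networks throughout, and loss weights α = 100, β = 0.1, γ = 1. The sketch below is an illustrative PyTorch reconstruction under stated assumptions, not the authors' implementation: the layer sizes, dropout rate on e1, learning rates, optimizer choice, and the exact form of the adversarial term are all assumptions.

```python
# Minimal sketch of the UAI-style training loop described in the paper's
# experiment setup. Dimensions, optimizer, learning rate, dropout rate,
# and the adversarial objective's exact form are assumptions.
import torch
import torch.nn as nn

x_dim, e_dim, y_dim = 784, 64, 10            # hypothetical sizes

encoder = nn.Linear(x_dim, 2 * e_dim)        # single layer; emits two encodings e1, e2
predictor = nn.Linear(e_dim, y_dim)          # predicts y from e1
decoder = nn.Linear(2 * e_dim, x_dim)        # reconstructs x from [dropout(e1), e2]
dis1 = nn.Linear(e_dim, e_dim)               # disentangler: predicts e2 from e1
dis2 = nn.Linear(e_dim, e_dim)               # disentangler: predicts e1 from e2

alpha, beta, gamma, k = 100.0, 0.1, 1.0, 5   # values reported in the paper
dropout = nn.Dropout(p=0.5)                  # noisy e1 for reconstruction (rate assumed)

m1_params = (list(encoder.parameters()) + list(predictor.parameters())
             + list(decoder.parameters()))
opt_m1 = torch.optim.Adam(m1_params, lr=1e-3)                          # lr assumed
opt_m2 = torch.optim.Adam(list(dis1.parameters()) + list(dis2.parameters()),
                          lr=1e-3)

ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def step(x, y, update_main):
    e1, e2 = encoder(x).chunk(2, dim=1)
    if update_main:
        # M1 update: predict y, reconstruct x, and oppose the disentanglers
        # (here by maximizing their prediction error; the paper's exact
        # adversarial objective for M1 may differ).
        loss = (alpha * ce(predictor(e1), y)
                + beta * mse(decoder(torch.cat([dropout(e1), e2], dim=1)), x)
                - gamma * (mse(dis1(e1), e2.detach())
                           + mse(dis2(e2), e1.detach())))
        opt_m1.zero_grad()
        loss.backward()
        opt_m1.step()
    else:
        # M2 update: train each disentangler to predict one encoding from
        # the other, with the encoder frozen.
        e1, e2 = e1.detach(), e2.detach()
        loss = mse(dis1(e1), e2) + mse(dis2(e2), e1)
        opt_m2.zero_grad()
        loss.backward()
        opt_m2.step()

# 1:k schedule: one M1 update for every k = 5 M2 updates.
for t in range(1000):
    x = torch.randn(32, x_dim)                       # dummy batch
    y = torch.randint(0, y_dim, (32,))
    step(x, y, update_main=(t % (k + 1) == 0))
```

The sign flip on the disentangler term in the M1 branch is one common way to realize the adversarial game; the paper may implement the M1-side objective differently, so that term should be read as a placeholder for the actual adversarial loss rather than the authors' formulation.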