Generalized Adversarially Learned Inference
Authors: Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai
AAAI 2021, pp. 7185-7192 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through comprehensive experiments, we demonstrate the efficacy, scalability, and flexibility of the proposed approach for a variety of tasks. |
| Researcher Affiliation | Collaboration | Yatin Dandi (1), Homanga Bharadhwaj (2), Abhishek Kumar (3), Piyush Rai (1); affiliations: 1 Indian Institute of Technology Kanpur, 2 University of Toronto, Vector Institute, 3 Google Brain; emails: yatind@iitk.ac.in, homanga@cs.toronto.edu, abhishk@google.com, piyush@cse.iitk.ac.in |
| Pseudocode | No | The paper describes algorithms using mathematical equations and textual explanations, but does not include any explicit pseudocode blocks or figures labeled 'Algorithm' or 'Pseudocode'. |
| Open Source Code | No | We will make the experimental code publicly available. |
| Open Datasets | Yes | Through experiments on two benchmark datasets, SVHN (Netzer et al. 2011) and CelebA (Liu et al. 2015), we aim to assess the reconstruction quality, meaningfulness of the learned representations for use in downstream tasks and generation, effects of extending the approach to more classes of tuples and larger tuple size, the ability of the proposed approach to incorporate knowledge from pretrained models trained for a different task, and its adaptability to specific tasks such as inpainting. |
| Dataset Splits | Yes | The hyperparameters of the SVM model are selected using a held-out validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models, for running the experiments. |
| Software Dependencies | No | The paper mentions techniques and architectural details (e.g., 'spectral normalization', 'batch normalization and dropout') and references other works for architectures, but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For all our proposed models and both ALI (Dumoulin et al. 2017) and ALICE (Li et al. 2017a) (ALI + L2 reconstruction error) baselines, we borrow the architectures from (Dumoulin et al. 2017) with the discriminator using spectral normalization (Miyato et al. 2018) instead of batch normalization and dropout (Srivastava 2013). All the architectural details and hyper-parameters considered are further described in the appendix. |
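The "Open Datasets" and "Dataset Splits" rows report that SVHN and CelebA are used and that SVM hyperparameters are chosen on a held-out validation set, but the paper does not release a loader or split script. The sketch below shows one way such a protocol could look, assuming torchvision and scikit-learn; the dataset root, subsample size, validation-split size, feature choice (raw pixels standing in for the learned inference-network representations), and the C grid are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch for the "Open Datasets" / "Dataset Splits" rows above.
# All sizes and the hyperparameter grid are assumptions for illustration.
import numpy as np
from torchvision import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# SVHN (Netzer et al. 2011) downloads via torchvision; CelebA (Liu et al. 2015)
# is available the same way (datasets.CelebA) but is omitted to keep this short.
svhn = datasets.SVHN(root="./data", split="train", download=True)

# Subsample and flatten to keep the sketch cheap to run.
X = svhn.data[:10000].reshape(10000, -1).astype(np.float32) / 255.0
y = svhn.labels[:10000]

# Hold out a validation set and select the SVM regularization strength on it.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=2000, random_state=0)

best_C, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):  # assumed hyperparameter grid
    acc = LinearSVC(C=C, max_iter=2000).fit(X_tr, y_tr).score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc
print(f"selected C={best_C} (validation accuracy {best_acc:.3f})")
```

In the paper the SVM would be fit on representations produced by the learned encoder rather than on raw pixels; the exact split sizes are not specified in the evaluated text.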
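The "Experiment Setup" row states that the discriminator uses spectral normalization (Miyato et al. 2018) in place of batch normalization and dropout, with architectures borrowed from Dumoulin et al. (2017) and full details deferred to the appendix. Below is a minimal PyTorch sketch of an ALI-style joint discriminator built this way; the layer widths, kernel sizes, and latent dimension are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a spectrally normalized joint discriminator, as described
# in the "Experiment Setup" row. Layer sizes are assumed, not the authors'.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch, out_ch, kernel, stride):
    """A conv layer wrapped with spectral normalization (Miyato et al. 2018)."""
    return spectral_norm(nn.Conv2d(in_ch, out_ch, kernel, stride, padding=1))

class Discriminator(nn.Module):
    """Scores (image, latent) pairs jointly, ALI-style, with no batch norm or dropout."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.image_net = nn.Sequential(
            sn_conv(3, 64, 4, 2), nn.LeakyReLU(0.2),
            sn_conv(64, 128, 4, 2), nn.LeakyReLU(0.2),
            sn_conv(128, 256, 4, 2), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.joint_net = nn.Sequential(
            spectral_norm(nn.Linear(256 + latent_dim, 512)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Linear(512, 1)),
        )

    def forward(self, x, z):
        h = self.image_net(x)
        return self.joint_net(torch.cat([h, z], dim=1))

# Example usage on a dummy batch of 32x32 images and latent codes.
d = Discriminator()
score = d(torch.randn(8, 3, 32, 32), torch.randn(8, 256))
print(score.shape)  # torch.Size([8, 1])
```

The paper's actual discriminator follows the Dumoulin et al. (2017) architectures; consult the appendix referenced in the row above for the exact layers and hyperparameters.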