Learning about an exponential amount of conditional distributions
Authors: Mohamed Belghazi, Maxime Oquab, David Lopez-Paz
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Throughout a variety of experiments on synthetic and image data, we show the efficacy of NCs in generation and prediction tasks (Sections 5 and 7). |
| Researcher Affiliation | Collaboration | ¹Facebook AI Research, Paris, France; ²Montréal Institute for Learning Algorithms, Montréal, Canada |
| Pseudocode | No | The paper describes the training process in six steps but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We train NCs on SVHN and CelebA. We consider data imputation tasks on three UCI datasets [37]. [37] Moshe Lichman et al. UCI Machine Learning Repository, 2013. |
| Dataset Splits | Yes | We train 40 linear SVMs on learned representations extracted from the encoder using full available and requested masks (a = r = 1) on the CelebA validation set. (A linear-probe sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models or CPU specifications. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | We train the networks for 10,000 updates, with a batch-size of 512, and the Adam optimizer with a learning rate of 10⁻⁴, β₁ = 0.5, and β₂ = 0.999. For these experiments, both the discriminator and the NC have 2 hidden layers of 64 units each, and ReLU non-linearities. (A configuration sketch follows the table.) |
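The Dataset Splits row quotes a linear-probe evaluation: 40 linear SVMs, one per CelebA binary attribute, trained on representations extracted from the encoder. A minimal scikit-learn sketch of that protocol is below; the feature matrix and attribute labels are random placeholders standing in for the real encoder outputs and CelebA annotations, and the SVM hyper-parameters are not specified in the quote.

```python
# Minimal sketch of the linear-probe protocol quoted in the Dataset Splits row.
# `encoder_features` and `attribute_labels` are PLACEHOLDERS, not real data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
encoder_features = rng.normal(size=(1000, 64))           # placeholder: encoder outputs
attribute_labels = rng.integers(0, 2, size=(1000, 40))   # placeholder: 40 binary attributes

probes = []
for k in range(40):  # one linear SVM per CelebA attribute
    clf = LinearSVC().fit(encoder_features, attribute_labels[:, k])
    probes.append(clf)
```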
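The Experiment Setup row quotes concrete hyper-parameters (10,000 updates, batch size 512, Adam with lr = 10⁻⁴ and betas (0.5, 0.999), two 64-unit hidden layers with ReLU for both networks). As a reading aid, here is a minimal PyTorch sketch wiring those numbers together. The data dimensionality, the NC's input format, the discriminator's input, and the training loss are not given in the quote and are assumptions or placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the quoted configuration; loss and I/O dims are assumed.
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden=64):
    # Two hidden layers of 64 units with ReLU, per the Experiment Setup row.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

x_dim = 2                            # placeholder: synthetic-data dimensionality
nc = make_mlp(3 * x_dim, x_dim)      # assumption: NC input is (masked x, mask a, mask r)
disc = make_mlp(2 * x_dim, 1)        # assumption: discriminator input format

opt_nc = torch.optim.Adam(nc.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4, betas=(0.5, 0.999))

batch_size = 512
num_updates = 10_000
```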