GibbsNet: Iterative Adversarial Inference for Deep Graphical Models

Authors: Alex M. Lamb, Devon Hjelm, Yaroslav Ganin, Joseph Paul Cohen, Aaron C. Courville, Yoshua Bengio

NeurIPS 2017

Reproducibility variables, results, and LLM responses:
Research Type: Experimental. "We show empirically that GibbsNet is able to learn a more complex p(z) and show that this leads to improved inpainting and iterative refinement of p(x, z) for dozens of steps and stable generation without collapse for thousands of steps, despite being trained on only a few steps."
Researcher Affiliation: Academia. No explicit institutional affiliations (university names, company names, or email domains) are given in the paper's text, but the authors are widely recognized researchers primarily affiliated with academic institutions in deep learning.
Pseudocode: No. The paper describes its training procedure and theoretical arguments in prose and through diagrams such as Figure 1, but it does not include structured pseudocode or an algorithm block.
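Since no algorithm block is given, the following is a minimal sketch of the training loop as the paper describes it in prose: an unclamped chain alternates decoder and encoder for a few steps starting from prior noise, a clamped chain encodes real data once, and a joint discriminator is trained to tell the resulting (x, z) pairs apart. The MLP architectures, deterministic conditionals, optimizer settings, chain length, and loss form below are illustrative assumptions, not details confirmed by the paper.

```python
# Hypothetical GibbsNet-style training loop (sketch; not the authors' code).
import torch
import torch.nn as nn

x_dim, z_dim, n_steps = 784, 64, 3  # n_steps = unclamped chain length (assumed)

# Deterministic stand-ins for the stochastic conditionals q(z|x) and p(x|z).
encoder = nn.Sequential(nn.Linear(x_dim, 512), nn.ReLU(), nn.Linear(512, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(), nn.Linear(512, x_dim))
disc = nn.Sequential(nn.Linear(x_dim + z_dim, 512), nn.ReLU(), nn.Linear(512, 1))

opt_g = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def unclamped_pair(batch_size):
    """Free-running chain: start from the prior, alternate decode/encode."""
    z = torch.randn(batch_size, z_dim)
    for _ in range(n_steps):
        x_fake = decoder(z)
        z = encoder(x_fake)
    return x_fake, z

for x_real in [torch.rand(32, x_dim)]:  # stand-in for a real data loader
    # Clamped chain: a single inference step on real data.
    z_real = encoder(x_real)
    x_fake, z_fake = unclamped_pair(x_real.size(0))

    # Discriminator step: clamped pairs labeled real, unclamped labeled fake.
    d_real = disc(torch.cat([x_real, z_real.detach()], dim=1))
    d_fake = disc(torch.cat([x_fake.detach(), z_fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push the two joint distributions toward each other.
    x_fake, z_fake = unclamped_pair(x_real.size(0))
    d_fake = disc(torch.cat([x_fake, z_fake], dim=1))
    d_real = disc(torch.cat([x_real, encoder(x_real)], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + bce(d_real, torch.zeros_like(d_real))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

How far gradients should flow back through the unclamped chain is one of the details a reproduction would need to resolve from the paper's prose.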
Open Source Code: No. The paper neither states that its source code is released nor links to a code repository for the GibbsNet method.
Open Datasets: Yes. "We evaluate this property on two datasets: Street View House Numbers (SVHN; Netzer et al., 2011) and permutation-invariant MNIST." The CIFAR-10 and CelebA (Liu et al., 2015) datasets were also used.
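All four datasets named above are publicly available; as one illustration (the paper does not say which framework or loaders were used), they can be fetched through torchvision:

```python
# Hypothetical data-loading sketch; the dataset names are from the paper,
# but the use of torchvision is an assumption.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
svhn = datasets.SVHN(root="data", split="train", download=True, transform=to_tensor)
cifar = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
celeba = datasets.CelebA(root="data", split="train", download=True, transform=to_tensor)
```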
Dataset Splits: No. The paper mentions "the MNIST validation set" and reports test accuracies on MNIST and SVHN, implying standard splits, but it does not give percentages, sample counts, or a description of the splitting methodology for the train/validation/test sets.
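Because the splits are unspecified, a reproduction has to pick its own convention. The sketch below uses the common (but here entirely assumed) MNIST choice of holding out 10,000 of the 60,000 training images for validation:

```python
# Assumed split convention; the paper does not confirm these counts.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

full_train = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
train_set, val_set = random_split(full_train, [50_000, 10_000],
                                  generator=torch.Generator().manual_seed(0))
test_set = datasets.MNIST(root="data", train=False, download=True,
                          transform=transforms.ToTensor())
```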
Hardware Specification: No. The paper does not provide details about the hardware used for its experiments, such as GPU/CPU models, memory, or cloud instance types.
Software Dependencies: No. The paper mentions using methods such as the "boundary-seeking GAN" and "code released from Salimans et al. (2016)", but it does not specify any software dependencies with version numbers (e.g., deep learning frameworks, libraries, or language versions) needed to reproduce the experiments.
Experiment Setup: No. The paper describes architectural similarities between GibbsNet and ALI, latent-space dimensions (e.g., a "2-D latent space"), network layer configurations (e.g., "1024 units", "6 layers with 2048 units"), and procedural steps (e.g., "running the chain for four steps"), but it does not provide training hyperparameters such as learning rates, batch sizes, optimizers, or number of epochs, which are needed to reproduce the experimental setup in detail.
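One way to make the gap concrete is to collect what the paper does and does not pin down in a single configuration sketch; every value set to None below is something a reproduction would have to guess:

```python
# Stated values are quoted from the paper; None marks unspecified settings.
config = {
    # Stated in the paper
    "chain_steps": 4,        # "running the chain for four steps"
    "latent_dim_toy": 2,     # "2-D latent space"
    "hidden_units": 1024,    # "1024 units"
    "deep_layers": 6,        # "6 layers with 2048 units"
    "deep_units": 2048,
    # Not stated in the paper
    "learning_rate": None,
    "batch_size": None,
    "optimizer": None,
    "num_epochs": None,
}
```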