Learning Neurosymbolic Generative Models via Program Synthesis
Authors: Halley Young, Osbert Bastani, Mayur Naik
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we evaluate our approach on synthetic data and on a real-world dataset of building facades (Tyleček & Šára, 2013), both on the task of generation from scratch and on generation from a partial image. We show that our approach substantially outperforms several state-of-the-art deep generative models (Section 4). |
| Researcher Affiliation | Academia | University of Pennsylvania, USA. Correspondence to: Halley Young <halleyy@seas.upenn.edu>. |
| Pseudocode | Yes | Algorithm 1: Synthesizes a program P representing the global structure of a given image $x \in \mathbb{R}^{NM \times NM}$. (A hedged sketch of this style of synthesis appears after the table.) |
| Open Source Code | No | The paper does not provide any specific repository link or explicit statement about the release of the source code for the methodology described. |
| Open Datasets | Yes | Synthetic dataset. We developed a synthetic dataset based on MNIST. |
| Dataset Splits | No | The paper states dataset sizes for training and testing (e.g., '10,000 training and 500 test images' and '1755 training, 100 testing') but does not specify a separate validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper discusses various model architectures and frameworks (e.g., LSTM, VAE, Cycle GAN, GLCIC) but does not provide specific software names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, or specific Python library versions). |
| Experiment Setup | No | The paper mentions neural network architectures and general training strategies but lacks specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings in the main text or appendices. |
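For context, the Algorithm 1 caption quoted above describes synthesizing a program that captures an image's global structure. Below is a minimal, hypothetical Python sketch of that style of synthesis, assuming the image is treated as an N×N grid of M×M patches and that repeated structure is captured as straight-line "for loops" of patches at a fixed stride. The greedy search strategy, the function names, and the `min_coverage` parameter are all illustrative assumptions, not the authors' implementation.

```python
from itertools import product

def occupied_cells(grid):
    """Cells of the N x N patch grid whose patch label is non-empty."""
    n = len(grid)
    return {(r, c) for r, c in product(range(n), range(n))
            if grid[r][c] is not None}

def candidate_loops(cells, n):
    """Enumerate straight-line 'for loops': a start cell repeated k times
    at a fixed stride (right, down, or diagonal)."""
    for (r, c), (dr, dc) in product(cells, [(0, 1), (1, 0), (1, 1)]):
        for k in range(2, n + 1):
            trace = {(r + i * dr, c + i * dc) for i in range(k)}
            if trace <= cells:  # every cell of the loop must be occupied
                yield ((r, c), (dr, dc), k), trace

def synthesize_program(grid, min_coverage=0.9):
    """Greedily extract the loop explaining the most uncovered cells each
    round, until min_coverage of the occupied cells is accounted for."""
    cells = occupied_cells(grid)
    uncovered, program = set(cells), []
    while uncovered and len(uncovered) > (1 - min_coverage) * len(cells):
        best = max(candidate_loops(uncovered, len(grid)),
                   key=lambda lc: len(lc[1]), default=None)
        if best is None:  # no repeating structure left to extract
            break
        (start, stride, k), trace = best
        program.append((start, stride, k))
        uncovered -= trace
    # Each entry reads as: for i in range(k): place patch at start + i*stride
    return program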