Neurosymbolic Deep Generative Models for Sequence Data with Relational Constraints
Authors: Halley Young, Maxwell Du, Osbert Bastani
Venue: NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we show that our approach significantly improves over state-of-the-art in terms of capturing high-level structure in the data, while performing comparably or better in terms of low-level structure. |
| Researcher Affiliation | Academia | Halley Young, Department of Computer Science, University of Pennsylvania (halleyy@seas.upenn.edu); Maxwell Du, Department of Computer Science, University of Pennsylvania (mdu@seas.upenn.edu); Osbert Bastani, Department of Computer Science, University of Pennsylvania (obastani@seas.upenn.edu) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide a statement about, or a link to, open-source code for its proposed method. |
| Open Datasets | Yes | We used songs from the Essen folk song corpus (Schaffrath, 1995), using 2223 for training and 555 for testing (after removing examples with fewer than 16 measures or that were not in the standard 4/4 meter). For the poetry domain, we use poems from Project Gutenberg's poetry collection (Parrish, 2018), focusing on 10-line poems with rhymes and meter, with 2700 for training and 300 for testing. |
| Dataset Splits | Yes | We used songs from the Essen folk song corpus (Schaffrath, 1995), using 2223 for training and 555 for testing (after removing examples with fewer than 16 measures or that were not in the standard 4/4 meter). For the poetry domain, we use poems from Project Gutenberg's poetry collection (Parrish, 2018), focusing on 10-line poems with rhymes and meter, with 2700 for training and 300 for testing. (A hedged sketch of this filtering and split follows the table.) |
| Hardware Specification | No | No specific hardware specifications (e.g., GPU/CPU models, memory) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions software such as Z3, BERT, GPT-2, MusicAutobot, and MusicVAE but does not provide version numbers for these or other ancillary software components. |
| Experiment Setup | Yes | Our algorithm A uses Z3 to solve the optimization problem (H^*, K^*) = arg min {λ1·J1 + λ2·J2 + λ3·J3} subject to three constraints (Eq. 1), where λ1, λ2, λ3 ∈ ℝ≥0 are hyperparameters. The paper also specifies kmin and kmax, bounds on the number of prototype subcomponents. (A hedged Z3 sketch of this setup follows the table.) |
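
The Dataset Splits row describes a concrete preprocessing rule: keep only 4/4 songs with at least 16 measures, then split into roughly 80% training and 20% test (2223/555 for Essen). The sketch below illustrates that rule; the `Song` type and `load_essen_songs` loader are hypothetical placeholders, not the authors' code or a real Essen-parsing API.

```python
# Hypothetical sketch of the Essen-corpus preprocessing described in the
# paper: drop songs with fewer than 16 measures or a meter other than 4/4,
# then split into train/test. Song and load_essen_songs are placeholders.
import random
from dataclasses import dataclass

@dataclass
class Song:
    meter: str        # e.g., "4/4"
    n_measures: int   # number of measures in the song
    notes: list       # note events (representation unspecified here)

def filter_and_split(songs, test_fraction=0.2, seed=0):
    """Keep 4/4 songs with at least 16 measures, then split train/test."""
    kept = [s for s in songs if s.meter == "4/4" and s.n_measures >= 16]
    rng = random.Random(seed)
    rng.shuffle(kept)          # shuffle before splitting
    n_test = int(len(kept) * test_fraction)
    return kept[n_test:], kept[:n_test]   # (train, test)

# Usage, with a hypothetical loader for the raw corpus files:
# train, test = filter_and_split(load_essen_songs("essen/"))
```

Note that 555 / (2223 + 555) ≈ 0.2, so a 20% test fraction is consistent with the counts the paper reports, though the authors' exact split procedure is not stated.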
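
The Experiment Setup row says the method uses Z3 to minimize a weighted objective λ1·J1 + λ2·J2 + λ3·J3 subject to hard constraints, with kmin/kmax bounding the number of prototype subcomponents. Below is a minimal sketch of that pattern using Z3's `Optimize` interface. The decision variables, the cost terms J1–J3, and all numeric values are illustrative assumptions; the paper's actual objectives and constraints over (H, K) are not reproduced here.

```python
# Minimal sketch of a Z3 weighted-objective optimization in the style of
# Eq. (1); variables, costs, and bounds are assumed for illustration only.
from z3 import Int, Optimize, And, sat

# Hypothetical integer decision variables standing in for (H, K).
h = Int("h")   # e.g., a structural choice
k = Int("k")   # e.g., number of prototype subcomponents

# Hyperparameters lambda_1..lambda_3 (assumed values).
lam1, lam2, lam3 = 1, 2, 1

# Illustrative cost terms J1..J3 as linear expressions in h and k.
J1, J2, J3 = h, k, h + k

opt = Optimize()

# Hard constraints, including kmin <= k <= kmax bounds (assumed values).
k_min, k_max = 2, 8
opt.add(And(k >= k_min, k <= k_max))
opt.add(h >= 0)

# Minimize the weighted objective lam1*J1 + lam2*J2 + lam3*J3.
opt.minimize(lam1 * J1 + lam2 * J2 + lam3 * J3)

if opt.check() == sat:
    m = opt.model()
    print("h* =", m[h], "k* =", m[k])
```

This uses only the documented z3py calls (`Optimize`, `add`, `minimize`, `check`, `model`); how the paper encodes its structural constraints into Z3 terms is not specified in the extracted text.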