VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming
Authors: Eleonora Misino, Giuseppe Marra, Emanuele Sansone
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments provide support on the benefits of this neuro-symbolic integration both in terms of task generalization and data efficiency. |
| Researcher Affiliation | Academia | Eleonora Misino, Department of Computer Science and Engineering, University of Bologna, Italy, eleonora.misino2@unibo.it; Giuseppe Marra, Emanuele Sansone, Department of Computer Science, KU Leuven, Belgium, {first}.{last}@kuleuven.be |
| Pseudocode | Yes | In Appendix ?? we report VAEL training algorithm (Algorithm ??) along with further details on the training procedure. |
| Open Source Code | Yes | The source code and the datasets are available at https://github.com/elemisi/vael under MIT license. |
| Open Datasets | Yes | 2digit MNIST dataset. We create a dataset of 64,400 images of two digits taken from the MNIST dataset [38]... The source code and the datasets are available at https://github.com/elemisi/vael under MIT license. |
| Dataset Splits | Yes | We use 65%, 20%, 15% splits for the train, validation and test sets, respectively. |
| Hardware Specification | No | The paper discusses the training and evaluation of models but does not provide specific details regarding the hardware (e.g., CPU, GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions tools like ProbLog but does not specify versions for any key software components or libraries required to reproduce the experiments. |
| Experiment Setup | No | The paper states that "Further implementation details can be found in Appendix ??" but does not provide specific hyperparameter values or detailed training configurations within the main text. |
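The reported 65%/20%/15% split of the 64,400-image 2digit MNIST dataset can be sketched as follows. This is an illustrative reconstruction, not code from the VAEL repository; the function name, shuffling strategy, and seed are assumptions.

```python
import random

def split_dataset(indices, train_frac=0.65, val_frac=0.20, seed=0):
    """Shuffle sample indices and partition them into train/val/test.

    Hypothetical sketch of the paper's 65/20/15 split; the actual
    VAEL code may split differently (e.g., stratified or unshuffled).
    """
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remaining ~15%
    return train, val, test

train, val, test = split_dataset(list(range(64_400)))
print(len(train), len(val), len(test))  # 41860 12880 9660
```

On 64,400 samples this yields 41,860 training, 12,880 validation, and 9,660 test examples, consistent with the stated proportions.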