Deep Generative Models with Learnable Knowledge Constraints

Authors: Zhiting Hu, Zichao Yang, Russ R. Salakhutdinov, Lianhui Qin, Xiaodan Liang, Haoye Dong, Eric P. Xing

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on human image generation and templated sentence generation show models with learned knowledge constraints by our algorithm greatly improve over base generative models.
Researcher Affiliation | Collaboration | Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Xiaodan Liang, Lianhui Qin, Haoye Dong, Eric P. Xing (Carnegie Mellon University, Petuum Inc.); {zhitingh,zichaoy,rsalakhu,xiaodan1}@cs.cmu.edu, eric.xing@petuum.com
Pseudocode | Yes | Algorithm 1: Joint Learning of Deep Generative Model and Constraints
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is provided or made publicly available, nor does it include a link to a code repository.
Open Datasets | Yes | We follow the previous work [37] and obtain from Deep Fashion [35] a set of triples (source image, pose keypoints, target image) as supervision data. Paired (template, sentence) data is obtained by randomly masking out different parts of sentences from the IMDB corpus [8] (see the sketch after this table for an illustration of the masking step).
Dataset Splits | No | The paper mentions using a 'test set' and 'test cases' but does not specify the train/validation/test dataset splits (e.g., percentages or exact counts) needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper describes the general setup for the base models and constraints (e.g., 'residual block architecture', 'L1 distance loss') but does not provide specific numerical hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings needed for full reproducibility.
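
As a point of reference for the templated-sentence setup noted in the Open Datasets row, below is a minimal sketch of how paired (template, sentence) data could be built by randomly masking out spans of a sentence. The mask token, span lengths, and number of masked spans are illustrative assumptions; the paper does not specify these details.

```python
import random

MASK = "<mask>"  # placeholder token; the actual template symbol is an assumption

def make_template(sentence, max_spans=2, max_span_len=3, rng=random):
    """Randomly mask contiguous word spans to form a (template, sentence) pair."""
    words = sentence.split()
    template = list(words)
    n_spans = rng.randint(1, max_spans)
    for _ in range(n_spans):
        span_len = rng.randint(1, max_span_len)
        start = rng.randint(0, max(0, len(words) - span_len))
        for i in range(start, min(len(words), start + span_len)):
            template[i] = MASK
    return " ".join(template), sentence

# Example: one masked template paired with its source sentence from a corpus line.
print(make_template("this movie was surprisingly good and well acted"))
```

This is only meant to make the data-construction step concrete; the actual masking scheme used with the IMDB corpus (unit of masking, span distribution, mask symbol) is not reported in the paper.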