Constrained Generation of Semantically Valid Graphs via Regularizing Variational Autoencoders

Authors: Tengfei Ma, Jie Chen, Cao Xiao

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper is experimental, with a dedicated Section 5 ("Experiments") comprising 5.1 "Tasks, Data Sets, and Baselines", 5.2 "Network Architecture", and 5.3 "Results", which reports Table 2 (standard VAE versus regularized VAE) and Table 3 (comparison with baselines).
Researcher Affiliation | Industry | All three authors (Tengfei Ma, Jie Chen, Cao Xiao) are affiliated with IBM Research (Tengfei.Ma1@ibm.com, {chenjie,cxiao}@us.ibm.com).
Pseudocode | No | The paper describes its methods in prose but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions only that code for the baselines was downloaded ('The codes are downloaded from https://github.com/mkusner/grammarVAE'). There is no statement or link indicating that the authors' own source code for the proposed method is available.
Open Datasets | Yes | For molecular graphs, two benchmark data sets are used: QM9 [32] and ZINC [21]. The former contains molecules with at most 9 heavy atoms, whereas the latter consists of drug-like, commercially available molecules extracted at random from the ZINC database (see the loading sketch after the table).
Dataset Splits | No | The paper mentions 'holdout graphs (in the training set)' when reporting reconstruction percentages, but it does not specify exact split percentages, sample counts, or the splitting methodology for the training, validation, or test sets.
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models, processor speeds, or memory sizes).
Software Dependencies | No | The paper describes network architectures (e.g., convolutional and deconvolutional neural networks) but does not name ancillary software or library versions (e.g., Python 3.8, PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | No | The paper describes the network architecture and states that 'Regularization parameters are tuned for the highest validity,' but the main text provides no concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings; a sketch of what such a regularized objective looks like follows below.
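
Context for the Open Datasets row: QM9 and ZINC are public benchmarks, so the data side of the experiments is reproducible even without the authors' code. Below is a minimal sketch of one way to fetch them, assuming DeepChem's MoleculeNet loaders (dc.molnet.load_qm9 and dc.molnet.load_zinc15); this is not the authors' data pipeline, and the featurizer choice is an arbitrary assumption.

```python
# Minimal sketch (not the authors' pipeline): obtaining the QM9 and ZINC
# benchmarks through DeepChem's MoleculeNet loaders.
import deepchem as dc

# Each loader returns (task names, (train, valid, test), transformers).
qm9_tasks, qm9_splits, _ = dc.molnet.load_qm9(featurizer="ECFP")
zinc_tasks, zinc_splits, _ = dc.molnet.load_zinc15(featurizer="ECFP")

train, valid, test = qm9_splits
print(f"QM9 split sizes: {len(train)} train / {len(valid)} valid / {len(test)} test")
```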
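
On the Experiment Setup row: the 'regularization parameter' being tuned is, per the paper's framing, the weight on the constraint-violation penalty added to the standard VAE objective, and 'validity' for molecules is conventionally measured as the fraction of generated SMILES strings that RDKit can parse. The sketch below illustrates that setup under those assumptions; the function names, weighting scheme, and penalty form are illustrative, not the authors' implementation.

```python
import torch
from rdkit import Chem

def regularized_vae_loss(recon_loss: torch.Tensor,
                         kl_loss: torch.Tensor,
                         validity_penalty: torch.Tensor,
                         mu: float = 1.0) -> torch.Tensor:
    """Negative ELBO (reconstruction + KL) plus a semantic-constraint
    penalty weighted by `mu`, the kind of regularization parameter the
    paper reports tuning for the highest validity. Illustrative only;
    the paper's exact penalty terms differ."""
    return recon_loss + kl_loss + mu * validity_penalty

def fraction_valid(smiles_list):
    """Standard validity metric for molecule generation: the fraction of
    generated SMILES strings that RDKit can parse into a molecule."""
    if not smiles_list:
        return 0.0
    return sum(Chem.MolFromSmiles(s) is not None for s in smiles_list) / len(smiles_list)
```

Tuning then amounts to sweeping `mu` and keeping the value that maximizes fraction_valid on samples decoded from the prior.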