Efficient Generation of Structured Objects with Constrained Adversarial Networks

Authors: Luca Di Liello, Pierfrancesco Ardino, Jacopo Gobbi, Paolo Morettin, Stefano Teso, Andrea Passerini

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An extensive empirical analysis shows that CANs efficiently generate valid structures that are both high-quality and novel. We implemented CANs using TensorFlow and used PySDD to perform knowledge compilation. We tested CANs using different generator architectures on three real-world structured generative tasks. In all cases, we evaluated the objects generated by CANs and those of the baselines using three metrics (adopted from [36]): validity is the proportion of sampled objects that are valid; novelty is the proportion of valid sampled objects that are not present in the training data; and uniqueness is the proportion of valid unique (non-repeated) sampled objects.
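The three metrics quoted above can be sketched as a small helper. This is an illustrative sketch, not the authors' code: the `is_valid` predicate and the assumption that samples are hashable (so they can be compared and deduplicated) are ours.

```python
def generation_metrics(samples, is_valid, training_set):
    """Compute validity, novelty, and uniqueness for a batch of generated objects.

    samples:      list of generated objects (assumed hashable for set membership)
    is_valid:     predicate deciding whether an object satisfies the constraints
    training_set: set of objects seen during training
    """
    valid = [s for s in samples if is_valid(s)]
    # Validity: fraction of all samples that satisfy the constraints.
    validity = len(valid) / len(samples) if samples else 0.0
    # Novelty: fraction of valid samples not present in the training data.
    novelty = sum(s not in training_set for s in valid) / len(valid) if valid else 0.0
    # Uniqueness: fraction of distinct (non-repeated) objects among valid samples.
    uniqueness = len(set(valid)) / len(valid) if valid else 0.0
    return validity, novelty, uniqueness
```

For example, with four samples of which three are valid, one of those three is a training duplicate, and two are distinct, the function returns (0.75, 2/3, 2/3).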
Researcher Affiliation | Academia | Luca Di Liello, Pierfrancesco Ardino, Jacopo Gobbi, Stefano Teso, Andrea Passerini (University of Trento, firstname.lastname@unitn.it); Paolo Morettin (KU Leuven)
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The code is freely available at https://github.com/unitn-sml/CAN
Open Datasets | Yes | The structured objects are 14 × 28 tile-based representations of SMB levels (e.g. Fig. 2) and the training data is obtained by sliding a 28-tile window over levels from the Video Game Level Corpus [42].
Dataset Splits | No | The paper mentions 'validation experiments' for hyperparameter tuning but does not provide specific numerical details for a train/validation/test dataset split (e.g., percentages or sample counts).
Hardware Specification | Yes | In this experiment, we train the model on an NVIDIA RTX 2080 Ti.
Software Dependencies | No | The paper mentions TensorFlow and PySDD but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | We run all the experiments on a machine with a single 1080 Ti GPU four times with random seeds. We address both issues by introducing the SL after an initial bootstrap phase (of 5,000 epochs)... and by linearly increasing its weight from zero to λ = 0.2... All experiments were run for 12,000 epochs. Each training run lasted 15,000 epochs with all the default hyperparameters defined in [41], and the SL was activated from epoch 5,000 with λ = 0.01... The training is stopped once the uniqueness drops under 0.2.
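The schedule quoted above (zero semantic-loss weight during a 5,000-epoch bootstrap, then a linear increase to λ = 0.2) can be sketched as a simple function. The ramp length is an assumption for illustration; the excerpt only says the weight increases linearly, not at which epoch it reaches its maximum.

```python
def sl_weight(epoch, bootstrap=5000, ramp_epochs=1000, lam_max=0.2):
    """Semantic-loss weight schedule: 0 during bootstrap, then a linear ramp to lam_max.

    bootstrap and lam_max come from the paper's setup; ramp_epochs is an
    assumed parameter controlling how long the linear increase lasts.
    """
    if epoch < bootstrap:
        return 0.0
    # Linear ramp, capped at lam_max once the ramp is complete.
    return min(lam_max, lam_max * (epoch - bootstrap) / ramp_epochs)
```

With these defaults, the weight is 0 until epoch 5,000, reaches 0.1 at epoch 5,500, and stays at 0.2 from epoch 6,000 onward.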