Top-N: Equivariant Set and Graph Generation without Exchangeability
Authors: Clement Vignac, Pascal Frossard
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, our method outperforms i.i.d. generation by 15% at Set MNIST reconstruction, by 33% at object detection on CLEVR, generates sets that are 74% closer to the true distribution on a synthetic molecule-like dataset, and generates more valid molecules on QM9. |
| Researcher Affiliation | Academia | Clément Vignac, Pascal Frossard, LTS4, EPFL, Lausanne, Switzerland |
| Pseudocode | No | The paper provides mathematical equations for the Top-n creation module but does not include a formally labeled 'Algorithm' or 'Pseudocode' block. |
| Open Source Code | Yes | Source code is available at github.com/cvignac/Top-N |
| Open Datasets | Yes | We first perform experiments on the Set MNIST benchmark, introduced in Zhang et al. (2019)... We further benchmark Top-n on object detection with the CLEVR dataset, made of 70k training images and 15k validation images... Finally, we evaluate Top-n on a graph generation task. We train a graph VAE (detailed in Appendix D.3) on QM9 molecules... |
| Dataset Splits | Yes | We further benchmark Top-n on object detection with the CLEVR dataset, made of 70k training images and 15k validation images... |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions general software components like 'Transformer layers', 'MLP', 'PNA global pooling layer', and 'Adam' optimizer, but does not provide specific version numbers for any software or libraries. |
| Experiment Setup | Yes | TSPN was therefore trained for 100 epochs with a learning rate of 5e-4, and DSPN with a learning rate of 1e-4 for 200 epochs... We use a learning rate of 2e-4 and a scheduler that halves it when reconstruction performance does not improve significantly after 750 epochs... The reference set contains 35 points... The model is trained over 600 epochs with a batch size of 512 and a learning rate of 2e-3. It is halved after 100 epochs when the loss does not improve anymore. The reference set has 12 points. |
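The scheduler quoted in the experiment-setup row (halve the learning rate when the loss stops improving) is a standard reduce-on-plateau policy. A minimal, framework-free sketch in Python; the class name, the `patience` counter, and the `min_delta` threshold are illustrative assumptions, not details stated in the paper:

```python
class ReduceOnPlateau:
    """Halve the learning rate when the monitored loss stops improving.

    Sketch of a reduce-on-plateau schedule like the one the paper
    describes (e.g. halving a 2e-3 learning rate on a loss plateau).
    `patience` and `min_delta` are hypothetical knobs for illustration.
    """

    def __init__(self, lr, factor=0.5, patience=100, min_delta=1e-4):
        self.lr = lr
        self.factor = factor        # halve the lr on each plateau
        self.patience = patience    # epochs without improvement before reducing
        self.min_delta = min_delta  # smallest change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        """Call once per epoch with the current loss; returns the new lr."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

In a training loop, `step` would be called after each epoch's validation pass; frameworks such as PyTorch ship an equivalent `ReduceLROnPlateau` scheduler.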