Better Set Representations For Relational Reasoning
Authors: Qian Huang, Horace He, Abhay Singh, Yan Zhang, Ser Nam Lim, Austin R. Benson
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first use synthetic image experiments to demonstrate how our approach effectively decomposes objects without explicit supervision. Then, we insert our module into existing relational reasoning models and show that respecting set invariance leads to substantial gains in prediction performance and robustness on several relational reasoning tasks. |
| Researcher Affiliation | Collaboration | Qian Huang (Cornell University, qh53@cornell.edu); Horace He (Cornell University & Facebook, hh498@cornell.edu); Abhay Singh (Cornell University, as2626@cornell.edu); Yan Zhang (University of Southampton, yz5n12@ecs.soton.ac.uk); Ser-Nam Lim (Facebook AI, sernam@gmail.com); Austin R. Benson (Cornell University, arb@cs.cornell.edu) |
| Pseudocode | Yes | Algorithm 1 One forward pass of the Relational Reasoning System with SRN |
| Open Source Code | Yes | Code can be found at github.com/CUAI/BetterSetRepresentations. |
| Open Datasets | Yes | To this end, we construct a synthetic Circles Dataset for easy control over the latent structure. Each image is 64×64 pixels with RGB channels in the range 0 to 1 (Fig. 2(a) is an example data point). An image contains 0 to 10 circles with varying color and size. Each circle is fully contained in the image with no overlap between circles of the same color. We use 64000 images for training and 4000 images for testing. |
| Dataset Splits | Yes | We use 100000 images for training, 1000 for validation, and 1000 for test. |
| Hardware Specification | No | The paper does not explicitly state the hardware specifications (e.g., specific GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer, but does not provide version numbers for any software dependencies (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | We train the model with squared error image reconstruction loss using the Adam optimizer with learning rate 3e-4. See Appendix B for the full architecture details. ... We train all models for 50 epochs and select the epoch with the best validation accuracy. |
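The Experiment Setup row describes two reproducible pieces of the training protocol: a squared-error image reconstruction loss, and model selection by best validation accuracy over 50 epochs. A minimal sketch of those two pieces follows; the function names and the toy numbers are hypothetical (the paper's actual architecture and Adam optimizer with learning rate 3e-4 are omitted here).

```python
import numpy as np

def mse_reconstruction_loss(pred, target):
    # Squared-error image reconstruction loss, averaged over all pixels,
    # as described in the setup above.
    return float(np.mean((pred - target) ** 2))

def select_best_epoch(val_accuracies):
    # "Train for 50 epochs and select the epoch with the best validation
    # accuracy" reduces to an argmax over per-epoch accuracies
    # (1-indexed here for readability).
    return int(np.argmax(val_accuracies)) + 1

# Toy example: a 64x64 RGB image with values in [0, 1], matching the
# Circles Dataset description above.
rng = np.random.default_rng(0)
target = rng.random((3, 64, 64))
pred = np.clip(target + 0.1, 0.0, 1.0)  # a deliberately imperfect reconstruction
loss = mse_reconstruction_loss(pred, target)

accs = [0.71, 0.78, 0.80, 0.79]  # hypothetical per-epoch validation accuracies
best = select_best_epoch(accs)   # epoch 3 has the highest accuracy
```

This only illustrates the selection and loss logic the paper states; it is not the authors' implementation.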