Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Better Set Representations For Relational Reasoning

Authors: Qian Huang, Horace He, Abhay Singh, Yan Zhang, Ser-Nam Lim, Austin R. Benson

NeurIPS 2020 | Venue PDF | LLM Run Details

Reproducibility variables (each with its classified Result and the supporting LLM Response):
Research Type: Experimental
LLM Response: "We first use synthetic image experiments to demonstrate how our approach effectively decomposes objects without explicit supervision. Then, we insert our module into existing relational reasoning models and show that respecting set invariance leads to substantial gains in prediction performance and robustness on several relational reasoning tasks."
Researcher Affiliation: Collaboration
LLM Response: Qian Huang (Cornell University), Horace He (Cornell University & Facebook), Abhay Singh (Cornell University), Yan Zhang (University of Southampton), Ser-Nam Lim (Facebook AI), Austin R. Benson (Cornell University)
Pseudocode: Yes
LLM Response: "Algorithm 1: One forward pass of the Relational Reasoning System with SRN"
Open Source Code: Yes
LLM Response: "Code can be found at github.com/CUAI/BetterSetRepresentations."
Open Datasets: Yes
LLM Response: "To this end, we construct a synthetic Circles Dataset for easy control over the latent structure. Each image is 64×64 pixels with RGB channels in the range 0 to 1 (Fig. 2(a) is an example data point). An image contains 0 to 10 circles with varying color and size. Each circle is fully contained in the image with no overlap between circles of the same color. We use 64000 images for training and 4000 images for testing."
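The excerpt above fully specifies the Circles Dataset's image format (64×64 RGB in [0, 1], 0 to 10 fully contained circles). A minimal sketch of such a generator is below; it is illustrative only, not the authors' code, and it simplifies the overlap rule by rejecting all overlaps rather than only same-color ones. The radius range and retry count are assumptions.

```python
import numpy as np

def make_circles_image(rng, size=64, max_circles=10):
    """Illustrative Circles-style data point: a size x size RGB image in
    [0, 1] with 0..max_circles circles of varying color and radius, each
    fully inside the frame. Non-overlap is enforced for ALL circle pairs
    (a simplification; the paper only forbids same-color overlap)."""
    img = np.zeros((size, size, 3), dtype=np.float32)
    n_circles = rng.integers(0, max_circles + 1)
    placed = []  # (cx, cy, r) of circles drawn so far
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_circles):
        for _attempt in range(50):  # rejection sampling for non-overlap
            r = int(rng.integers(3, 9))            # assumed radius range
            cx = int(rng.integers(r, size - r))    # fully contained
            cy = int(rng.integers(r, size - r))
            if all((cx - px) ** 2 + (cy - py) ** 2 > (r + pr) ** 2
                   for px, py, pr in placed):
                placed.append((cx, cy, r))
                mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
                img[mask] = rng.random(3)          # random RGB color
                break
    return img
```

The paper's actual dataset would be produced by sampling 64000 such images for training and 4000 for testing.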
Dataset Splits: Yes
LLM Response: "We use 100000 images for training, 1000 for validation, and 1000 for test."
Hardware Specification: No
LLM Response: The paper does not explicitly state the hardware specifications (e.g., specific GPU/CPU models, memory) used for running the experiments.
Software Dependencies: No
LLM Response: The paper mentions software components such as the Adam optimizer, but does not provide version numbers for any software dependencies (e.g., Python, PyTorch, or TensorFlow versions, or versions of specific libraries).
Experiment Setup: Yes
LLM Response: "We train the model with squared error image reconstruction loss using the Adam optimizer with learning rate 3e-4. See Appendix B for the full architecture details. ... We train all models for 50 epochs and select the epoch with the best validation accuracy."