Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
Authors: Ankit Goyal, Kaiyu Yang, Dawei Yang, Jia Deng
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate that minimally contrastive examples can diagnose issues with current relation detection models as well as lead to sample-efficient training. |
| Researcher Affiliation | Academia | University of Michigan, Ann Arbor, MI; Princeton University, Princeton, NJ |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code and data are available at https://github.com/princeton-vl/Rel3D. |
| Open Datasets | Yes | Code and data are available at https://github.com/princeton-vl/Rel3D. |
| Dataset Splits | Yes | Hyper-parameters for each model are tuned separately using validation data, and the best-performing model on the validation set is used for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions software like Blender and Unity Web GL but does not specify their version numbers for reproducibility. |
| Experiment Setup | Yes | All images are resized to 224 × 224 before feeding into the model. We perform random cropping and color jittering on training data. Hyper-parameters for each model are tuned separately using validation data, and the best-performing model on the validation set is used for testing. |
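The training-time preprocessing described above (random crop and color jitter, then a resize to 224 × 224) might be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the crop fraction, jitter strength, and the nearest-neighbour resize are all assumptions, since the paper does not specify them.

```python
import numpy as np

def preprocess(img, out_size=224, crop_frac=0.9, jitter=0.1, rng=None):
    """Sketch of the described pipeline: random crop + color jitter,
    then resize to out_size x out_size. crop_frac and jitter are
    assumed values, not taken from the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = img.shape
    # Random crop: a crop_frac-sized window at a random offset.
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    img = img[top:top + ch, left:left + cw]
    # Color jitter: random per-channel brightness scaling.
    scale = 1.0 + rng.uniform(-jitter, jitter, size=3)
    img = np.clip(img * scale, 0.0, 1.0)
    # Nearest-neighbour resize to out_size x out_size (a stand-in for
    # a proper bilinear resize as in standard vision pipelines).
    ys = np.linspace(0, img.shape[0] - 1, out_size).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, out_size).astype(int)
    return img[np.ix_(ys, xs)]
```

In a real pipeline these steps would more likely be expressed with standard augmentation utilities (e.g. torchvision transforms); the sketch only makes the order of operations concrete.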