Generating Programmatic Referring Expressions via Program Synthesis
Authors: Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our algorithm on challenging benchmarks based on the CLEVR dataset, and demonstrate that our approach significantly outperforms several baselines. We evaluate our approach on the CLEVR dataset (Johnson et al., 2017), a synthetic dataset of objects with different attributes and spatial relationships. |
| Researcher Affiliation | Collaboration | 1 University of Pennsylvania; 2 University of Wisconsin–Madison; 3 Google Brain. Correspondence to: Jiani Huang <jianih@seas.upenn.edu>. |
| Pseudocode | Yes | Algorithm 1: Our algorithm for synthesizing referring relational programs. Hyperparameters are N, M, K ∈ ℕ. |
| Open Source Code | Yes | Our implementation is available at: https://github.com/moqingyan/object_reference_synthesis |
| Open Datasets | Yes | We evaluate our approach on the CLEVR dataset (Johnson et al., 2017) |
| Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a separate validation set. 'For each dataset, we use 7 total objects. Each dataset has 30 scene graphs for training (a total of 210 problem instances), and 500 scene graphs for testing (a total of 3500 problem instances).' |
| Hardware Specification | No | The paper mentions using neural networks (CNN, GCN) but does not specify any hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper refers to algorithms and models (e.g., deep Q-learning, graph convolutional network) but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We search for programs of length at most M = 8, using K = 2 in hierarchical synthesis. We consider three variables, i.e., |Z| = 3, including z_t. We use N = 200 rollouts during reinforcement learning. We pretrain our Q-network on the training set corresponding to each dataset, using N = 10000 gradient steps with a batch size of 5. |
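The hyperparameter and dataset-split values quoted in the Experiment Setup and Dataset Splits rows above can be collected into a single configuration sketch. This is only an illustrative summary of the reported settings; the key names below are assumptions for readability and do not correspond to identifiers in the authors' released code.

```python
# Hedged sketch: settings quoted in the table above, gathered into one place.
# All key names are illustrative assumptions, not identifiers from the paper's code.
config = {
    # Program synthesis (Algorithm 1)
    "max_program_length": 8,      # M = 8
    "hierarchical_k": 2,          # K = 2 in hierarchical synthesis
    "num_variables": 3,           # |Z| = 3, including z_t

    # Reinforcement learning
    "num_rollouts": 200,          # N = 200 rollouts

    # Q-network pretraining
    "pretrain_gradient_steps": 10000,
    "pretrain_batch_size": 5,

    # Dataset splits (per dataset)
    "objects_per_scene": 7,
    "train_scene_graphs": 30,     # 210 training problem instances in total
    "test_scene_graphs": 500,     # 3500 test problem instances in total
}
```

Note that no separate validation split is reported, consistent with the Dataset Splits assessment above.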