Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects

Authors: Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, Ali Farhadi

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental evaluations show strong results. On our dataset of about 500 relative size comparisons, our method achieves 83.5% accuracy, compared to the 63.4% accuracy of a competitive NLP baseline." (A sketch of this pairwise evaluation follows the table.)
Researcher Affiliation | Collaboration | University of Washington, Allen Institute for AI; {hessam, hannaneh, yejin, ali}@washington.edu
Pseudocode | Yes | "Algorithm 1: The overview of our method." (A hedged sketch of one plausible reading of this loop follows the table.)
Open Source Code | Yes | "The code, data, and results are available at http://grail.cs.washington.edu/projects/size."
Open Datasets | Yes | "We use the Flickr 100M dataset (Thomee et al. 2015) as the source of tag lists needed to construct the size graph (Section 4.1). We compiled a dataset of size comparisons among different physical objects. Our final dataset includes a total of 486 object pairs between 41 physical objects." (A toy sketch of tag co-occurrence counting follows the table.)
Dataset Splits | No | The paper describes the compiled dataset and its size but does not specify train/validation/test splits: no percentages, sample counts, or partitioning method are given for reproducibility.
Hardware Specification | No | The paper does not report the hardware used for the experiments, such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions LEVAN detectors and the depth-estimation method of Eigen et al., but provides no version numbers for any software dependency.
Experiment Setup | No | The paper mentions a learning rate η and an initialization scheme, but does not report concrete hyperparameter values (e.g., the learning-rate value, batch size, number of epochs, or optimizer settings) or a complete training configuration.
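
The size-graph construction itself is not detailed in this summary. The following is a minimal toy sketch, assuming (as the reference to Section 4.1 suggests) that edges between physical objects are derived from tag co-occurrence on Flickr photos. The tag lists, object vocabulary, and counting scheme below are illustrative stand-ins, not the authors' pipeline.

```python
from collections import Counter
from itertools import combinations

# Toy stand-ins for Flickr 100M tag lists; each inner list is one photo's tags.
tag_lists = [
    ["elephant", "safari", "tree"],
    ["butterfly", "flower"],
    ["elephant", "dog"],
    ["dog", "butterfly", "garden"],
]

# Hypothetical vocabulary of physical objects (the paper's graph covers 41).
physical_objects = {"elephant", "dog", "butterfly", "tree", "flower"}

# Count how often two physical objects are tagged on the same photo; pairs
# that co-occur would be candidates for an edge in a size graph.
cooccurrence = Counter()
for tags in tag_lists:
    objs = sorted(set(tags) & physical_objects)
    for a, b in combinations(objs, 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence.most_common())
```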
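Algorithm 1 is only named here, so the next block is a hedged sketch of one plausible reading rather than the authors' implementation: keep a log-size estimate per object and refine it iteratively from pairwise log size-ratio observations with a learning rate eta (the paper mentions an η but not its value). The observation values and eta below are invented for illustration.

```python
import math
import random

# Hypothetical observations: (object_a, object_b, log of size ratio a/b).
# In the paper such evidence comes from co-detections and depth estimates;
# the numbers here are made up purely for illustration.
ratio_observations = [
    ("elephant", "butterfly", math.log(600.0)),
    ("elephant", "dog", math.log(8.0)),
    ("dog", "butterfly", math.log(70.0)),
]

objects = {o for a, b, _ in ratio_observations for o in (a, b)}
log_size = {o: 0.0 for o in objects}  # initialize all log-sizes to zero

eta = 0.05  # learning rate; the paper's value is not reported
for _ in range(2000):
    random.shuffle(ratio_observations)
    for a, b, r in ratio_observations:
        # Nudge the pair's log-size difference toward the observed log ratio.
        err = (log_size[a] - log_size[b]) - r
        log_size[a] -= eta * err
        log_size[b] += eta * err

# Only relative order is identifiable from ratios alone, which is exactly
# what the size-comparison evaluation needs.
print(log_size["elephant"] > log_size["butterfly"])  # True
```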
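The reported 83.5% is accuracy over relative size comparisons, so the evaluation reduces to checking predicted orderings against labeled pairs. A minimal sketch, with hypothetical predicted log-sizes and a toy stand-in for the 486-pair test set:

```python
def pairwise_accuracy(log_size, comparisons):
    """Fraction of labeled pairs (a, b, a_is_bigger) whose predicted
    log-sizes are ordered correctly. Ties count as wrong."""
    correct = sum(
        1 for a, b, a_is_bigger in comparisons
        if (log_size[a] > log_size[b]) == a_is_bigger
    )
    return correct / len(comparisons)

# Toy labeled pairs; the paper's set has 486 pairs over 41 objects.
comparisons = [
    ("elephant", "butterfly", True),
    ("butterfly", "dog", False),
    ("dog", "elephant", False),
]
log_size = {"elephant": 6.4, "dog": 4.2, "butterfly": 0.0}
print(pairwise_accuracy(log_size, comparisons))  # 1.0
```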