Creating Images by Learning Image Semantics Using Vector Space Models

Authors: Derrall Heath, Dan Ventura

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the semantic model with an image clustering technique and demonstrate that the model is successful in creating images that communicate semantic relationships. ... We apply these clustering methods here and show that the new semantic model successfully enables DARCI to render images that convey a larger variety of concepts in ways that accurately reflect their semantic relationships. ... We start with evaluating how well the semantic modeling component learns to predict word vectors from images. We then use clustering techniques to determine how well the images that DARCI produces actually reflect their intended adjective. Finally, we evaluate how clusters of images relate to each other and to the word vectors on which they are based. (See the clustering sketch after this table.)
Researcher Affiliation | Academia | Derrall Heath and Dan Ventura, Computer Science Department, Brigham Young University, Provo, UT 84602 USA, dheath@byu.edu, ventura@cs.byu.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using a 'publicly available implementation of the skip-gram model', but does not provide access to the authors' own source code for the methodology described in the paper.
Open Datasets | Yes | We use a publicly available implementation of the skip-gram model and a lemmatized Wikipedia corpus to learn the word vectors (Denoyer and Gallinari 2006). ... We maintain a dataset of approximately 15,000 images that have either been explicitly hand labeled or automatically retrieved through Google image search. (See the word-vector sketch after this table.)
Dataset Splits | Yes | We compare our visual semantic model (Vector) with a binary relevance model (Binary) using 10-fold cross validation. (See the cross-validation sketch after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., specific GPU/CPU models, memory).
Software Dependencies | No | The paper mentions using a 'publicly available implementation of the skip-gram model' and 'WEKA', but does not specify version numbers for these or other software components used in their experiments.
Experiment Setup | Yes | The parameters for the neural networks were determined through experimentation (see the Evaluation Section for the metrics used) and include a learning rate of 0.01, a momentum of 0.1, and 100 hidden nodes. (See the network-setup sketch after this table.)
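For readers attempting a reproduction, a minimal sketch of the word-vector step referenced in the Open Datasets row is given below. The paper only states that a publicly available skip-gram implementation was trained on a lemmatized Wikipedia corpus; the use of gensim, the corpus file name, and every hyperparameter shown are assumptions.

```python
# Minimal sketch: skip-gram word vectors from a lemmatized corpus.
# gensim (>= 4.0), the file path, and all hyperparameters are assumptions;
# the paper only names "a publicly available implementation of the
# skip-gram model" trained on a lemmatized Wikipedia corpus.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One lemmatized sentence per line (hypothetical file name).
sentences = LineSentence("wikipedia_lemmatized.txt")

model = Word2Vec(
    sentences,
    sg=1,             # skip-gram rather than CBOW
    vector_size=100,  # dimensionality is not stated in the report
    window=5,
    min_count=5,
    workers=4,
)

vector = model.wv["peaceful"]  # look up the vector for an adjective of interest
```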
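The Experiment Setup row reports a learning rate of 0.01, a momentum of 0.1, and 100 hidden nodes. The sketch below plugs those values into scikit-learn's MLPRegressor; the library, the loss, the number of epochs, and the use of a regressor that maps image features to word vectors are assumptions, not details confirmed by the paper.

```python
# Sketch of a network configured with the reported hyperparameters.
# scikit-learn is an assumption; only the learning rate (0.01),
# momentum (0.1), and hidden-layer size (100) come from the paper.
from sklearn.neural_network import MLPRegressor

model = MLPRegressor(
    hidden_layer_sizes=(100,),  # 100 hidden nodes
    solver="sgd",               # plain SGD so the momentum term applies
    learning_rate_init=0.01,    # learning rate 0.01
    momentum=0.1,               # momentum 0.1
    max_iter=500,               # assumption; the number of epochs is not reported
)

# Hypothetical usage: regress from image features to word vectors.
# model.fit(image_features, word_vectors)
```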
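The Dataset Splits row says the visual semantic model (Vector) and the binary relevance model (Binary) were compared with 10-fold cross validation. The sketch below shows a generic version of that protocol; scikit-learn, the score() metric, and the placeholder data arrays are assumptions.

```python
# Sketch of a 10-fold cross-validation comparison between two models.
# scikit-learn, the score() metric, and the data arrays are assumptions;
# the paper only states that 10-fold cross validation was used.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(make_model, X, y, n_splits=10, seed=0):
    """Return one score per fold for a model produced by make_model()."""
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return np.array(scores)

# Hypothetical comparison of the two models from the paper:
# vector_scores = cross_validate(lambda: VectorModel(), X, y)
# binary_scores = cross_validate(lambda: BinaryModel(), X, y)
```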
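Finally, the Research Type row quotes a clustering-based evaluation: images are clustered and the clusters are compared with the intended adjectives and their word vectors. A minimal sketch of that kind of check is shown below; k-means, the cluster count, the agreement metric, and the placeholder arrays are assumptions, since the excerpt does not name the clustering method.

```python
# Sketch of clustering predicted word vectors to check whether images
# rendered for the same adjective group together. k-means, the cluster
# count, the metric, and the input arrays are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

predicted_vectors = np.load("predicted_word_vectors.npy")    # hypothetical, one row per image
intended_adjectives = np.load("intended_adjective_ids.npy")  # hypothetical integer labels

kmeans = KMeans(n_clusters=len(set(intended_adjectives)), n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(predicted_vectors)

# Agreement between discovered clusters and the intended adjectives.
print(adjusted_rand_score(intended_adjectives, cluster_labels))
```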