Human-Like Sketch Object Recognition via Analogical Learning
Authors: Kezhen Chen, Irina Rabkina, Matthew D. McLure, Kenneth D. Forbus (pp. 1336-1343)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results from the MNIST dataset and a novel dataset, the Coloring Book Objects dataset, are provided. Comparison to existing approaches indicates that analogical generalization can be used to identify sketched objects from these datasets with several orders of magnitude fewer examples than deep learning systems require. |
| Researcher Affiliation | Academia | Kezhen Chen, Irina Rabkina, Matthew D. McLure, Kenneth D. Forbus, Northwestern University. {KezhenChen2021 | irabkina | mclure}@u.northwestern.edu, forbus@northwestern.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link for the "Coloring Book Objects dataset and CogSketch sketches" but does not explicitly state that the source code for the methodology described in the paper is open-source or available. |
| Open Datasets | Yes | The MNIST handwritten digit dataset (LeCun et al. 1998) is constructed from NIST's Special Database 3 and Special Database 1. ... The Coloring Book Objects dataset and CogSketch sketches can be found at http://www.qrg.northwestern.edu/Resources/cbo/index.html. |
| Dataset Splits | Yes | We use randomly-selected subsets of 10, 100, and 500 images for training and the full test set. ... We use leave-one-out cross-validation to perform sketched object recognition. In each round, nine images are used as training data and one image is used for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper mentions software tools like 'Potrace' and components like 'CogSketch', and describes the CNN model architecture (ReLU, softmax), but it does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | A SAGE assimilation threshold of 0.9 was used in all experiments. ... The model has 2 convolution layers with a ReLU activation followed by maxpooling layers and a fully connected layer with softmax. ... With 0.1 learning rate and a momentum equal to 0.9, the model could reach 75.42% average accuracy after 80 epochs (epoch = 13056 examples presented to the ConvNet). |
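The leave-one-out protocol quoted in the "Dataset Splits" row (nine images for training, one for testing, rotating the held-out image each round) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the image identifiers are placeholders.

```python
# Leave-one-out cross-validation over 10 images per category: each round
# holds out one image for testing and trains on the remaining nine.
images = [f"img_{i}" for i in range(10)]  # placeholder identifiers

rounds = []
for held_out in range(len(images)):
    test = [images[held_out]]
    train = images[:held_out] + images[held_out + 1:]
    rounds.append((train, test))

print(len(rounds))        # 10 rounds, one per held-out image
print(len(rounds[0][0]))  # 9 training images in each round
```

Every image appears exactly once as test data across the ten rounds, which is what makes the protocol's accuracy estimate use all available labels despite the tiny training set.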
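The "Experiment Setup" row reports the ConvNet baseline's optimizer settings: SGD with a 0.1 learning rate and 0.9 momentum. A minimal sketch of that update rule, applied to a toy quadratic objective (the objective and variable names are illustrative assumptions, not from the paper):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: v <- momentum*v - lr*grad; w <- w + v."""
    v = momentum * v - lr * grad
    return w + v, v

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, v, grad=w)

print(np.linalg.norm(w))  # norm shrinks toward 0 as the iterate converges
```

In the paper's experiment this update would be applied per minibatch over 80 epochs of 13,056 examples each; the sketch only isolates the reported hyperparameters.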