Sketch Recognition via Part-based Hierarchical Analogical Learning

Authors: Kezhen Chen, Ken Forbus, Balaji Vasan Srinivasan, Niyati Chhaya, Madeline Usher

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the TU Berlin dataset and the Coloring Book Objects dataset show that the system can learn explainable models in a data-efficient manner.
Researcher Affiliation | Collaboration | Kezhen Chen¹, Ken Forbus¹, Balaji Vasan Srinivasan², Niyati Chhaya² and Madeline Usher¹. ¹Northwestern University, ²Adobe Research. kezhenchen@google.com, forbus@northwestern.edu, {balsrini, nchhaya}@adobe.com, usher@northwestern.edu
Pseudocode | Yes | Algorithm 1: Hierarchical Analogical Retrieval
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the work described, nor does it provide a direct link to a code repository for PHAL.
Open Datasets | Yes | We performed experiments on two datasets, the TU Berlin dataset [Eitz et al., 2012] and Coloring Book Objects dataset [Chen et al., 2019].
Dataset Splits | Yes | We used the popular training/testing splits, where each category has 16 testing sketches and the rest of the sketches are training samples. [...] We use the same cross-validation method described in [Chen et al., 2019] for evaluation. In each of ten rounds, a random image from each category is used as the testing sample and the other nine images in each category are used as training samples.
Hardware Specification | Yes | Also, our approach only uses up to 10 CPUs to encode sketches and 1 CPU computer to perform hierarchical analogical learning.
Software Dependencies | No | The paper mentions software tools such as CogSketch, Potrace, and the Zhang-Suen thinning algorithm, but does not provide specific version numbers for these or other ancillary software components.
Experiment Setup | Yes | We performed a hyperparameter search, settling on 0.8 as the assimilation threshold and 0.2 as the cutoff probability for all three encoding levels. On the full dataset, the numbers of categories we keep at each level are 20, 10, and 5. [...] After a hyperparameter search, we use 0.7 as the assimilation threshold and 0.2 as the cutoff probability for all three levels. During hierarchical analogical retrieval, we keep the top 10, 5, and 3 categories in Levels 1, 2, and 3 respectively.
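
For concreteness, the two split procedures quoted in the Dataset Splits row can be written out as a short Python sketch. The function names, the dict-of-lists data layout, and the fixed random seed are illustrative assumptions; only the 16-sketches-per-category hold-out and the ten-round, one-image-per-category cross-validation come from the paper.

```python
import random

def tu_berlin_split(sketches_by_category, n_test=16, seed=0):
    """Hold out n_test sketches per category for testing; the rest train.
    (Hypothetical helper; the paper states only the 16-per-category split.)"""
    rng = random.Random(seed)
    train, test = [], []
    for category, sketches in sketches_by_category.items():
        shuffled = list(sketches)
        rng.shuffle(shuffled)
        test += [(category, s) for s in shuffled[:n_test]]
        train += [(category, s) for s in shuffled[n_test:]]
    return train, test

def coloring_book_rounds(images_by_category, n_rounds=10, seed=0):
    """Ten-round cross-validation: each round, one random image per category
    is the test sample and the remaining nine are training samples."""
    rng = random.Random(seed)
    for _ in range(n_rounds):
        train, test = [], []
        for category, images in images_by_category.items():
            idx = rng.randrange(len(images))
            test.append((category, images[idx]))
            train += [(category, im) for j, im in enumerate(images) if j != idx]
        yield train, test
```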
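Similarly, the hyperparameters quoted under Experiment Setup can be collected into configuration records, together with a skeleton of the coarse-to-fine, top-k filtering those parameters imply. The config names, the pairing of configurations with datasets, the `score_category` callback, and the retrieval skeleton are assumptions for illustration; the paper's actual procedure is Algorithm 1 (Hierarchical Analogical Retrieval), which is not reproduced here.

```python
# Hyperparameters quoted in the Experiment Setup row. Which configuration
# belongs to which dataset is inferred from the quote's ordering.
CONFIG_FULL_DATASET = {
    "assimilation_threshold": 0.8,
    "cutoff_probability": 0.2,
    "top_k_per_level": [20, 10, 5],   # Levels 1, 2, 3
}
CONFIG_SECOND_DATASET = {
    "assimilation_threshold": 0.7,
    "cutoff_probability": 0.2,
    "top_k_per_level": [10, 5, 3],    # Levels 1, 2, 3
}

def hierarchical_top_k(probe, models_by_level, config, score_category):
    """Illustrative coarse-to-fine filtering: at each encoding level, score
    the surviving candidate categories against the probe and keep only the
    top-k before moving to the next, more detailed level."""
    candidates = list(models_by_level[0].keys())
    for level, k in enumerate(config["top_k_per_level"]):
        scores = {c: score_category(probe, models_by_level[level][c], level)
                  for c in candidates}
        candidates = sorted(candidates, key=scores.get, reverse=True)[:k]
    return candidates
```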