Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning
Authors: Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is guided by a small number of part annotations, and it achieves superior performance (about 13%–107% improvement) in part center prediction on the PASCAL VOC and ImageNet datasets. |
| Researcher Affiliation | Academia | Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu (University of California, Los Angeles) |
| Pseudocode | No | The paper describes the learning process and equations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes here: https://sites.google.com/site/cnnsemantics/ |
| Open Datasets | Yes | We chose the 16-layer VGG network (VGG-16) (Simonyan and Zisserman 2015) that was pre-trained using the 1.3M images in the ImageNet ILSVRC 2012 dataset (Deng et al. 2009) for object classification. ... We tested our method on three benchmark datasets: the PASCAL VOC Part Dataset (Chen et al. 2014), the CUB200-2011 dataset (Wah et al. 2011), and the ILSVRC 2013 DET dataset (Deng et al. 2009). |
| Dataset Splits | No | In Experiments, we annotated 3–12 boxes for each part to build the AOG, and we used the rest of the images in the dataset as testing images. No explicit validation split is mentioned. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using VGG-16 but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We chose the 16-layer VGG network (VGG-16)... We chose the last 9 (from the 5th to the 13th) conv-layers as valid conv-layers... Exp. 1, three-shot AOG construction: ... used a total of three annotations... Exp. 2, AOG construction with more annotations: ... annotated four parts in four different object images. (A layer-selection sketch follows the table.) |
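
The split protocol quoted in the Dataset Splits row is simple enough to express directly: a handful of annotated images grow the AOG, and everything else becomes the test set. Below is a minimal sketch of that protocol; the image list, the annotation count, and the random seed are hypothetical placeholders, not values taken from the authors' code.

```python
# Minimal sketch of the paper's split protocol: a few annotated images
# are used to grow the AOG, and all remaining images are used for testing.
# `all_image_ids` and `num_annotated` are hypothetical placeholders.
import random

all_image_ids = [f"img_{i:04d}" for i in range(500)]  # hypothetical image list
num_annotated = 3  # the paper annotates 3-12 boxes per part

rng = random.Random(0)  # hypothetical seed for a repeatable split
shuffled = all_image_ids[:]
rng.shuffle(shuffled)

annotated_ids = shuffled[:num_annotated]  # grow the AOG from these
test_ids = shuffled[num_annotated:]       # everything else is held out for testing
```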
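
Likewise, the valid conv-layer selection in the Experiment Setup row (the 5th through the 13th conv-layer of VGG-16) can be reproduced in a few lines. This sketch assumes a recent torchvision and its pre-trained VGG-16; the authors' original implementation is not specified in the paper, so treat this as an illustration rather than their code.

```python
# Minimal sketch: selecting the last 9 conv-layers (5th-13th) of VGG-16
# as the "valid conv-layers" named in the experiment setup.
# Assumes torchvision's VGG-16, not the authors' original implementation.
import torch.nn as nn
import torchvision.models as models

vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# VGG-16 has 13 conv-layers in `features`; keep the 5th through the 13th.
conv_layers = [m for m in vgg16.features if isinstance(m, nn.Conv2d)]
valid_conv_layers = conv_layers[4:13]  # 1-indexed conv-layers 5..13

for idx, layer in enumerate(valid_conv_layers, start=5):
    print(f"conv-layer {idx}: {layer}")
```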