Knowledge Transfer with Interactive Learning of Semantic Relationships
Authors: Jonghyun Choi, Sung Ju Hwang, Leonid Sigal, Larry Davis
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the proposed model in a few-shot multi-class classification scenario, where we measure classification performance on a set of target classes, with few training instances, by leveraging and transferring knowledge from anchor classes that contain a larger set of labeled instances. |
| Researcher Affiliation | Collaboration | Jonghyun Choi, University of Maryland, College Park, MD, jhchoi@umiacs.umd.edu; Sung Ju Hwang, UNIST, Ulsan, Korea, sjhwang@unist.ac.kr; Leonid Sigal, Disney Research, Pittsburgh, PA, lsigal@disneyresearch.com; Larry S. Davis, University of Maryland, College Park, MD, lsd@umiacs.umd.edu |
| Pseudocode | Yes | We summarize the overall procedure in Algorithm 1 and describe the steps in the following subsections. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology. |
| Open Datasets | Yes | We use two object categorization datasets: 1) Animals with Attributes (AWA) (Lampert, Nickisch, and Harmeling 2009), which consists of 50 animal classes and 30,475 images, and 2) ImageNet-50 (Hwang, Grauman, and Sha 2013), which consists of 70,380 images of 50 categories. |
| Dataset Splits | Yes | For the validation and test sets, we use a 50/50 split of the remaining samples, excluding the training samples. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | We evaluate the performance of knowledge transfer by measuring the classification accuracy of each model on the target classes in a challenging set-up that has only a few training samples (2, 5 and 10 samples per class, few-shot learning) with a prior learned with anchor classes that have a larger number of training samples (30 samples per class). |
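The split protocol quoted above (sample a fixed number of training instances per class, then divide the remainder 50/50 into validation and test sets) can be sketched as follows. This is a minimal illustration of that protocol, not the authors' released code; the function name and interface are hypothetical.

```python
import random


def few_shot_split(labels, n_train, seed=0):
    """Hypothetical sketch of the paper's per-class split protocol:
    draw n_train training instances per class, then split the
    remaining samples of each class 50/50 into validation and test.
    `labels` is a list of class labels indexed by sample id."""
    rng = random.Random(seed)

    # Group sample indices by class label.
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)

    train, val, test = [], [], []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        # First n_train shuffled samples become the training set.
        train += idxs[:n_train]
        # Remaining samples are split 50/50 into validation and test.
        rest = idxs[n_train:]
        half = len(rest) // 2
        val += rest[:half]
        test += rest[half:]
    return train, val, test
```

For the few-shot target classes, `n_train` would be 2, 5, or 10; for the anchor classes, 30, matching the setup row above.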