Object Recognition with Hidden Attributes
Authors: Xiaoyang Wang, Qiang Ji
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark data sets demonstrate that both approaches can effectively improve the learning of the object classifiers over the baseline models that do not use attributes, and their combination reaches the best performance. Experiments also show that the proposed approaches outperform both state of the art methods that use attributes as middle level representation and the approaches that learn the classifiers with hidden information. |
| Researcher Affiliation | Academia | Xiaoyang Wang and Qiang Ji Rensselaer Polytechnic Institute, Troy, NY, USA |
| Pseudocode | No | The paper describes mathematical formulations and equations but does not present any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the described methodology. |
| Open Datasets | Yes | We perform experiments on natural scene object classification on two benchmark datasets: a Pascal dataset [Farhadi et al., 2009] and Animals with Attributes (AWA) dataset [Lampert et al., 2009]. |
| Dataset Splits | Yes | The a Pascal dataset contains 6340 training images and 6355 testing images collected from Pascal VOC 2008 challenge. [...] During classifier learning, the coefficients are tuned through a two fold cross validation procedure within the training set. [...] We use 100 samples per object class for training, and 200 samples per object class for testing. |
| Hardware Specification | No | The paper does not specify any hardware details like CPU, GPU models, or memory used for running experiments. |
| Software Dependencies | No | The paper mentions using logistic regression (LR) and support vector machine (SVM) models but does not name any software packages or version numbers used to implement them. |
| Experiment Setup | No | During classifier learning, the coefficients are tuned through a two fold cross validation procedure within the training set. The results are shown in Table 1, where both the overall accuracy rate and mean per-class recognition accuracy rate are given. [...] We use 100 samples per object class for training, and 200 samples per object class for testing. As in [Sharmanska et al., 2013], we repeat such training/testing split procedure for 20 times. |
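
The Dataset Splits and Experiment Setup rows above quote the evaluation protocol: on AWA, 100 training and 200 testing samples per object class with the random split repeated 20 times, and classifier coefficients tuned by two-fold cross validation within the training set. The snippet below is a minimal sketch of that protocol under stated assumptions, not the authors' implementation: it assumes pre-extracted feature vectors and integer labels as NumPy arrays, and it substitutes scikit-learn's `LogisticRegression` for the paper's attribute-augmented LR/SVM models; the function name `evaluate_awa_protocol` and the regularization grid are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold


def evaluate_awa_protocol(features, labels, n_repeats=20, n_train=100, n_test=200, seed=0):
    """Repeat a per-class 100-train / 200-test split `n_repeats` times and
    report the average overall and mean per-class accuracy.

    Assumes every class has at least ``n_train + n_test`` samples.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    overall_acc, per_class_acc = [], []

    for _ in range(n_repeats):
        train_idx, test_idx = [], []
        for c in classes:
            # Randomly draw 100 training and 200 testing samples per class.
            idx = rng.permutation(np.where(labels == c)[0])
            train_idx.extend(idx[:n_train])
            test_idx.extend(idx[n_train:n_train + n_test])

        X_tr, y_tr = features[train_idx], labels[train_idx]
        X_te, y_te = features[test_idx], labels[test_idx]

        # Tune the regularization coefficient with two-fold cross validation
        # inside the training split, mirroring the quoted tuning procedure.
        tuner = GridSearchCV(
            LogisticRegression(max_iter=1000),
            param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
            cv=StratifiedKFold(n_splits=2, shuffle=True, random_state=seed),
        )
        tuner.fit(X_tr, y_tr)

        y_pred = tuner.predict(X_te)
        overall_acc.append(accuracy_score(y_te, y_pred))
        per_class_acc.append(balanced_accuracy_score(y_te, y_pred))

    return float(np.mean(overall_acc)), float(np.mean(per_class_acc))
```

Here `balanced_accuracy_score` stands in for the mean per-class recognition accuracy that the paper reports alongside the overall accuracy rate in its Table 1.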