Object Recognition with and without Objects
Authors: Zhuotun Zhu, Lingxi Xie, Alan Yuille
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework. |
| Researcher Affiliation | Academia | Zhuotun Zhu, Lingxi Xie, Alan Yuille Johns Hopkins University, Baltimore, MD, USA {zhuotun, 198808xc, alan.l.yuille}@gmail.com |
| Pseudocode | No | The paper describes methods in prose and does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We first construct datasets from ILSVRC2012 [Russakovsky et al., 2015], i.e., one foreground set and one background set, by taking advantage of the ground-truth bounding box(es) provided in both training and testing cases. |
| Dataset Splits | Yes | The ILSVRC2012 dataset [Russakovsky et al., 2015] contains about 1.3M training and 50K validation images. Throughout this paper, we refer to the original dataset as OrigSet and the validation images are regarded as our testing set. [...] There are totally 544,539 training images and 50,000 testing images on FGSet. [...] in the end, 289,031 training images and 50,000 testing images are preserved (for BGSet). (See the dataset-construction sketch after the table.) |
| Hardware Specification | No | The paper mentions 'powerful computational resources' and 'powerful computation source like GPUs' in a general sense but does not provide specific details such as GPU/CPU models, memory, or machine specifications used for experiments. |
| Software Dependencies | No | The paper mentions using the 'CAFFE library' and 'MatConvNet platform' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The base learning rate is set to 0.01, and reduced by 1/10 for every 100,000 iterations. The momentum is set to be 0.9 and the weight decay parameter is 0.0005. A total number of 450,000 iterations is conducted, which corresponds to around 90 training epochs on the original dataset. ... we adjust the dropout ratio as 0.7 to avoid the overfitting issue. (See the training-schedule sketch after the table.) |
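
The dataset construction quoted under "Open Datasets" and "Dataset Splits" (cropping ground-truth bounding boxes to build FGSet, and removing them to build BGSet) can be sketched in a few lines. The paper releases no code, so the following is a minimal Python/PIL illustration only: the annotation parsing, the mean-color infill, and the coverage threshold deciding which BGSet images are kept are all assumptions, not the authors' procedure.

```python
from PIL import Image

def foreground_crops(image_path, bboxes):
    """FGSet: crop each ground-truth box (xmin, ymin, xmax, ymax).

    `bboxes` come from the ILSVRC2012 annotations; parsing them is
    out of scope here.
    """
    img = Image.open(image_path).convert("RGB")
    return [img.crop(box) for box in bboxes]

def background_image(image_path, bboxes, max_coverage=0.5):
    """BGSet: blank out every object box, keeping only background.

    Images whose boxes cover too much of the frame are discarded; the
    0.5 threshold is an assumed heuristic (the paper only reports the
    final count of 289,031 preserved training images).
    """
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    covered = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in bboxes)
    if covered > max_coverage * w * h:
        return None  # too little background left to be useful
    mean = img.resize((1, 1)).getpixel((0, 0))  # rough mean-color infill
    for box in bboxes:
        img.paste(mean, box)
    return img
```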
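
Similarly, the hyperparameters quoted under "Experiment Setup" describe a standard step-decay SGD schedule. The paper trained with the CAFFE library; the PyTorch function below merely restates the reported values (base learning rate 0.01, decay by 1/10 every 100,000 iterations, momentum 0.9, weight decay 0.0005, 450,000 iterations, dropout 0.7) and is not the authors' configuration. The AlexNet backbone and the data loader are assumed.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

def train(train_loader):
    """Run the training schedule reported in the paper; `train_loader`
    is an assumed ILSVRC2012 iterator yielding (images, labels) batches."""
    model = alexnet(num_classes=1000)  # backbone is an assumption
    for m in model.classifier:
        if isinstance(m, nn.Dropout):
            m.p = 0.7  # "we adjust the dropout ratio as 0.7"

    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=0.01,            # base learning rate
        momentum=0.9,
        weight_decay=0.0005,
    )
    # Reduce the learning rate by 1/10 every 100,000 iterations.
    scheduler = torch.optim.lr_scheduler.StepLR(
        optimizer, step_size=100_000, gamma=0.1)
    criterion = nn.CrossEntropyLoss()

    for step, (images, labels) in enumerate(train_loader):
        if step >= 450_000:  # ~90 epochs on the original dataset
            break
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
        scheduler.step()  # step per iteration to match the quoted schedule
    return model
```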