Single-Label Multi-Class Image Classification by Deep Logistic Regression
Authors: Qi Dong, Xiatian Zhu, Shaogang Gong
AAAI 2019, pp. 3486-3493
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive comparative evaluations demonstrate the model learning advantages of the proposed LR functions over the commonly adopted SR in single-label coarse-grained object categorisation and cross-class fine-grained person instance identification tasks. We also show the performance superiority of our method on clothing attribute classification in comparison to the vanilla LR function. |
| Researcher Affiliation | Collaboration | Qi Dong,1 Xiatian Zhu,2 Shaogang Gong1 1Queen Mary University of London, 2Vision Semantics Ltd. q.dong@qmul.ac.uk, eddy@visionsemantics.com, s.gong@qmul.ac.uk |
| Pseudocode | No | The paper describes the proposed algorithms mathematically but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code has been made publicly available. |
| Open Datasets | Yes | CIFAR10 and CIFAR100 (Krizhevsky and Hinton 2009) both have 32×32 sized images from 10 and 100 classes, respectively. Tiny ImageNet (Tiny200) (Deng et al. 2009) contains 110,000 64×64 images from 200 classes. The Market-1501 (Zheng et al. 2015) has 32,668 images of 1,501 different identities (ID). The DukeMTMC (Ristani et al. 2016) consists of 36,411 images of 1,404 IDs. This dataset has 289,222 images labelled with 1,000 fine-grained clothing attributes with a 209,222/40,000/40,000 train/val/test benchmark setting. |
| Dataset Splits | Yes | We adopted the benchmarking 50,000/10,000 train/test image split on both [CIFAR]. We followed the standard 100,000/10,000 train/val setting [Tiny ImageNet]. The Market-1501 (Zheng et al. 2015) has 32,668 images of 1,501 different identities (ID) captured from 6 outdoor camera views. We followed the standard 751/750 train/test ID split. This dataset has 289,222 images labelled with 1,000 fine-grained clothing attributes with a 209,222/40,000/40,000 train/val/test benchmark setting. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | We carried out all the following experiments in TensorFlow (Abadi et al. 2016). The paper mentions TensorFlow but does not specify its version or other software dependencies with version numbers. |
| Experiment Setup | Yes | We used the standard SGD with momentum for model training. We set the initial learning rate to 0.1, the momentum to 0.9, the weight decay to 10^-4, the batch size to 128/64/128 for CIFAR/Tiny200/ImageNet, the epoch number to 300. We set the parameter m (Eq (5)) in the range of [25, 75] and r = 2 (Eq (6)) (r = 50 for ImageNet) by a grid search on the validation dataset. Data augmentation includes horizontal flipping and translation. |
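The hyperparameters quoted in the Experiment Setup row can be collected into a small sketch. This is an illustrative reconstruction, not the authors' released code: the `CONFIG` dictionary and the `sgd_momentum_step` helper (including its name and signature) are assumptions, and the update rule shown is the classical SGD-with-momentum step with L2 weight decay folded into the gradient.

```python
# Hedged sketch of the reported training setup. All identifiers here are
# illustrative assumptions; the paper does not publish this snippet.
CONFIG = {
    "optimizer": "SGD with momentum",
    "learning_rate": 0.1,                 # initial learning rate
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "batch_size": {"CIFAR": 128, "Tiny200": 64, "ImageNet": 128},
    "epochs": 300,
    "augmentation": ["horizontal_flip", "translation"],
}

def sgd_momentum_step(w, grad, velocity,
                      lr=0.1, momentum=0.9, weight_decay=1e-4):
    """One SGD-with-momentum update on a scalar weight; L2 weight decay
    is added to the gradient (the classical, coupled formulation)."""
    g = grad + weight_decay * w           # regularised gradient
    velocity = momentum * velocity - lr * g
    return w + velocity, velocity
```

As a sanity check, a single step from `w = 1.0` with zero gradient shrinks the weight by `lr * weight_decay = 1e-5`, which is the expected pull of the L2 term alone.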