Logo-2K+: A Large-Scale Logo Dataset for Scalable Logo Classification

Authors: Jing Wang, Weiqing Min, Sujuan Hou, Shengnan Ma, Yuanjie Zheng, Haishuai Wang, Shuqiang Jiang (pp. 6194-6201)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive experiments on Logo-2K+ and other three existing benchmark datasets demonstrate the effectiveness of proposed method. ... We conduct extensive evaluation on four datasets, including newly proposed Logo-2K+, and other three datasets with different scales, namely Belga Logos (Neumann, Samet, and Soffer 2002), Flickr Logos-32 (Romberg et al. 2011) and Web Logo-2M (Su, Gong, and Zhu 2017). The experimental results verified the effectiveness of the proposed method on all these datasets."
Researcher Affiliation | Academia | Jing Wang [1], Weiqing Min [2], Sujuan Hou [1], Shengnan Ma [1], Yuanjie Zheng [1], Haishuai Wang [3], Shuqiang Jiang [2]. [1] School of Information Science and Engineering, Shandong Normal University; [2] Institute of Computing Technology, Chinese Academy of Sciences; [3] Department of Computer Science and Engineering, Fairfield University.
Pseudocode | No | The paper describes the architecture and mathematical formulations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: "the Logo-2K+ dataset can be found at https://github.com/msn199959/Logo-2k-plus-Dataset." This link points to the dataset only, not to open-source code for the DRNA-Net method.
Open Datasets | Yes | "Logo-2K+ and the proposed strong baseline DRNA-Net are expected to further the development of scalable logo image recognition, and the Logo-2K+ dataset can be found at https://github.com/msn199959/Logo-2k-plus-Dataset. ... Besides Logo-2K+, we also conduct the evaluation on another publicly available benchmark datasets, Belga Loges, Flickr Logo-32 and Web Logo-2M to further verify the effectiveness of our method."
Dataset Splits | No | For Logo-2K+ and the other datasets, the paper states that "70%, 30% of images are randomly selected for training and testing in each logo category." It does not mention a separate validation split.
Hardware Specification | Yes | "We adopt the Pytorch framework to train the network and implement our algorithm on a NVIDIA Tesla V100 GPU (32GB)."
Software Dependencies | No | The paper mentions using the "Pytorch framework" but does not specify its version number or any other software dependencies with version details.
Experiment Setup | Yes | "For our method, we adopt ResNet-50 and ResNet-152 pretrained on ILSVRC2012 as the feature extractor. The thresholds of region cropping and dropping θc and θd are both set to 0.5. We empirically set M = 4 in the navigator sub-network and K = 2 in the teacher sub-network. In Eq. 8, hyper-parameter weights α = β = γ = 1 without prior. It is optimized using stochastic gradient descent with a momentum of 0.9, a batch size of 8 and weight decay of 0.0001. ... All the models are trained for 100 epochs with an initial learning rate of 0.001 and decreased after 20 epochs to 0.0001."
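The 70%/30% per-category split reported under Dataset Splits can be sketched as follows. This is a minimal illustration of the stated protocol, not the authors' code; the category name, file names, and random seed are illustrative assumptions.

```python
import random

def split_per_category(images_by_category, train_frac=0.7, seed=0):
    """Randomly split each logo category into train/test subsets.

    `images_by_category` maps a category name to a list of image paths.
    A 70/30 split within each category mirrors the protocol reported
    for Logo-2K+ and the other benchmark datasets.
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for category, images in images_by_category.items():
        shuffled = images[:]          # copy so the input is untouched
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train[category] = shuffled[:cut]
        test[category] = shuffled[cut:]
    return train, test

# Illustrative usage with dummy file names (hypothetical data)
data = {"starbucks": [f"img_{i}.jpg" for i in range(10)]}
tr, te = split_per_category(data)
# 10 images per category -> 7 train, 3 test
```

Note there is no third subset here: as the report observes, the paper describes only a train/test split, with no separate validation set.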
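The training hyper-parameters quoted above (SGD with momentum 0.9, batch size 8, weight decay 0.0001, 100 epochs, learning rate 0.001 dropped to 0.0001 after 20 epochs) can be written out as a small schedule helper. This is a sketch of the reported numbers only; the function and constant names are ours, and the single-parameter update is a toy stand-in for a full PyTorch training loop.

```python
# Hyper-parameters as reported in the paper:
MOMENTUM = 0.9
BATCH_SIZE = 8
WEIGHT_DECAY = 1e-4
TOTAL_EPOCHS = 100

def learning_rate(epoch):
    """Step schedule: 1e-3 for epochs 0-19, then 1e-4 for the rest."""
    return 1e-3 if epoch < 20 else 1e-4

def sgd_step(w, grad, velocity, epoch):
    """Toy SGD-with-momentum update for one scalar parameter.

    Weight decay is folded into the gradient, as in a standard
    momentum-SGD formulation.
    """
    lr = learning_rate(epoch)
    velocity = MOMENTUM * velocity - lr * (grad + WEIGHT_DECAY * w)
    return w + velocity, velocity
```

In an actual PyTorch run these values would instead be passed to the optimizer and a step scheduler, but the schedule itself is fully captured by `learning_rate`.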