Webly-Supervised Fine-Grained Recognition with Partial Label Learning
Authors: Yu-Yan Xu, Yang Shen, Xiu-Shen Wei, Jian Yang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several webly-supervised fine-grained benchmark datasets show that our method obviously outperforms other existing state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology; 2 State Key Laboratory of Integrated Services Networks, Xidian University. {xuyy, shenyang98, weixs, csjyang}@njust.edu.cn |
| Pseudocode | No | The paper describes algorithms and methods in text and uses diagrams (e.g., Figure 1) but does not contain structured pseudocode or algorithm blocks with formal labels like "Algorithm" or "Pseudocode". |
| Open Source Code | No | The paper does not provide an unambiguous statement about releasing source code for the methodology described, nor does it include a direct link to a code repository. |
| Open Datasets | Yes | Web-Aircraft [Sun et al., 2020b], Web-Bird [Sun et al., 2020b], Web-Car [Sun et al., 2020b], WebiNat-5089 [Sun et al., 2021b] |
| Dataset Splits | No | The paper mentions training and test sets with specific sizes for several datasets. For WebiNat-5089, it states that "the validation set of iNat2017 [Horn et al., 2018] is utilized as the test set," meaning a pre-existing validation set was used for testing, not for model selection during training. For the other datasets, no explicit validation split (percentages, counts, or dedicated validation sets) for hyperparameter tuning or early stopping is provided. |
| Hardware Specification | Yes | We conduct all experiments on three GeForce RTX 3060 GPUs. |
| Software Dependencies | No | The paper mentions software/platforms such as "MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor" in the acknowledgements, but it does not provide specific version numbers for these or any other software libraries or dependencies used for the experiments. |
| Experiment Setup | Yes | We set the threshold δ = 0.2. For the sampling strategy of the top-k recall optimization loss, we set k = 5 and n = 4. For the number of binary classifiers in ECOC, we set L = 128. We adopt ResNet-50 [He et al., 2016] as our backbone. We use the stochastic gradient descent optimizer with the momentum set to 0.9. The batch size is 32 per GPU and the number of epochs is set to 110. The initial learning rate is 5 × 10⁻³ and the weight decay is 2 × 10⁻⁵. The warmup stage lasts for 10 epochs. (See the configuration sketch below the table.) |
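
The reported hyperparameters can be collected into a short training-setup sketch. This is a minimal illustration assuming a PyTorch-style pipeline with a torchvision ResNet-50; the model construction, the linear warmup schedule, and all variable names are assumptions for illustration, and only the numeric values (δ = 0.2, k = 5, n = 4, L = 128, momentum 0.9, batch size 32 per GPU, 110 epochs, learning rate 5 × 10⁻³, weight decay 2 × 10⁻⁵, 10 warmup epochs) come from the paper's stated setup.

```python
# Sketch of the reported training configuration (PyTorch-style, illustrative only).
import torch
from torchvision.models import resnet50

# Hyperparameter values reported in the paper
DELTA = 0.2          # threshold δ for candidate-label selection
TOP_K = 5            # k in the top-k recall optimization loss
N_SAMPLES = 4        # n in the sampling strategy
NUM_CODES = 128      # L, number of binary classifiers in ECOC
BATCH_PER_GPU = 32
EPOCHS = 110
WARMUP_EPOCHS = 10
BASE_LR = 5e-3
WEIGHT_DECAY = 2e-5

# ResNet-50 backbone (pre-training / head modifications are not shown here)
model = resnet50()

# SGD optimizer with the reported momentum and weight decay
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=BASE_LR,
    momentum=0.9,
    weight_decay=WEIGHT_DECAY,
)

def lr_at_epoch(epoch: int) -> float:
    """Learning rate for a given epoch.

    The paper only states that the warmup stage lasts 10 epochs; a linear
    warmup to the base learning rate is assumed here for illustration.
    """
    if epoch < WARMUP_EPOCHS:
        return BASE_LR * (epoch + 1) / WARMUP_EPOCHS
    return BASE_LR
```

With three GPUs and a per-GPU batch size of 32, the effective batch size would be 96; the decay schedule after warmup is not specified in the excerpt above, so it is left as the constant base rate in this sketch.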