Navigating Real-World Partial Label Learning: Unveiling Fine-Grained Images with Attributes

Authors: Haoran Jiang, Zhihao Sun, Yingjie Tian

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our approach on fine-grained partial label datasets. The proposed SoDisam framework not only addresses the challenges associated with fine-grained partial label learning but also provides a more realistic representation of real-world partial label scenarios. |
| Researcher Affiliation | Academia | 1. School of Mathematical Sciences, University of Chinese Academy of Sciences; 2. Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences; 3. Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences; 4. School of Computer Science and Technology, University of Chinese Academy of Sciences; 5. School of Economics and Management, University of Chinese Academy of Sciences; 6. MOE Social Science Laboratory of Digital Economic Forecasts and Policy Simulation at UCAS |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability. |
| Open Datasets | Yes | We choose three popular fine-grained datasets with attributes. CUB (Wah et al. 2011) consists of 11,788 images of 200 bird classes with 312 attributes, AWA2 (Lampert, Nickisch, and Harmeling 2009) contains 37,322 images of 50 animal classes with 85 attributes, and SUN (Xiao et al. 2010; Patterson and Hays 2012) consists of 108,754 images of 395 scene classes with 102 attributes. |
| Dataset Splits | No | The paper mentions training but does not explicitly describe training/validation/test splits; it states only that the datasets are manually corrupted into partially labeled versions (see the corruption sketch after the table). |
| Hardware Specification | Yes | All implementation is based on PyTorch (Paszke et al. 2019) and models are trained on an NVIDIA A100 GPU. |
| Software Dependencies | No | The paper states that all implementation is based on PyTorch (Paszke et al. 2019) and that an 18-layer ResNet (He et al. 2016) pretrained on ImageNet (Deng et al. 2009) is used as the backbone for all main experiments (see the backbone sketch after the table). However, no PyTorch version number is given, and no other software dependencies are listed with versions. |
| Experiment Setup | Yes | For SoDisam, the only hyperparameters τ, α, β are fixed as 0.25, 0.3, and 1, respectively; the PRODEN (Lv et al. 2020) loss is chosen as L(·) (see the PRODEN loss sketch after the table). |