Learning with Group Noise

Authors: Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han (pp. 10192-10200)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance on a range of real-world datasets across several learning paradigms demonstrates the effectiveness of Max-Matching. We conduct a range of experiments, and the results indicate that the proposed method achieves superior performance over baselines from three different learning paradigms with group noise (Figure 1).
Researcher Affiliation | Collaboration | Qizhou Wang1,2,*, Jiangchao Yao3,*, Chen Gong2,4, Tongliang Liu5, Mingming Gong6, Hongxia Yang3, Bo Han1. 1 Department of Computer Science, Hong Kong Baptist University; 2 Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of MoE, School of Computer Science and Engineering, Nanjing University of Science and Technology; 3 Data Analytics and Intelligence Lab, Alibaba Group; 4 Department of Computing, Hong Kong Polytechnic University; 5 Trustworthy Machine Learning Lab, School of Computer Science, Faculty of Engineering, The University of Sydney; 6 School of Mathematics and Statistics, The University of Melbourne
Pseudocode | No | The paper describes the proposed Max-Matching method in detail and illustrates its structure in Figure 2, but it does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for Max-Matching is publicly available.
Open Datasets | Yes | The experiments are conducted on an object localization dataset SIVAL (Rahmani et al. 2005) in the literature of MIL, as it provides instance-level annotations for evaluation. The experiments are conducted on five PLL datasets from various domains: FG-NET (Panis and Lanitis 2014) aims at facial age estimation; MSRCv2 (Liu and Dietterich 2012) and Bird Song (Briggs, Fern, and Raich 2012) focus on object classification; Yahoo! News (Guillaumin, Verbeek, and Schmid 2010) and Lost (Cour, Sapp, and Taskar 2011) deal with face naming tasks. The offline experiments are implemented on a range of datasets from Amazon: Video, Beauty, and Game.
Dataset Splits | Yes | Each dataset is then partitioned into 8:1:1 for training, validation, and test. Each dataset is partitioned randomly into 8:1:1 for training, validation, and test. For each user, we randomly take two subsets for validation and test, and the remaining data are used for training.
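The 8:1:1 random partition quoted above can be sketched as follows. This is a hypothetical illustration of such a split, not the paper's actual data pipeline; the helper name `split_811` and the seed handling are assumptions.

```python
import random

def split_811(items, seed=0):
    """Randomly partition items into 8:1:1 train/validation/test subsets.

    Hypothetical sketch of the split described in the paper; the exact
    shuffling procedure and seeding are assumptions.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_val = n // 10          # 1 part for validation
    n_test = n // 10         # 1 part for test
    n_train = n - n_val - n_test  # remaining ~8 parts for training
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_811(range(100))
# len(train) == 80, len(val) == 10, len(test) == 10
```

Fixing the shuffle seed makes the partition reproducible across runs, which matters when comparing methods on the same split.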
Hardware Specification | No | The paper describes the experimental settings and datasets used, but it does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper states 'Moreover, we implement Max-Matching using PyTorch, the Adam (Kingma and Ba 2015) is adopted with the learning rate selected from {10^-1, ..., 10^-4}, and the methods are run for 50 epochs.' However, it does not specify version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | Moreover, we implement Max-Matching using PyTorch, the Adam (Kingma and Ba 2015) is adopted with the learning rate selected from {10^-1, ..., 10^-4}, and the methods are run for 50 epochs.
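The reported setup (Adam, learning rate chosen from {10^-1, ..., 10^-4}, 50 epochs) can be sketched in PyTorch as below. The model, data, and loss here are placeholders, not the paper's Max-Matching objective; selecting the learning rate by validation loss is an assumption about how the grid was used.

```python
import torch
from torch import nn, optim

def train_one(lr, X, y, X_val, y_val, epochs=50):
    """Train a placeholder linear model with Adam at the given learning
    rate for 50 epochs and return the validation loss."""
    torch.manual_seed(0)
    model = nn.Linear(X.shape[1], 1)
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(X_val), y_val).item()

# Synthetic stand-in data; the paper's datasets are not reproduced here.
X, y = torch.randn(64, 4), torch.randn(64, 1)
X_val, y_val = torch.randn(16, 4), torch.randn(16, 1)

# Grid over the learning rates quoted from the paper.
results = {lr: train_one(lr, X, y, X_val, y_val)
           for lr in (1e-1, 1e-2, 1e-3, 1e-4)}
best_lr = min(results, key=results.get)
```

Without reported PyTorch versions or seeds, exact reproduction of the paper's numbers is not possible; a sketch like this only fixes the optimizer, grid, and epoch budget that the text does specify.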