Fast Multi-label Learning

Authors: Xiuwen Gong, Dong Yuan, Wei Bao

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive empirical studies corroborate our theoretical findings and demonstrate the superiority of the proposed methods.
Researcher Affiliation | Academia | Xiuwen Gong, Dong Yuan and Wei Bao, Faculty of Engineering, The University of Sydney, {xiuwen.gong, dong.yuan, wei.bao}@sydney.edu.au
Pseudocode | No | The paper describes mathematical formulations and processes but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about making its source code available or include links to code repositories for the methodology described.
Open Datasets | Yes | This section evaluates the performance of the proposed methods on four data sets: corel5k, nus(vlad), nus(bow) and rcv1x. The statistics of these data sets are presented in website1. We compare SS+GAU and SS+WH with several state-of-the-art methods, as follows. BR [Tsoumakas et al., 2010]: We implement two base classifiers for BR. The first uses the linear classification/regression package LIBLINEAR [Fan et al., 2008] with l2-regularized squared hinge loss as the base classifier. We simply call this baseline BR+LIB. The second uses kNN as the base classifier. We simply call this baseline BR+kNN and count the kNN search time as the training time. FastXML [Prabhu and Varma, 2014]: An advanced tree-based multi-label classifier. SLEEC [Bhatia et al., 2015]: A state-of-the-art embedding method, which is based on sparse local embeddings for large-scale multi-label classification.
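The BR+LIB baseline quoted above (Binary Relevance with an l2-regularized squared-hinge-loss linear classifier) can be sketched with scikit-learn, whose `LinearSVC` wraps LIBLINEAR and defaults to exactly this penalty and loss. This is a minimal illustration on synthetic data, not the paper's implementation or datasets.

```python
# Hedged sketch of the BR+LIB baseline: Binary Relevance trains one
# independent binary classifier per label; each base learner is a
# LIBLINEAR-backed linear SVM with l2 penalty and squared hinge loss.
# All data below is synthetic, for illustration only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                       # 300 samples, 50 features
Y = (X @ rng.normal(size=(50, 4)) > 0).astype(int)   # 4 binary labels

# Binary Relevance = one LinearSVC per label column.
br_lib = MultiOutputClassifier(LinearSVC(penalty="l2", loss="squared_hinge"))
br_lib.fit(X, Y)
pred = br_lib.predict(X[:2])
print(pred.shape)  # (2, 4): one binary prediction per label
```

`MultiOutputClassifier` is scikit-learn's generic Binary Relevance wrapper; swapping in a different base estimator yields the other BR variants the paper compares against.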
Dataset Splits | No | The paper mentions "training samples" and the "training process," but it does not specify any dataset splits for training, validation, or testing (e.g., percentages or sample counts for each split).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using "LIBLINEAR [Fan et al., 2008]" and "solvers of FastXML and SLEEC provided by the respective authors with default parameters," but it does not specify any version numbers for these software dependencies or any other libraries.
Experiment Setup | Yes | Following similar settings in [Zhang and Zhou, 2007] and [Bhatia et al., 2015], we set k = 10 for the kNN search in all kNN-based methods. The sketch size m is chosen from the range {64, 128, 256, 512, 1024}.
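The kNN setting quoted above (k = 10 for all kNN-based methods, including the BR+kNN baseline) can be sketched in the same way. This is a hypothetical scikit-learn illustration on synthetic data, not the authors' code.

```python
# Hedged sketch of the BR+kNN baseline with k = 10, as in the paper's
# experiment setup: Binary Relevance with a k-nearest-neighbour base
# classifier per label. Synthetic data, for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                 # 200 samples, 20 features
Y = (rng.random((200, 5)) < 0.3).astype(int)   # 5 sparse binary labels

# k = 10 neighbours, matching the setting reported in the paper.
br_knn = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=10))
br_knn.fit(X, Y)                               # the kNN "training" is index building
pred = br_knn.predict(X[:3])
print(pred.shape)  # (3, 5): one binary prediction per label
```

Note the paper counts the kNN search time as training time; in this sketch that cost is hidden inside `fit`/`predict`.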