Learning for Tail Label Data: A Label-Specific Feature Approach

Authors: Tong Wei, Wei-Wei Tu, Yu-Feng Li

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental comparisons and studies verify the effectiveness of the proposed method."
Researcher Affiliation | Collaboration | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; 4Paradigm Inc., Beijing, China.
Pseudocode | Yes | Algorithm 1 gives the pseudo-code of TAIL.
Open Source Code | No | The paper states, "All the data sets as well as the code of compared methods are publicly available," but this refers to external resources used for comparison, not to the code for the authors' proposed TAIL method.
Open Datasets | Yes | Experiments are conducted on four benchmark data sets with the number of labels ranging from 159 to 30K; Table 1 lists the detailed statistics. All data sets and the code of the compared methods are publicly available at http://manikvarma.org/downloads/XC/XMLRepository.html.
Dataset Splits | No | The paper mentions train/test splits and reports "Train N" and "Test M" statistics in Table 1, but it does not describe a separate validation split or how one was handled.
Hardware Specification | Yes | All experimental comparisons are conducted on the same PC with an Intel i5-6500 3.20 GHz CPU and 32 GB RAM.
Software Dependencies | No | The paper mentions using Liblinear but does not specify a version number for it or for any other software dependency.
Experiment Setup | Yes | "In all of our experiments, we fix the number of nearest neighbors considered to 5, i.e., k = 5. We set the ratio parameter γ during clustering to 0.1 following the setting in [Zhang and Wu, 2015]."
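To make the reported hyperparameters concrete, the following is a minimal sketch, not the authors' TAIL implementation (which is not publicly available): a plain k-nearest-neighbor query with the paper's k = 5 on synthetic data, plus the clustering ratio γ = 0.1 rendered as a derived cluster count. The γ·n reading of the ratio is an assumed interpretation for illustration only.

```python
import numpy as np

# Hypothetical illustration of the paper's reported settings; the TAIL
# code itself is not released, so this is not the authors' method.
K = 5          # number of nearest neighbors fixed in all experiments
GAMMA = 0.1    # ratio parameter used during clustering

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))   # 100 synthetic instances, 8 features

def k_nearest_neighbors(X, i, k=K):
    """Return indices of the k nearest neighbors of instance i (excluding i)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    dists[i] = np.inf               # exclude the query point itself
    return np.argsort(dists)[:k]

neighbors = k_nearest_neighbors(X, 0)
# Assumed interpretation: with gamma = 0.1 the number of clusters scales
# as gamma * n, i.e. 10 clusters for these 100 instances.
n_clusters = int(GAMMA * len(X))
print(len(neighbors), n_clusters)
```

Running the sketch prints the neighbor count (5) and the derived cluster count (10) for the synthetic sample.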