Deep Streaming Label Learning

Authors: Zhen Wang, Liu Liu, Dacheng Tao

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, extensive empirical results show that the proposed method performs significantly better than the existing state-of-the-art multi-label learning methods to handle the continually emerging new labels.
Researcher Affiliation | Academia | UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia. Correspondence to: Zhen Wang <zwan4121@uni.sydney.edu.au>, Liu Liu <liu.liu1@sydney.edu.au>.
Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper.
Open Source Code | No | The paper states: 'The codes of baseline methods are provided by the authors or scikit-multilearn (Szymański & Kajdanowicz, 2017).' This refers to code for the baseline methods, not for the proposed DSLL framework; no explicit statement or link providing access to the DSLL source code is found.
Open Datasets | Yes | We use five readily available multi-label benchmark datasets from different application domains, including three regular-scale datasets (Yeast, MirFlickr and Delicious) and two large-scale datasets (EURlex and Wiki10). The statistics of these five real-world datasets are summarized in Table 1. [Footnote links for the regular-scale datasets: http://mulan.sourceforge.net/ and https://github.com/chihkuanyeh/C2AE; for the large-scale datasets: http://manikvarma.org/downloads/XC/XMLRepository.html] (A hedged loading sketch follows the table.)
Dataset Splits | Yes | Table 1 ('Statistics of five real-world datasets') reports, for each dataset, the domain, #Training, #Testing, #Features, #Labels, #Card-Features, and #Card-Labels, giving explicit train/test counts (e.g., Yeast: 1,500 training and 917 testing instances). (A statistics sketch follows the table.)
Hardware Specification | Yes | All the computations are performed on a 64-bit Linux workstation with a 10-core Intel Core i7-6850K 3.60GHz processor, 256 GB memory, and 4 Nvidia GTX 1080 Ti GPUs.
Software Dependencies | No | The paper mentions scikit-multilearn (Szymański & Kajdanowicz, 2017) as providing code for the baseline methods, but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | No | The paper states, 'More detailed settings are provided in the Appendix B.' However, the main text does not give specific hyperparameter values (e.g., learning rate, training batch size, number of epochs, optimizer settings) or other concrete system-level training configurations.
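
The Yeast and Delicious benchmarks cited in the Open Datasets row are hosted in the linked MULAN repository and can be fetched with scikit-multilearn's bundled downloader. The sketch below is illustrative only and is not taken from the paper; the dataset and variant names ('yeast', 'train', 'test') are assumptions based on scikit-multilearn's public dataset catalogue.

    # Minimal sketch (not from the paper): fetch a MULAN multi-label benchmark
    # with scikit-multilearn's built-in dataset downloader.
    from skmultilearn.dataset import load_dataset

    # 'yeast' is one of the regular-scale benchmarks used in the paper; the
    # 'train'/'test' variants correspond to the official MULAN split.
    X_train, y_train, feature_names, label_names = load_dataset('yeast', 'train')
    X_test, y_test, _, _ = load_dataset('yeast', 'test')

    # Shapes should roughly match Table 1 (Yeast: 1,500 training, 917 testing).
    print(X_train.shape, y_train.shape)
    print(X_test.shape, y_test.shape)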
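
The '#Card-Labels' column in Table 1 is presumably the standard label-cardinality statistic (the average number of relevant labels per instance); that reading is an assumption, not stated in the excerpt above. A small sketch for reproducing such split statistics from the sparse matrices returned by the loader in the previous sketch:

    # Sketch (assumptions noted above): Table 1-style split statistics.
    import numpy as np
    from skmultilearn.dataset import load_dataset

    # Same loading step as in the previous sketch, repeated so this runs standalone.
    X_train, y_train, _, _ = load_dataset('yeast', 'train')
    X_test, y_test, _, _ = load_dataset('yeast', 'test')

    def split_statistics(X_train, y_train, X_test, y_test):
        """Instance/feature/label counts plus label cardinality for one dataset."""
        return {
            '#Training': X_train.shape[0],
            '#Testing': X_test.shape[0],
            '#Features': X_train.shape[1],
            '#Labels': y_train.shape[1],
            # Label cardinality: mean number of relevant labels per training instance.
            '#Card-Labels': float(np.asarray(y_train.sum(axis=1)).mean()),
        }

    print(split_statistics(X_train, y_train, X_test, y_test))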