Discover Multiple Novel Labels in Multi-Instance Multi-Label Learning

Authors: Yue Zhu, Kai Ming Ting, Zhi-Hua Zhou

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The effectiveness of the proposed approach is validated in experiments. ... Experimental results are presented before conclusion. ... Experiments on Toy Dataset ... Experiments on Real Datasets" |
| Researcher Affiliation | Academia | 1 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; 2 Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China; 3 Federation University, Victoria 3842, Australia |
| Pseudocode | Yes | Algorithm 1 (DMNL) |
| Open Source Code | No | The paper states that DMNL is implemented "in MATLAB" but provides no link or explicit statement about the availability of its source code. |
| Open Datasets | Yes | The datasets include the MSRCv2 image dataset (Winn, Criminisi, and Minka 2005), two letter datasets (Briggs, Fern, and Raich 2012), i.e., Letter Carroll and Letter Frost, and the MNIST handwritten digit dataset (LeCun et al. 1998). |
| Dataset Splits | No | "All parameters in each approach are tuned via 5-fold cross validation on the training set on the known labels in bag level, except the number of novel labels k in the baselines." |
| Hardware Specification | No | The paper gives no details about the hardware used to run the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions "DMNL (in MATLAB)" but does not give version numbers for MATLAB or any other software dependency. |
| Experiment Setup | No | The paper states that "All parameters in each approach are tuned via 5-fold cross validation on the training set on the known labels in bag level", but it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings. |
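The tuning protocol quoted above ("all parameters ... tuned via 5-fold cross validation on the training set") can be sketched generically. This is not the authors' MATLAB code; the model, parameter grid, and scoring function below are hypothetical placeholders, and only the 5-fold selection loop reflects what the paper describes.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def tune_parameter(X, y, grid, fit_score, k=5):
    """Return the grid value with the best mean validation score.

    `fit_score(X_trn, y_trn, X_val, y_val, value)` is a placeholder that
    trains a model with the candidate hyperparameter and returns a
    validation score (higher is better).
    """
    folds = kfold_indices(len(X), k)
    best_val, best_score = None, -np.inf
    for value in grid:
        scores = []
        for i in range(k):
            val_idx = folds[i]
            trn_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(fit_score(X[trn_idx], y[trn_idx],
                                    X[val_idx], y[val_idx], value))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_val, best_score = value, mean_score
    return best_val, best_score
```

The winning value would then be used to retrain on the full training set before evaluation, which is the usual reading of "tuned on the training set".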