Direct Discriminative Bag Mapping for Multi-Instance Learning
Authors: Jia Wu, Shirui Pan, Peng Zhang, Xingquan Zhu
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From the Experiments section: "We carry out experiments on three real-world learning tasks: (a) content-based image annotation... (b) text categorization with DBLP data set... and (c) train bound challenge... Table 1 shows the classification accuracy of all comparison algorithms..." |
| Researcher Affiliation | Academia | Jia Wu, Shirui Pan, Peng Zhang, Xingquan Zhu. Quantum Computation & Intelligent Systems Centre, University of Technology Sydney, Australia; Dept. of Computer & Electrical Engineering and Computer Science, Florida Atlantic University, USA. Emails: jia.wu@student.uts.edu.au; {shirui.pan, peng.zhang}@uts.edu.au; xzhu3@fau.edu |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide a link to, or statement about, open-source code for the described method. |
| Open Datasets | Yes | From the Experiments section: "We carry out experiments on three real-world learning tasks: (a) content-based image annotation with 100 positive (elephant images) and 100 negative example images (Foulds and Frank 2008); (b) text categorization with DBLP data set being used... (Wu et al. 2013); and (c) train bound challenge..." |
| Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits, percentages, or cross-validation setup for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions that the 'size of DIP is set to the same in Fu et al. (2011)' and that a kNN classifier is used, but it does not provide explicit hyperparameter values or detailed training configurations (e.g., the specific value of k for kNN, or the DIP size itself); a minimal illustrative sketch of the bag-mapping-plus-kNN pipeline follows this table. |
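
The paper's exact setup is not reproducible from the text alone, but the general bag-mapping idea it builds on (representing each bag by its similarities to a pool of selected instances, then training a standard single-instance classifier such as kNN on the mapped vectors) can be sketched as below. This is not the paper's method: the pool here is a random instance subset and the similarity is a Gaussian of Euclidean distance, both illustrative assumptions standing in for the paper's discriminative instance pool (DIP) criterion and its actual mapping function.

```python
# Minimal sketch of multi-instance bag mapping with an instance pool and kNN.
# Assumptions (not from the paper): random pool selection, Gaussian similarity,
# max-pooling over the instances of each bag.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def map_bag(bag, pool):
    """Map one bag (n_instances x d array) to a vector with one entry per
    pooled instance: the similarity of the closest instance in the bag."""
    # Pairwise squared Euclidean distances between bag instances and pool
    dists = ((bag[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
    # Gaussian similarity, then take the best-matching bag instance per pool entry
    return np.exp(-dists).max(axis=0)

def map_bags(bags, pool):
    """Stack the mapped vectors of all bags into a single-instance data matrix."""
    return np.vstack([map_bag(b, pool) for b in bags])

# Toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 8), 5)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

# Here the "pool" is a random subset of all instances; the paper instead
# selects pool instances by a discriminative criterion (DIP).
all_instances = np.vstack(bags)
pool = all_instances[rng.choice(len(all_instances), size=10, replace=False)]

X = map_bags(bags, pool)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.score(X, labels))
```

In the paper, the pool would instead be chosen to directly optimize the discriminativeness of the mapped bag representation; the sketch only shows where such a pool plugs into a mapping-then-classify pipeline, which is why the missing DIP size and k values noted above matter for reproduction.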