Multi-Instance Learning with Distribution Change

Authors: Wei-Jia Zhang, Zhi-Hua Zhou

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that MICS is almost always significantly better than many state-of-the-art multi-instance learning algorithms when distribution change occurs; even when there is no distribution change, its performance remains comparable.
Researcher Affiliation | Academia | Wei-Jia Zhang and Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {zhangwj,zhouzh}@lamda.nju.edu.cn
Pseudocode | No | The paper describes the approach mathematically and textually but does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not state whether source code for the described method is available, and provides no link.
Open Datasets | Yes | "First, we perform experiments on text data sets based on the 20 Newsgroups corpus popularly used in text categorization."
Dataset Splits | Yes | "During the experiments, the parameters are selected via 5-folds cross validation on the training data."
Hardware Specification | No | The paper does not describe the hardware (e.g., CPU, GPU, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., libraries, frameworks, or language versions) used for the implementation or experiments.
Experiment Setup | Yes | "During the experiments, the parameters are selected via 5-folds cross validation on the training data."
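The parameter-selection protocol quoted above (5-fold cross-validation on the training data) can be sketched as below. This is an illustrative sketch only, not the paper's implementation: the `five_fold_select` helper and the toy threshold classifier are assumed names invented here to show how a parameter would be chosen by mean validation score over 5 folds.

```python
import numpy as np

def five_fold_select(X, y, candidates, fit_score, seed=0):
    """Pick the candidate parameter with the best mean score over
    5 cross-validation folds of the training data (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)          # 5 roughly equal folds
    best_param, best_score = None, -np.inf
    for c in candidates:
        fold_scores = []
        for k in range(5):
            val = folds[k]                  # held-out validation fold
            trn = np.concatenate([folds[j] for j in range(5) if j != k])
            fold_scores.append(fit_score(X[trn], y[trn], X[val], y[val], c))
        mean = float(np.mean(fold_scores))
        if mean > best_score:
            best_param, best_score = c, mean
    return best_param, best_score

# Toy usage (not from the paper): pick a decision threshold t,
# classifying x > t as positive.
X = np.linspace(0.0, 1.0, 100)
y = (X > 0.5).astype(int)

def threshold_score(Xtr, ytr, Xva, yva, t):
    # "Training" is trivial here: the candidate t itself is the model;
    # we just report accuracy on the held-out fold.
    return float(np.mean((Xva > t).astype(int) == yva))

best_t, best_acc = five_fold_select(X, y, [0.3, 0.5, 0.7], threshold_score)
```

In the paper's setting the scored model would be the MIL algorithm under a candidate hyperparameter rather than a threshold rule, but the fold bookkeeping is the same: each candidate is evaluated on 5 held-out folds of the training set only, keeping the test set untouched.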