Instance-Level Label Propagation with Multi-Instance Learning

Authors: Qifan Wang, Gal Chechik, Chen Sun, Bin Shen

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | 'Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed approach over several state-of-the-art methods.' |
| Researcher Affiliation | Industry | 'Qifan Wang, Gal Chechik, Chen Sun and Bin Shen. Google Research, Mountain View, CA 94043, US. {wqfcr, gal, chensun, bshen}@google.com' |
| Pseudocode | Yes | 'Algorithm 1: Instance-Level Label Propagation with Multi-Instance Learning (ILLP)' |
| Open Source Code | No | The paper links to the code of the MISSL baseline ('The code is available from http://www.cs.cmu.edu/~juny/MILL/') but does not release, or provide a link to, code for the proposed ILLP method. |
| Open Datasets | Yes | 'The proposed ILLP approach is evaluated with three configurations of experiments on two benchmarks: an image dataset SIVAL and a text corpus Reuters (Reuters-21578).' SIVAL: http://www.cs.wustl.edu/~sg/multi-inst-data/; Reuters: http://www.daviddlewis.com/resources/testcollections/ |
| Dataset Splits | Yes | 'In each experiment, we randomly partition the examples in each category into two splits to form the labeled and unlabeled sets. The trade-off parameters α and β are tuned using five-fold cross-validation.' |
| Hardware Specification | No | The paper gives no hardware details (e.g., GPU/CPU models, memory) for the experiments. |
| Software Dependencies | No | The paper mentions 'tf-idf features' and refers to existing implementations for baseline parameters, but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, scikit-learn x.x). |
| Experiment Setup | Yes | 'The trade-off parameters α and β are tuned using five-fold cross-validation. We set the number of neighbors k to 8 to construct the k-NN graph.' |
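The setup row above mentions building a k-NN graph with k = 8 and propagating labels over it. As a rough illustration only (not the authors' code), the sketch below builds a symmetric 8-nearest-neighbour graph and runs a standard normalized label-propagation iteration; the function names `knn_graph` and `propagate`, the smoothing weight `alpha`, and the iteration count are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def knn_graph(X, k=8):
    """Symmetric k-NN adjacency matrix from row-vector features (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-edges
    idx = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours per node
    W = np.zeros_like(d)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)                 # symmetrize the graph

def propagate(W, y, labeled_mask, iters=50, alpha=0.9):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y on the normalized graph."""
    D = W.sum(axis=1)
    S = W / np.sqrt(D[:, None] * D[None, :])  # D^{-1/2} W D^{-1/2}
    Y = np.zeros((len(y), y.max() + 1))
    Y[labeled_mask, y[labeled_mask]] = 1.0    # clamp the labeled instances
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)                   # predicted label per instance
```

With two well-separated feature clusters and one labeled point per class, the unlabeled points inherit the label of their cluster's seed through the graph.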