A Gaussian Process-Bayesian Bernoulli Mixture Model for Multi-Label Active Learning
Authors: Weishi Shi, Dayou Yu, Qi Yu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on real-world multi-label datasets demonstrate the state-of-the-art AL performance of the proposed model." and "We conduct extensive experiments on both synthetic and real-world multi-label data to demonstrate: (1) important properties of GP-B2M to capture complex label correlations and how they contribute to predict complex labels, (2) state-of-the-art ML-AL performance by comparing with existing competitive models, (3) impact of key model parameters through an ablation study, and (4) effectiveness of active sampling by examining sampled data instances." |
| Researcher Affiliation | Academia | Weishi Shi, Dayou Yu, Qi Yu; Golisano College of Computing and Information Sciences, Rochester Institute of Technology ({ws7586,dy2507,qi.yu}@rit.edu) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are found in the paper. |
| Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]" |
| Open Datasets | Yes | "We choose five representative real-world multi-label datasets, including Delicious, Enron, Bibtex, Corel5K, and NUS-WIDE, from different application domains [26]." and "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]" |
| Dataset Splits | No | "We randomly shuffle each dataset and partition them into three parts: training, testing, and candidate pool." This describes a three-way partition but does not explicitly specify a validation set or the size/percentage of each split, which limits reproducibility. |
| Hardware Specification | No | The main text does not describe the hardware (e.g., GPU model, CPU type, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | The paper states that "Active learning stops after each model selects 500 samples" and discusses the tunable parameters η and ρ, but the main text provides neither concrete hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) nor an explanation of how these parameters were chosen. A minimal sketch of the stated setup appears after the table. |
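The two gaps flagged above (the unspecified split sizes and the 500-sample stopping rule) can be made concrete with a short sketch. The Python below shows a generic pool-based active-learning setup consistent with what the paper states; the split sizes, the `acquisition` method, and the model interface are illustrative assumptions, not the authors' implementation (GP-B2M's actual acquisition scores would replace the placeholder).

```python
import numpy as np

# Hedged sketch: the paper states only that each dataset is shuffled and
# partitioned into training, testing, and candidate pool, and that active
# learning stops after 500 selected samples. Everything else here
# (split sizes, model interface, acquisition function) is a placeholder.

def partition(X, Y, n_train, n_test, seed=0):
    """Shuffle and split into (train, test, candidate pool)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    train, test, pool = np.split(idx, [n_train, n_train + n_test])
    return (X[train], Y[train]), (X[test], Y[test]), (X[pool], Y[pool])

def active_learning_loop(model, train, pool, budget=500):
    """Generic pool-based AL loop that stops after `budget` selections."""
    (X_train, Y_train), (X_pool, Y_pool) = train, pool
    for _ in range(budget):
        model.fit(X_train, Y_train)
        # Placeholder acquisition: GP-B2M's sampling scores would go here.
        scores = model.acquisition(X_pool)
        i = int(np.argmax(scores))
        # Move the selected instance from the pool into the training set.
        X_train = np.vstack([X_train, X_pool[i:i + 1]])
        Y_train = np.vstack([Y_train, Y_pool[i:i + 1]])
        X_pool = np.delete(X_pool, i, axis=0)
        Y_pool = np.delete(Y_pool, i, axis=0)
    return model, (X_train, Y_train)
```

Even under this reading, reproducing the paper's results would still require the authors' actual split sizes and acquisition function, which is precisely what the table flags as missing from the main text.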