Message Passing Adaptive Resonance Theory for Online Active Semi-supervised Learning
Authors: Taehyeong Kim, Injune Hwang, Hyundo Lee, Hyunseo Kim, Won-Seok Choi, Joseph J. Lim, Byoung-Tak Zhang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model in stream-based selective sampling scenarios with comparable query selection strategies, showing that MPART significantly outperforms competitive models. (A minimal sketch of this evaluation loop appears after the table.) |
| Researcher Affiliation | Collaboration | AI Lab, CTO Division, LG Electronics, Seoul, Republic of Korea; Seoul National University, Seoul, Republic of Korea; University of Southern California, California, USA. |
| Pseudocode | Yes | Algorithm 1 The MPART algorithm |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code availability for the described methodology. |
| Open Datasets | Yes | For experiments, we used four kinds of datasets with different distributions: Mouse retina transcriptomes (Macosko et al., 2015; Poličar et al., 2019), Fashion MNIST (Xiao et al., 2017), EMNIST Letters (Cohen et al., 2017), and CIFAR-10 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions training data and a hold-out test set but does not specify a distinct validation split or its details for the main model training. The 30% of training data used to train Parametric UMAP serves feature extraction, not model validation in the main experiment. |
| Hardware Specification | No | The paper mentions "a 3.8 GHz CPU machine" but does not specify the CPU model, GPU, or other detailed hardware specifications. |
| Software Dependencies | No | The paper mentions "Python implementation" but does not specify exact version numbers for Python or any specific libraries (e.g., PyTorch, TensorFlow, scikit-learn) used. |
| Experiment Setup | Yes | The propagation rate δ for message passing was set to 0.1, and the parameters k_e, τ, and k_d used for the score calculation were set to 1.0, 0.7, and 0.01, respectively. For other parameter settings, please refer to the Appendix. (See the parameter sketch after the table.) |
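The stream-based selective sampling protocol cited in the Research Type row can be sketched as the loop below. This is a generic illustration of the protocol, not the authors' code: the interface (`predict`, `query_score`, `update`) and the `oracle`/`budget`/`threshold` names are all assumptions; the paper's Algorithm 1 defines MPART's actual procedure.

```python
# Hypothetical sketch of stream-based selective sampling: instances arrive
# one at a time, the learner decides whether to spend a query on a label,
# and it updates online either way (semi-supervised when unlabeled).
def run_stream(model, stream, oracle, budget, threshold=0.5):
    queries_used = 0
    for x in stream:
        y_pred = model.predict(x)        # online prediction on the incoming instance
        score = model.query_score(x)     # informativeness of querying this instance
        if queries_used < budget and score > threshold:
            y = oracle(x)                # request the true label (costs one query)
            queries_used += 1
        else:
            y = None                     # proceed without a label
        model.update(x, y)               # online (semi-)supervised update
    return model
```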
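For reference, the hyperparameters quoted in the Experiment Setup row map onto a configuration like the following minimal sketch. Only the values (0.1, 1.0, 0.7, 0.01) come from the paper; the dataclass and field names are illustrative assumptions chosen to mirror the paper's notation.

```python
# Hypothetical configuration object; not the authors' implementation.
from dataclasses import dataclass

@dataclass
class MPARTConfig:
    delta: float = 0.1   # propagation rate for message passing (paper: δ = 0.1)
    k_e: float = 1.0     # score-calculation parameter (paper: k_e = 1.0)
    tau: float = 0.7     # score-calculation parameter (paper: τ = 0.7)
    k_d: float = 0.01    # score-calculation parameter (paper: k_d = 0.01)

config = MPARTConfig()
```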