Dealing with Multiple Classes in Online Class Imbalance Learning
Authors: Shuo Wang, Leandro L. Minku, Xin Yao
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Then, we look into the impact of multi-minority and multi-majority cases on MOOB and MUOB in comparison to other methods under stationary and dynamic scenarios. Both multi-minority and multi-majority cases have a negative impact. MOOB shows the best and most stable G-mean in most stationary and dynamic cases. |
| Researcher Affiliation | Academia | Shuo Wang (University of Birmingham, UK, s.wang@cs.bham.ac.uk); Leandro L. Minku (University of Leicester, UK, leandro.minku@leicester.ac.uk); Xin Yao (University of Birmingham, UK, x.yao@cs.bham.ac.uk) |
| Pseudocode | Yes | Table 1: MOOB and MUOB Training Procedures. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is open or publicly available. |
| Open Datasets | Yes | online chess game data [Žliobaitė, 2011] and the UDI Twitter Crawl data [Li et al., 2012]. |
| Dataset Splits | Yes | We use the first 1% of data (i.e. 50 examples) as the initialisation and validation data. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a "multilayer perceptron (MLP)" as a base classifier but does not specify any software names with version numbers for implementation or dependencies. |
| Experiment Setup | Yes | We set the number of base classifiers to 11; an odd number is chosen to avoid tied majority votes among the base classifiers. |
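To make the setup concrete, below is a minimal sketch of oversampling-based online bagging in the spirit of MOOB, with the paper's 11 base classifiers and odd-number majority voting. The toy base learner, the class names, and the exact sampling rate (largest class proportion divided by this class's proportion) are illustrative assumptions, not the authors' implementation; the paper uses an MLP base classifier.

```python
import math
import random
from collections import Counter

class MajorityCounter:
    """Toy incremental base learner: predicts the class it has trained on most.
    (Stand-in for the paper's MLP base classifier.)"""
    def __init__(self):
        self.counts = Counter()
    def partial_fit(self, x, y):
        self.counts[y] += 1
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

class OversamplingOnlineBagging:
    """Sketch of oversampling-based online bagging (MOOB-style).

    Each base learner trains on each incoming example K ~ Poisson(lam) times,
    where lam = (largest class count so far) / (this class's count so far),
    so minority-class examples are replicated more often.  The sampling-rate
    formula here is an assumption for illustration.
    """
    def __init__(self, n_estimators=11, seed=0):
        # An odd ensemble size (11 in the paper) avoids tied majority votes.
        self.learners = [MajorityCounter() for _ in range(n_estimators)]
        self.class_counts = Counter()
        self.rng = random.Random(seed)

    def _poisson(self, lam):
        # Knuth's inversion-by-multiplication sampler (fine at sketch scale).
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= self.rng.random()
            if p <= L:
                return k
            k += 1

    def partial_fit(self, x, y):
        self.class_counts[y] += 1
        lam = max(self.class_counts.values()) / self.class_counts[y]
        for learner in self.learners:
            for _ in range(self._poisson(lam)):
                learner.partial_fit(x, y)

    def predict(self, x):
        # Plain majority vote over the base learners.
        votes = Counter(learner.predict(x) for learner in self.learners)
        return votes.most_common(1)[0][0]
```

MUOB would differ only in the sampling direction: undersampling the majority classes (lam < 1) rather than replicating the minority ones.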