Interpretable Minority Synthesis for Imbalanced Classification
Authors: Yi He, Fudong Lin, Xu Yuan, Nian-Feng Tzeng
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies substantiate that our approach can empower simple classifiers to achieve superior imbalanced classification performance over the state-of-the-art competitors and is robust across various imbalance settings. |
| Researcher Affiliation | Academia | Yi He, Fudong Lin, Xu Yuan and Nian-Feng Tzeng, University of Louisiana at Lafayette, {yi.he1, fudong.lin1, xu.yuan, nianfeng.tzeng}@louisiana.edu |
| Pseudocode | No | The paper describes its approach in section 3, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code is released in github.com/fudonglin/IMSIC. |
| Open Datasets | Yes | We benchmark our experiments on two widely used image sets, namely, MNIST [LeCun et al., 2010] and Fashion-MNIST [Xiao et al., 2017]. An illustrative sketch of constructing an imbalanced subset from these datasets follows the table. |
| Dataset Splits | Yes | We perform a 10-fold cross-validation to eliminate the randomization bias and record the averaged results and the corresponding statistics. A minimal cross-validation sketch is given after the table. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions feeding datasets 'to three CNNs with an identical architecture', but it does not specify any software names with version numbers, such as specific deep learning frameworks or libraries. |
| Experiment Setup | No | The paper does not provide specific details about the experimental setup, such as hyperparameter values (e.g., learning rate, batch size) or optimizer settings. |
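
The paper evaluates on MNIST and Fashion-MNIST under various imbalance settings, but the table above notes that the exact experimental setup is not reported. The sketch below shows one common way to induce a long-tailed class distribution from Fashion-MNIST via torchvision; the `imbalance_ratio`, the per-class exponential decay schedule, and the random seed are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from torchvision import datasets

# Load Fashion-MNIST (swap in datasets.MNIST for the other benchmark).
train = datasets.FashionMNIST(root="./data", train=True, download=True)
labels = np.array(train.targets)

# Hypothetical imbalance setting: keep all samples of class 0 and
# exponentially fewer samples of each subsequent class.
imbalance_ratio = 100     # assumed majority-to-minority ratio
num_classes = 10
max_per_class = 6000      # Fashion-MNIST has 6,000 training images per class

rng = np.random.default_rng(seed=0)
keep_indices = []
for c in range(num_classes):
    # Decay from max_per_class down to max_per_class / imbalance_ratio.
    n_keep = int(max_per_class * (1 / imbalance_ratio) ** (c / (num_classes - 1)))
    class_idx = np.where(labels == c)[0]
    keep_indices.append(rng.choice(class_idx, size=n_keep, replace=False))

keep_indices = np.concatenate(keep_indices)
print("Imbalanced training set size:", len(keep_indices))
```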
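The paper reports results averaged over 10-fold cross-validation. Since the classifier and metric configuration are not specified, the sketch below uses stand-ins (scikit-learn's digits dataset, logistic regression, and macro-F1) purely to illustrate the 10-fold averaging protocol; none of these choices are taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

# Small stand-in dataset; the paper uses MNIST / Fashion-MNIST.
X, y = load_digits(return_X_y=True)

scores = []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

# Average over the 10 folds, with a simple spread statistic.
print(f"macro-F1: {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```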