Optimal Margin Distribution Learning in Dynamic Environments
Authors: Teng Zhang, Peng Zhao, Hai Jin
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real data sets demonstrate the superiority of our method. In this section, we empirically evaluate the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | 1. National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China; 2. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China |
| Pseudocode | Yes | Algorithm 1 summarizes the pseudo-code of the update for the i-th restarted online ODM. Algorithm 2 summarizes the pseudocode of the whole algorithm. |
| Open Source Code | No | The paper does not provide any statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We adopt 12 synthetic data sets collected in dynamic environments, including sea, hyperplane, 1CDT, 2CDT, 1CHT, 2CHT, 1CSurr, UG-2C-2D, UG-2C-3D, UG-2C-5D, MG-2C-2D, and GEARS-2C-2D. Basic information is included in Table 1, and one may refer to (de Souza et al. 2015) for more details. Besides, to validate the efficacy of our proposed method in real applications, we further examine performance on 8 real data sets, including chess, usenet-1, usenet-2, Luxembourg, spam, weather, powersupply, and electricity. |
| Dataset Splits | No | The paper describes an evaluation protocol where 'for the data set with T instances, we select 10 different subsets with consecutive instances starting from {T/50, T/25, . . . , T/5}. All these subsets are with the same length 4T/5.' However, it does not specify explicit training/validation splits needed for model training. A sketch of this subset-selection protocol is given below the table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers (e.g., programming languages, libraries, frameworks) used in the experiments. |
| Experiment Setup | No | The paper describes the data selection and evaluation protocol in the 'Settings' section, but it does not provide concrete hyperparameter values or detailed system-level training settings for the empirical studies. |
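The subset-selection protocol quoted in the Dataset Splits row is concrete enough to illustrate. The following is a minimal Python sketch of one plausible reading of it, assuming a labelled stream `(X, y)` of T instances; the function name `consecutive_subsets` and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def consecutive_subsets(X, y):
    """Return 10 subsets of consecutive instances, as described in the paper's
    evaluation protocol (assumed interpretation): for a stream of T instances,
    the start indices are T/50, 2T/50, ..., 10T/50 = T/5, and every subset has
    the same length 4T/5."""
    T = len(X)
    length = (4 * T) // 5
    starts = [(k * T) // 50 for k in range(1, 11)]  # T/50, T/25, ..., T/5
    return [(X[s:s + length], y[s:s + length]) for s in starts]

# Toy usage on a synthetic binary stream (illustrative only).
if __name__ == "__main__":
    T = 1000
    X = np.random.randn(T, 2)          # 2-D features
    y = np.sign(np.random.randn(T))    # random +/-1 labels
    subsets = consecutive_subsets(X, y)
    print(len(subsets), [len(Xs) for Xs, _ in subsets][:3])  # 10 subsets of length 4T/5
```

With T = 1000 the start indices are 20, 40, ..., 200 and each subset spans 800 consecutive instances, so the last subset ends exactly at the end of the stream; this is consistent with the quoted description but remains an assumption about how the authors indexed the subsets.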