Scalable Optimal Margin Distribution Machine
Authors: Yilin Wang, Nan Cao, Teng Zhang, Xuanhua Shi, Hai Jin
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the proposed algorithms by comparing them with other SOTA scalable QP solvers. All the experiments are performed on eight real-world data sets. The statistics of these data sets are summarized in Table 1. |
| Researcher Affiliation | Academia | Yilin Wang, Nan Cao, Teng Zhang, Xuanhua Shi and Hai Jin. National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, China. {yilin_wang, nan_cao, tengzhang, xhshi, hjin}@hust.edu.cn |
| Pseudocode | Yes | Algorithm 1 summarizes the pseudo-code of SODM. Algorithm 2 summarizes the process of DSVRG for SODM. |
| Open Source Code | Yes | Our implementation is available on GitHub: https://github.com/CGCL-codes/SODM |
| Open Datasets | Yes | All the experiments are performed on eight real-world data sets. The statistics of these data sets are summarized in Table 1. Data sets: gisette, svmguide1, phishing, a7a, cod-rna, ijcnn1, skin-nonskin, SUSY |
| Dataset Splits | No | For each data set, eighty percent of instances are randomly selected as training data, while the rest are testing data. The paper does not explicitly state a validation split. |
| Hardware Specification | Yes | All the experiments are performed on a Spark [Zaharia et al., 2012] cluster with one master and five workers. Each machine is equipped with 16 Intel Xeon E5-2670 CPU cores and 64GB RAM. |
| Software Dependencies | No | The paper mentions using a Spark cluster and its own implementation available on Github, but does not specify particular software dependencies with version numbers (e.g., libraries, frameworks, or languages). |
| Experiment Setup | No | The paper mentions the hyperparameters λ, υ, and θ in the problem formulation but does not report their specific values, nor other training settings such as learning rate, batch size, or number of epochs, in the experimental setup. |
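The Dataset Splits row above states that eighty percent of instances are randomly selected as training data and the rest used for testing, with no validation split mentioned. A minimal sketch of such a split, assuming uniform random sampling without stratification (the paper does not specify the sampling scheme or a seed):

```python
import random

def train_test_split(instances, train_fraction=0.8, seed=0):
    """Randomly assign `train_fraction` of instances to the training set;
    the remainder form the test set. `seed` is a hypothetical parameter
    added here for reproducibility; the paper states no seed."""
    rng = random.Random(seed)
    indices = list(range(len(instances)))
    rng.shuffle(indices)
    cut = int(train_fraction * len(instances))
    train = [instances[i] for i in indices[:cut]]
    test = [instances[i] for i in indices[cut:]]
    return train, test

# Toy stand-in for one of the eight data sets (e.g. svmguide1).
data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))  # → 80 20
```

This is a sketch of the stated 80/20 protocol only; the released implementation at https://github.com/CGCL-codes/SODM may split the data differently.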