Growing Adaptive Multi-hyperplane Machines

Authors: Nemanja Djuric, Zhuang Wang, Slobodan Vucetic

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run experiments on data of varying characteristics to measure accuracy of the proposed method, and to estimate its robustness to noise. We evaluated the algorithms on 5 real-world data sets of very different sizes, dimensions, and complexities.
Researcher Affiliation | Collaboration | Nemanja Djuric (1), Zhuang Wang (2), Slobodan Vucetic (3); (1) Uber ATG, Pittsburgh, PA, USA; (2) Facebook, Menlo Park, CA, USA; (3) Temple University, Philadelphia, PA, USA.
Pseudocode | Yes | Algorithm 1: Training algorithm for GAMM (a generic training-step sketch follows the table).
Open Source Code | Yes | The GAMM implementation is available for download at https://github.com/djurikom/BudgetedSVM.
Open Datasets | Yes | We evaluated the algorithms on 5 real-world data sets of very different sizes, dimensions, and complexities. The datasets include usps, letter, ijcnn1, rcv1, and mnist. A footnote links to their source: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, last accessed June 2020 (a loading sketch follows the table).
Dataset Splits | No | The paper mentions 'We created 15,000 training and 5,000 test examples' for synthetic data and setting 'λ through cross-validation' for parameter tuning, but does not explicitly state a separate validation split with specific percentages or counts.
Hardware Specification | Yes | To better illustrate scalability we evaluated the algorithms on the lower-end Intel(R) E7400 with 2.80 GHz processor and 4 GB RAM.
Software Dependencies | No | The paper mentions using 'scikit-learn implementations' for some baseline models, but does not provide specific version numbers for scikit-learn or any other software dependencies. It mentions C++ as the implementation language for GAMM, and names other tools/libraries without versions.
Experiment Setup | Yes | We set parameters to their default values, c = 10 for AMM and c = 50 for GAMM (due to more frequent introduction of new weights), p = 0.2, β = 0.99, set λ through cross-validation, and trained the models for 15 epochs (an illustrative λ cross-validation sketch follows the table).
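
On the Pseudocode row: the paper's Algorithm 1 specifies GAMM training and is not reproduced here. Below is a minimal sketch of a generic AMM-style multi-hyperplane SGD step under stated assumptions (each class holds a list of weight vectors, a Pegasos-style learning rate, and a probabilistic growing rule that stands in for GAMM's mechanism); it is not the authors' implementation.

```python
# Minimal sketch of a generic AMM-style update; the probabilistic growing rule is a
# simplified stand-in for GAMM's mechanism, not the authors' Algorithm 1.
import numpy as np

def amm_style_step(weights, x, y, t, lam=1e-4, p_new=0.2, rng=None):
    """One online update. `weights` maps class label -> list of weight vectors."""
    rng = rng or np.random.default_rng()
    eta = 1.0 / (lam * (t + 1))  # Pegasos-style decaying learning rate
    # Strongest hyperplane of the true class.
    i_true = int(np.argmax([w @ x for w in weights[y]]))
    s_true = weights[y][i_true] @ x
    # Strongest hyperplane among the competing classes.
    c_other, i_other, s_other = None, None, -np.inf
    for c, ws in weights.items():
        if c == y:
            continue
        j = int(np.argmax([w @ x for w in ws]))
        if ws[j] @ x > s_other:
            c_other, i_other, s_other = c, j, ws[j] @ x
    # L2 shrinkage of every weight vector.
    for ws in weights.values():
        for w in ws:
            w *= 1.0 - eta * lam
    # On a margin violation, push the true class up and the violator down,
    # occasionally growing a fresh zero hyperplane for the true class.
    if s_true - s_other < 1.0:
        if rng.random() < p_new:
            weights[y].append(np.zeros_like(x, dtype=float))
            i_true = len(weights[y]) - 1
        weights[y][i_true] += eta * x
        weights[c_other][i_other] -= eta * x
    return weights
```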
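
On the Open Datasets row: all five benchmarks are distributed in LIBSVM format at the linked page, so they can be read with scikit-learn's load_svmlight_file. A small sketch follows; the file names on disk are an assumption based on the LIBSVM repository naming.

```python
# Sketch: reading a LIBSVM-format benchmark (here usps) with scikit-learn.
# The file names "usps" and "usps.t" are assumptions based on the LIBSVM repository naming.
from sklearn.datasets import load_svmlight_file

X_train, y_train = load_svmlight_file("usps")    # training split (sparse matrix, labels)
X_test, y_test = load_svmlight_file("usps.t", n_features=X_train.shape[1])  # test split
print(X_train.shape, len(set(y_train)))          # (examples, features), number of classes
```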
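
On the Experiment Setup row: c, p, and β are GAMM-specific, but the cross-validated choice of the regularization parameter λ can be illustrated generically. This is only a sketch: scikit-learn's SGDClassifier stands in for the actual model, and the grid and fold count are assumptions the paper does not specify.

```python
# Illustrative lambda selection by cross-validation; SGDClassifier is only a stand-in,
# not the authors' GAMM trainer, and the grid/fold choices below are assumptions.
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {"alpha": [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]}   # alpha plays the role of lambda
model = SGDClassifier(loss="hinge", max_iter=15)          # 15 passes, mirroring the 15 epochs
search = GridSearchCV(model, param_grid, cv=5)            # 5-fold CV is an assumption
search.fit(X_train, y_train)                              # data loaded in the sketch above
print(search.best_params_)
```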