Differential Geometric Regularization for Supervised Learning of Classifiers

Authors: Qinxun Bai, Steven Rosenberg, Zheng Wu, Stan Sclaroff

ICML 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification." and, from Section 4 (Experiments), "To evaluate the effectiveness of the proposed regularization approach, we compare our RBF-based implementation with two groups of related classification methods." |
| Researcher Affiliation | Collaboration | Qinxun Bai (QINXUN@CS.BU.EDU), Department of Computer Science, Boston University, Boston, MA 02215, USA; Steven Rosenberg (SR@MATH.BU.EDU), Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA; Zheng Wu (WUZHENG@BU.EDU), The MathWorks, Inc., Natick, MA 01760, USA; Stan Sclaroff (SCLAROFF@BU.EDU), Department of Computer Science, Boston University, Boston, MA 02215, USA |
| Pseudocode | Yes | "Algorithm 1: Geometric regularized classification" (a hedged code sketch follows the table) |
| Open Source Code | No | The paper points to publicly available implementations of the comparison methods (SVM and IVM) but does not state that code for its own method is released. |
| Open Datasets | Yes | "on datasets from the UCI repository and real-world datasets including the Flickr Material Database (FMD) and the MNIST Database of handwritten digits" and "We tested our classification method on four binary classification datasets and four multiclass classification datasets... using the same datasets as (Varshney & Willsky, 2010)" and "The MNIST dataset contains 10 classes (0-9) of handwritten digits with 60,000 samples for training and 10,000 samples for testing." |
| Dataset Splits | Yes | "Tenfold cross-validation error is reported. For each of the ten folds, the kernel-width constant c and tradeoff parameter λ are found using fivefold cross-validation on the training folds." (the nested protocol is sketched in code after the table) |
| Hardware Specification | No | The paper does not report hardware details such as CPU/GPU models, memory, or the computing environment used for the experiments. |
| Software Dependencies | No | The paper mentions LIBSVM, IVM, and the VLFeat library, but gives no version numbers for any of these dependencies. |
| Experiment Setup | Yes | "All dimensions of input sample points are normalized to a fixed range [0, 1] throughout the experiments. We select c from the set of values {1/2^5, 1/2^4, 1/2^3, 1/2^2, 1/2, 1, 2, 4, 8} and λ from the set of values {1/1.5^4, 1/1.5^3, 1/1.5^2, 1/1.5, 1, 1.5} that minimizes the fivefold cross-validation error. The step-size τ = 0.1 and iteration number T = 5 are fixed over all datasets. We used the same settings for both loss functions." and "We use 1000 RBFs to represent our function f, with RBF centers obtained by applying K-means clustering on the training set." (this setup is reconstructed in code after the table) |
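
Since only pseudocode is given in the paper, below is a minimal sketch of what "Algorithm 1: Geometric regularized classification" might look like, assuming an RBF expansion of the classifier f and a Monte Carlo estimate of the volume-of-graph penalty at the training points. The function names (make_rbf, geometric_penalty, fit), the cross-entropy loss, and the reliance on JAX autodiff are our assumptions, not the authors' code; the paper derives its gradient flow analytically.

```python
# Hedged sketch of geometric regularized classification; names are ours.
import jax
import jax.numpy as jnp

def make_rbf(centers, sigma):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))
    def phi(x):
        d2 = jnp.sum((centers - x) ** 2, axis=1)
        return jnp.exp(-d2 / (2.0 * sigma ** 2))
    return phi

def predict(W, phi, x):
    # f : R^n -> R^K, one score per class, linear in the RBF features
    return W @ phi(x)

def geometric_penalty(W, phi, X):
    # Volume of the graph of f, estimated at the training points:
    # vol ~ mean_x sqrt(det(I_n + J(x)^T J(x))), J = Jacobian of f at x.
    n = X.shape[1]
    def vol_at(x):
        J = jax.jacobian(lambda z: predict(W, phi, z))(x)  # shape (K, n)
        return jnp.sqrt(jnp.linalg.det(jnp.eye(n) + J.T @ J))
    return jnp.mean(jax.vmap(vol_at)(X))

def objective(W, phi, X, Y, lam):
    # Cross-entropy loss (one standard choice) plus the geometric term.
    logits = jax.vmap(lambda x: predict(W, phi, x))(X)
    logp = jax.nn.log_softmax(logits)
    loss = -jnp.mean(jnp.take_along_axis(logp, Y[:, None], axis=1))
    return loss + lam * geometric_penalty(W, phi, X)

def fit(X, Y, centers, sigma, lam, tau=0.1, T=5, n_classes=2):
    # Gradient flow on the regularized objective: step size tau, T steps.
    phi = make_rbf(centers, sigma)
    W = jnp.zeros((n_classes, centers.shape[0]))
    grad = jax.grad(objective)
    for _ in range(T):
        W = W - tau * grad(W, phi, X, Y, lam)
    return W
```

The defaults tau=0.1 and T=5 mirror the settings reported in the Experiment Setup row; everything else in the sketch is a plausible reading of the pseudocode, not the released implementation (there is none).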
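The split protocol in the Dataset Splits row is standard nested cross-validation: tenfold on the outside for the reported error, fivefold on each outer training fold to choose c and λ. The sketch below shows that nesting with scikit-learn, using an RBF-kernel SVC purely as a stand-in for the paper's (unreleased) classifier; the grids are abbreviated here, with the full reported grids in the next sketch.

```python
# Nested CV sketch: tenfold outer error, fivefold inner model selection.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # scale to [0, 1]

param_grid = {
    "gamma": [0.25, 1.0, 4.0],  # abbreviated; full c grid in the next sketch
    "C": [0.5, 1.0, 1.5],       # SVC's C stands in (inversely) for lambda
}
inner = GridSearchCV(SVC(kernel="rbf"), param_grid,
                     cv=KFold(5, shuffle=True, random_state=0))
scores = cross_val_score(inner, X, y, cv=KFold(10, shuffle=True, random_state=0))
print("tenfold CV error: %.3f" % (1.0 - scores.mean()))
```

Only the nesting is the point: hyperparameters are never selected on the fold being scored, which matches the protocol the paper describes.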
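The Experiment Setup row fixes three reproducible ingredients: [0, 1] feature scaling, the two hyperparameter grids, and 1000 RBF centers obtained by K-means on the training set. A sketch under those assumptions (the toy stand-in data and the seeds are ours):

```python
# Reconstructing the stated setup: grids and K-means RBF centers.
import numpy as np
from sklearn.cluster import KMeans

c_grid = [2.0 ** -k for k in range(5, 0, -1)] + [1.0, 2.0, 4.0, 8.0]
# -> {1/2^5, 1/2^4, 1/2^3, 1/2^2, 1/2, 1, 2, 4, 8}
lam_grid = [1.5 ** -k for k in range(4, 0, -1)] + [1.0, 1.5]
# -> {1/1.5^4, 1/1.5^3, 1/1.5^2, 1/1.5, 1, 1.5}

def rbf_centers(X_train, n_centers=1000, seed=0):
    # K-means centroids serve as the RBF centers.
    km = KMeans(n_clusters=min(n_centers, len(X_train)),
                n_init=10, random_state=seed)
    km.fit(X_train)
    return km.cluster_centers_

X_train = np.random.default_rng(0).random((5000, 10))  # toy data in [0, 1]
centers = rbf_centers(X_train)
print(len(c_grid), len(lam_grid), centers.shape)
```

In the paper, c scales the RBF kernel width and λ weights the geometric penalty against the loss; the grids above are exactly the reported value sets, with the garbled superscripts in the extracted text restored.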