Solving Interpretable Kernel Dimensionality Reduction

Authors: Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experiments. The experiment includes 5 real datasets of commonly encountered data types. Wine [33] consists of continuous data while the Cancer dataset [34] features are discrete. The Face dataset [35] is a standard dataset used for alternative clustering; it includes images of 20 people in various poses. The MNIST [36] dataset includes images of handwritten characters.
Researcher Affiliation | Academia | Electrical and Computer Engineering Dept., Northeastern University, Boston, MA
Pseudocode | Yes | Algorithm 1, ISM Algorithm. Input: data X, kernel, subspace dimension q. Output: projected subspace W. (A hedged Python sketch of this interface is given after the table.)
Open Source Code | Yes | To support reproducible results, the source code is made publicly available on https://github.com/chieh-neu/ISM_supervised_DR.
Open Datasets | Yes | Wine [33] consists of continuous data while the Cancer dataset [34] features are discrete. The Face dataset [35] is a standard dataset used for alternative clustering; it includes images of 20 people in various poses. The MNIST [36] dataset includes images of handwritten characters.
Dataset Splits | Yes | For supervised dimension reduction, we perform SVM on XW using 10-fold cross validation. (A minimal cross-validation sketch is given after the table.)
Hardware Specification | Yes | All experiments were conducted on Dual Intel Xeon E5-2680 v2 @ 2.80GHz, with 20 total cores.
Software Dependencies | No | All sources are written in Python using Numpy and Sklearn [41; 42]. Specific version numbers for Python, Numpy, or Sklearn are not provided.
Experiment Setup | Yes | The median of the pair-wise Euclidean distance is used as σ for all experiments using the Gaussian kernel. Degree of 3 is used for all polynomial kernels. The dimension of subspace q is set to the number of classes/clusters. The convergence threshold δ is set to 0.01. (A sketch of the median-σ heuristic is given after the table.)
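
The Pseudocode row only records Algorithm 1's interface: data X, a kernel, and a target dimension q go in, and a projected subspace W comes out. The sketch below is one hedged reading of that interface in Python for the Gaussian-kernel, supervised (HSIC-style) case; the exact form of the Φ matrix, the eigenvalue ordering, the initialization, and the convergence test are assumptions rather than the paper's exact specification, and the released code at https://github.com/chieh-neu/ISM_supervised_DR is the authoritative reference.

    # Hedged sketch of an ISM-style fixed-point loop, NOT the authors' implementation.
    # Assumptions: Gaussian kernel on the projected data, an objective of the form
    # max_W Tr(Gamma K_W), the Phi construction below, and ascending eigenvalue order.
    import numpy as np
    from sklearn.metrics.pairwise import euclidean_distances

    def ism_sketch(X, Gamma, q, sigma, delta=0.01, max_iter=100):
        n, _ = X.shape
        H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
        Gamma = H @ Gamma @ H                            # centered objective kernel (assumption)

        # Initialization (assumption): Phi built as if the data kernel were all-ones.
        Phi = X.T @ (np.diag(Gamma.sum(axis=1)) - Gamma) @ X
        vals, vecs = np.linalg.eigh(Phi)
        W, prev = vecs[:, :q], vals[:q]

        for _ in range(max_iter):
            D2 = euclidean_distances(X @ W, squared=True)     # pairwise squared distances in the subspace
            K = np.exp(-D2 / (2.0 * sigma ** 2))              # Gaussian kernel on projected data
            A = Gamma * K                                     # element-wise product
            Phi = X.T @ (np.diag(A.sum(axis=1)) - A) @ X      # assumed Gaussian-kernel Phi
            vals, vecs = np.linalg.eigh(Phi)
            W = vecs[:, :q]                                   # q eigenvectors; ordering is an assumption
            if np.max(np.abs(vals[:q] - prev)) < delta:       # convergence threshold delta = 0.01
                return W
            prev = vals[:q]
        return W

For the supervised case, one common (again assumed) choice is to build Gamma from one-hot labels Y as Gamma = Y @ Y.T before centering.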
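
The Dataset Splits row states that an SVM is run on the projected data XW with 10-fold cross validation. A minimal scikit-learn sketch of that protocol follows; the SVM kernel, its hyperparameters, and the use of stratified folds are assumptions, since the table only records the protocol itself.

    # Hedged sketch: 10-fold cross-validated SVM accuracy on the projected data XW.
    # SVC settings and stratification are assumptions, not the paper's reported configuration.
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    def evaluate_projection(X, y, W):
        XW = X @ W                                            # project onto the learned subspace
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(SVC(kernel='rbf'), XW, y, cv=cv)
        return scores.mean(), scores.std()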
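
The Experiment Setup row fixes σ by the median heuristic over pairwise Euclidean distances. One common way to compute it is sketched below; whether the authors include zero self-distances or duplicate points is not stated, so counting each pair once is an assumption.

    # Hedged sketch of the median heuristic for the Gaussian-kernel bandwidth sigma.
    import numpy as np
    from sklearn.metrics.pairwise import euclidean_distances

    def median_sigma(X):
        D = euclidean_distances(X)                 # full n x n Euclidean distance matrix
        iu = np.triu_indices_from(D, k=1)          # strictly upper triangle: each pair once, no zeros
        return np.median(D[iu])

The resulting sigma = median_sigma(X) would then be used wherever the Gaussian kernel exp(-||x_i - x_j||^2 / (2 sigma^2)) is formed.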