Navigating the Effect of Parametrization for Dimensionality Reduction

Authors: Haiyang Huang, Yingfan Wang, Cynthia Rudin

NeurIPS 2024

Reproducibility checklist: Variable | Result | LLM Response
Research Type | Experimental | "5 Experiments: Here, we evaluate the performance of our ParamPaCMAP and ParamRepulsor algorithms empirically. To contextualize our findings, we juxtapose our results against those obtained from other contemporary parametric DR algorithms. Visualizations of the embeddings generated by all algorithms can be found in App. C."
Researcher Affiliation | Academia | "Haiyang Huang, Yingfan Wang, Cynthia Rudin; Duke University; {hyhuang, yw416, cynthia}@cs.duke.edu"
Pseudocode | Yes | "Pseudocode for ParamRepulsor is found in Alg. 1 and detailed in Alg. 2 in App. F."
Open Source Code | Yes | "Our code is available at https://github.com/hyhuang00/ParamRepulsor."
Open Datasets | Yes | "For image analysis, we analyzed the MNIST [15] and Fashion-MNIST (F-MNIST) [32] datasets, along with COIL-20 [33] and COIL-100 [34]."
Dataset Splits | Yes | "We perform leave-one-out cross-validation, and utilize a k-NN classifier to predict the label of each point."
Hardware Specification | Yes | "All experiments are conducted on an Exxact TensorEX 2U Server with 2 Intel Xeon Ice Lake Gold 5317 Processors @ 3.0GHz. We limit the RAM usage to 32GB. Parallel computation is performed on a single Nvidia RTX A5000 GPU."
Software Dependencies | Yes | "ParamRepulsor and ParamPaCMAP are implemented with PyTorch 2.0.0, Numba 0.57.0 and CUDA 11.7."
Experiment Setup | Yes | "Unless otherwise specified, we utilize a network of three hidden layers with [100, 100, 100] neurons. ParamRepulsor utilizes SiLU as the activation function, whereas ParamPaCMAP utilizes ReLU, as do the other methods."
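The experiment-setup row above describes the embedding network: three hidden layers of 100 neurons each, with SiLU activation for ParamRepulsor. The paper's implementation uses PyTorch; the following is only a minimal NumPy sketch of that architecture's forward pass, with hypothetical names (init_mlp, forward) and an assumed input dimension of 50 and 2-D output, none of which are taken from the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(x):
    # SiLU (swish) activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def init_mlp(dims):
    """Random He-style weights for a fully connected net with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Apply SiLU after every layer except the final (embedding) layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = silu(x)
    return x

# Assumed 50-D input -> three hidden layers of 100 neurons -> 2-D embedding.
params = init_mlp([50, 100, 100, 100, 2])
emb = forward(params, rng.standard_normal((8, 50)))
print(emb.shape)  # → (8, 2)
```

Leaving the final layer linear is the usual choice for a DR embedding head, since the output coordinates should not be range-restricted.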
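The dataset-splits row above describes the evaluation protocol: leave-one-out cross-validation with a k-NN classifier predicting each held-out point's label. A pure-Python sketch of that protocol, on a toy 2-D dataset (function and variable names are ours, not the paper's):

```python
from collections import Counter

def knn_loo_accuracy(points, labels, k=3):
    """Leave-one-out k-NN accuracy: each point is classified by majority
    vote among its k nearest neighbors, with the point itself excluded."""
    correct = 0
    for i, p in enumerate(points):
        # Squared Euclidean distance to every *other* point.
        dists = [
            (sum((a - b) ** 2 for a, b in zip(p, q)), labels[j])
            for j, q in enumerate(points) if j != i
        ]
        dists.sort(key=lambda t: t[0])
        vote = Counter(lab for _, lab in dists[:k]).most_common(1)[0][0]
        correct += (vote == labels[i])
    return correct / len(points)

# Two well-separated toy clusters: every held-out point is classified correctly.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
labs = [0, 0, 0, 1, 1, 1]
print(knn_loo_accuracy(pts, labs, k=2))  # → 1.0
```

In the paper's setting the inputs would be the 2-D embedding coordinates produced by each DR method, so the score measures how well the embedding preserves class structure.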