Differential Privacy Over Riemannian Manifolds

Authors: Matthew Reimherr, Karthik Bharath, Carlos Soto

NeurIPS 2021

Reproducibility assessment (each entry lists the variable, the extracted result, and the supporting LLM response):

Research Type: Experimental
LLM Response: In this section we numerically explore two examples that are common in statistics. In the first and second panels of Figure 1 we show simulation results that illustrate Theorems 2 and 3 and compare the utility with that of the Euclidean counterpart. Simulations are done in Matlab on a desktop computer with an Intel Xeon processor at 3.60GHz with 31.9 GB of RAM running Windows 10.

Researcher Affiliation: Academia
LLM Response: Matthew Reimherr, Department of Statistics, Pennsylvania State University, University Park, PA, mreimherr@psu.edu; Karthik Bharath, School of Mathematical Sciences, University of Nottingham, Nottingham, UK, Karthik.Bharath@nottingham.ac.uk; Carlos Soto, Department of Statistics, Pennsylvania State University, University Park, PA, cjs7363@psu.edu

Pseudocode: No
LLM Response: The paper describes algorithms but does not provide pseudocode or a clearly labeled algorithm block.

Open Source Code: Yes
LLM Response: All code and instructions are provided as a zipped folder.

Open Datasets: No
LLM Response: The paper describes generating samples from specific distributions (e.g., the Wishart distribution) but does not provide access information (link, citation, or repository) for a publicly available or open dataset used in the experiments. (A hedged sketch of such data generation follows this table.)

Dataset Splits: No
LLM Response: The paper performs simulations and discusses sample sizes but does not specify training, validation, or test dataset splits or cross-validation methods.

Hardware Specification: Yes
LLM Response: Simulations are done in Matlab on a desktop computer with an Intel Xeon processor at 3.60GHz with 31.9 GB of RAM running Windows 10.

Software Dependencies: No
LLM Response: The paper mentions 'Matlab' but does not specify a version number or other software dependencies with versions.

Experiment Setup: No
LLM Response: The paper describes aspects of the numerical examples, such as data-generation parameters and the type of algorithm used (e.g., gradient descent), but does not provide specific hyperparameter values or detailed system-level training settings for these algorithms. (The sketch below illustrates one plausible such setup.)
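
To make the two technical mentions above concrete (Wishart data generation and a gradient-descent Fréchet mean), here is a minimal Matlab sketch. It is not the authors' released code: the dimension, degrees of freedom, sample size, and iteration budget are all assumed values, and the update shown is the standard Karcher-mean fixed-point iteration under the affine-invariant metric on symmetric positive definite (SPD) matrices, which may differ from the specific scheme in the paper's zipped code.

    % Minimal sketch: Wishart samples + Frechet (Karcher) mean on the SPD manifold.
    % All parameter values below are illustrative assumptions, not from the paper.
    n  = 50;               % sample size (assumed)
    d  = 3;                % matrix dimension (assumed)
    df = 10;               % Wishart degrees of freedom (assumed)
    Sigma = eye(d) / df;   % scale matrix, so E[X] = eye(d)

    X = zeros(d, d, n);
    for i = 1:n
        X(:,:,i) = wishrnd(Sigma, df);   % Statistics and Machine Learning Toolbox
    end

    M = mean(X, 3);                      % initialize at the Euclidean mean
    for t = 1:100                        % fixed iteration budget (assumed)
        S    = sqrtm(M);
        Sinv = inv(S);
        G    = zeros(d);
        for i = 1:n
            % Riemannian log map of X_i at M under the affine-invariant metric,
            % pulled back to the tangent space at the identity; real() guards
            % against tiny imaginary parts from numerical error in logm.
            G = G + real(logm(Sinv * X(:,:,i) * Sinv));
        end
        M = S * expm(G / n) * S;         % exponential-map step with unit step size
    end
    disp(M)                              % non-private Frechet mean estimate

The paper's privacy mechanism (not reproduced here) would then perturb such a manifold-valued summary with suitably calibrated noise; that step depends on details of the released code and is deliberately omitted from this sketch.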