Data Minimization at Inference Time

Authors: Cuong Tran, Ferdinando Fioretto

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations across various learning tasks indicate that individuals may be able to report as little as 10% of their information while maintaining the same accuracy level as a model using the complete set of user information. (Section 7: Experiments)
Researcher Affiliation | Academia | Cuong Tran, Department of Computer Science, University of Virginia (kxb7sd@virginia.edu); Ferdinando Fioretto, Department of Computer Science, University of Virginia (fioretto@virginia.edu)
Pseudocode | Yes | Algorithm 1: MinDRel for linear classifiers; Algorithm 2: MinDRel for non-linear classifiers
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the methodology.
Open Datasets | Yes | The experiments are conducted on six standard UCI datasets [9]. [9] C. Blake and C. Merz. UCI Repository of Machine Learning Databases, 1998.
Dataset Splits | Yes | For each dataset, 70% of the data is used for training the classifiers, while the remaining 30% is used for testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments.
Software Dependencies | No | The paper mentions 'bayesian-torch [17]' and programming constructs such as 'ReLU activation functions' and 'stochastic gradient descent' but does not specify version numbers for general software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | The nonlinear classifiers used in this study consist of a neural network with two hidden layers, using the ReLU activation function. The number of nodes in each hidden layer is set to 10. The network is trained using stochastic gradient descent (SGD) with a batch size of 32 and a learning rate of 0.001 for 300 epochs. The base regressor is a neural network with one hidden layer that has 10 hidden nodes and a ReLU activation function. We train the network for 300 epochs with a learning rate of 0.001.
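
To make the reported training configuration concrete, the following is a minimal Python/PyTorch sketch of the described non-linear classifier setup: two hidden layers of 10 ReLU nodes, SGD with batch size 32 and learning rate 0.001 for 300 epochs, and the 70/30 train/test split. The function name, argument names, and data-loading conventions are illustrative assumptions, not the authors' implementation.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_nonlinear_classifier(X, y, n_classes, epochs=300, lr=0.001, batch_size=32):
    # 70% of each dataset for training, the remaining 30% held out for testing.
    dataset = TensorDataset(X, y)
    n_train = int(0.7 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    # Two hidden layers with 10 nodes each and ReLU activations.
    model = nn.Sequential(
        nn.Linear(X.shape[1], 10), nn.ReLU(),
        nn.Linear(10, 10), nn.ReLU(),
        nn.Linear(10, n_classes),
    )

    # Stochastic gradient descent with the reported learning rate.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()

    return model, test_set

# Example usage (X: float feature tensor, y: long label tensor):
# model, test_set = train_nonlinear_classifier(X, y, n_classes=2)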