Probably Approximately Correct Constrained Learning

Authors: Luiz Chamon, Alejandro Ribeiro

NeurIPS 2020

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "We analyze the generalization properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification." [...] "Due to space constraints, we only provide highlights of the results obtained for the problems from Section 3. For more details and additional experiments, see Appendix D in the extended version [56]."

Researcher Affiliation | Academia | Luiz F. O. Chamon, Dept. of Electrical and Systems Engineering, University of Pennsylvania, Pennsylvania, USA (luizf@seas.upenn.edu); Alejandro Ribeiro, Dept. of Electrical and Systems Engineering, University of Pennsylvania, Pennsylvania, USA (aribeiro@seas.upenn.edu)

Pseudocode | Yes | "Algorithm 1 Primal-dual near-PACC learner"

Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology and no link to a code repository.

Open Datasets | Yes | "In the Adult dataset [70], our goal is to predict whether an individual makes more than US$ 50,000.00 while being insensitive to gender." "In this illustration, we use Algorithm 1 to train a ResNet18 [73] to classify images from the FMNIST dataset [74]."

Dataset Splits | No | The paper mentions a validation set ("The best accuracy over the validation set is achieved after 67 epochs, yielding a solution with test accuracy of 93.5% (Figure 2a).") but does not specify split sizes or how the splits were constructed.

Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., CPU/GPU models, memory).

Software Dependencies | No | The paper names the optimizer used ("For step 3 of Algorithm 1, we use ADAM [71] with batch size 128 and learning rate 0.1.") but lists no software packages or version numbers.

Experiment Setup | Yes | "For step 3 of Algorithm 1, we use ADAM [71] with batch size 128 and learning rate 0.1. All other parameters were kept as in the original paper. After each epoch, we update the dual variables (step 4), also using ADAM with a step size of 0.01. All classifiers were trained over 300 epochs."
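The setup quoted above alternates a primal step (minimizing the Lagrangian over model parameters, step 3) with a dual step (projected ascent on the constraint multipliers after each epoch, step 4). A minimal toy sketch of that primal-dual pattern is below; it is not the paper's Algorithm 1 (which uses ADAM and neural networks) — the convex toy problem, step sizes, and plain gradient updates are illustrative assumptions.

```python
def primal_dual(steps=5000, eta_primal=0.01, eta_dual=0.01):
    """Toy primal-dual saddle-point iteration (illustrative, not the
    paper's setup): minimize (x - 2)^2 subject to x <= 1, via the
    Lagrangian L(x, lam) = (x - 2)^2 + lam * (x - 1)."""
    x, lam = 0.0, 0.0
    for _ in range(steps):
        # Primal step: gradient descent on L in x (paper uses ADAM here).
        grad_x = 2.0 * (x - 2.0) + lam
        x -= eta_primal * grad_x
        # Dual step: gradient ascent on the constraint slack,
        # projected onto lam >= 0.
        lam = max(0.0, lam + eta_dual * (x - 1.0))
    return x, lam

x_star, lam_star = primal_dual()
# Should approach the saddle point x = 1, lam = 2 of this toy problem.
print(x_star, lam_star)
```

The dual variable acts as an adaptive penalty: it grows while the constraint is violated (x > 1) and shrinks toward zero once the constraint is slack, which is the mechanism the primal-dual learner relies on.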