Trustworthy Actionable Perturbations
Authors: Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare TAP, counterfactuals, and adversarial attacks on four data sets from different fields; data set details are found in Figure 3 and Appendix A.3.1. |
| Researcher Affiliation | Academia | 1Program in Applied Mathematics, University of Arizona, Tucson, AZ, USA 2Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA. |
| Pseudocode | Yes | Algorithm 1 Generating TAP |
| Open Source Code | Yes | The associated code can be found at https://github.com/JesseFriedbaum/TAP_code. |
| Open Datasets | Yes | Adult Income (Kohavi & Becker, 1996): This data set contains demographic information on Americans labelled by whether they had a high income. Law School Success (Wightman, 1998): This data set contains information on law school students labelled by whether they passed the bar exam. Diabetes Prediction (Centers for Disease Control & Prevention, CDC): The individuals in this data set are labelled by whether they have diabetes. German Credit (Hofmann, 1994): This data set contains loan applications. |
| Dataset Splits | Yes | We used an 80/10/10 train-validate-test data split and implemented early stopping with the validation data. |
| Hardware Specification | No | The paper does not specify hardware details such as CPU, GPU models, or memory specifications used for experiments. |
| Software Dependencies | No | The paper mentions software like ‘neural networks’, ‘ADAM algorithm’, ‘gradient boosted tree algorithms’, and ‘random forests and histogram boosted trees’ but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Each network used 3 hidden layers with ReLU activation functions between each layer. We tuned the parameters of the neural networks until they provided accuracy on par with gradient boosted tree models on the same data set. Additionally, for the German Credit data set only, we used dropout regularization of 20% on each hidden layer. We trained these models using the ADAM optimizer to minimize cross entropy loss. A hedged sketch of this setup appears below the table. |
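
The quoted split and training details are enough to reconstruct the baseline classifiers in outline. The sketch below is an assumption-laden reconstruction, not the authors' released code (see the TAP_code repository for that): the hidden-layer width, batch sizes, learning rate, epoch budget, and early-stopping patience are all hypothetical, since the paper does not state them. Only the 80/10/10 split, the 3 ReLU hidden layers, the optional 20% dropout for German Credit, the Adam optimizer, cross-entropy loss, and validation-based early stopping come from the quotes above.

```python
# Hedged reconstruction of the quoted setup in PyTorch. Hidden width, batch
# sizes, epoch budget, and patience are assumptions; the 80/10/10 split,
# 3 ReLU hidden layers, optional 20% dropout, Adam, cross-entropy loss, and
# validation early stopping are taken from the paper's description.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split


def split_80_10_10(dataset):
    """80/10/10 train-validate-test split, as stated in the paper."""
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return random_split(dataset, [n_train, n_val, n - n_train - n_val])


def make_classifier(n_features, n_classes, hidden=64, dropout=0.0):
    """3 ReLU hidden layers; dropout=0.2 only for the German Credit model."""
    layers, width = [], n_features
    for _ in range(3):
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        if dropout > 0.0:
            layers.append(nn.Dropout(dropout))
        width = hidden
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)


def train(model, train_ds, val_ds, epochs=200, patience=10):
    """Adam + cross-entropy training with early stopping on validation loss."""
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    train_dl = DataLoader(train_ds, batch_size=128, shuffle=True)
    val_dl = DataLoader(val_ds, batch_size=512)
    best, stale = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_dl)
        if val_loss < best:
            best, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stopping triggered by the validation set
    return model
```

As the paper notes, the free parameters here (width, learning rate, and so on) would be tuned per data set until accuracy is on par with the gradient boosted tree baselines.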