Learning Deep Attribution Priors Based On Prior Knowledge

Authors: Ethan Weinberger, Joseph Janizek, Su-In Lee

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our method to a synthetic dataset and two real-world biological datasets. Empirically we find that models trained using the DAPr framework achieve better performance on tasks where training data is limited.
Researcher Affiliation | Academia | Ethan Weinberger, Paul G. Allen School of Computer Science, University of Washington, Seattle, WA 98195, ewein@cs.washington.edu
Pseudocode | No | The paper does not contain explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | The data for this experiment comes from multiple datasets available on the Accelerating Medicines Partnership Alzheimer's Disease Project (AMP-AD) portal; in particular we make use of the Adult Changes in Thought (ACT) [24], Mount Sinai Brain Bank (MSBB), and Religious Orders Study/Memory and Aging Project (ROSMAP) [1] datasets. [Footnote: https://adknowledgeportal.synapse.org/] Our data comes from the Beat AML dataset... [39].
Dataset Splits | Yes | For each dataset we use 20% of the dataset for model training, and divide the remaining points evenly into testing and validation sets, giving a final 20%-40%-40% train-test-validation split. (Section 4.1; see the split sketch below this table)
Hardware Specification | No | The paper mentions training models and using attribution methods but does not specify any particular GPU model, CPU, or other hardware details used for the experiments.
Software Dependencies | No | The paper mentions using 'Adam' for optimization and the 'Enrichr' library, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | We optimize our models using Adam [14] with early stopping, and all hyperparameters are chosen based on performance on our validation sets (see Supplement for additional details). ... For a given number of features p the first hidden layer has p/2 units, and the second has p/4 units. ... All prediction model MLPs have two hidden layers with 512 and 256 units respectively, and for our DAPr models we use MLPs with one hidden layer containing four units. (See the architecture sketch below this table.)
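
A minimal sketch of the 20%-40%-40% train-test-validation split quoted in the Dataset Splits row. Only the percentages come from the paper; the use of scikit-learn's train_test_split, the random seeds, and the placeholder data shapes are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix X and targets y standing in for one of the
# paper's datasets (shapes are placeholders, not taken from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))
y = rng.normal(size=1000)

# 20% of the data is used for model training.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.2, random_state=0
)

# The remaining points are divided evenly into testing and validation sets,
# giving the final 20%-40%-40% train-test-validation split.
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0
)

print(len(X_train), len(X_test), len(X_val))  # 200 400 400
```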
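A hedged PyTorch sketch of the model sizes quoted in the Experiment Setup row: a prediction MLP with 512- and 256-unit hidden layers, a p/2 and p/4 variant for a given number of features p, a DAPr prior model with a single four-unit hidden layer, and Adam as the optimizer. The layer widths and the choice of Adam come from the quoted text; the activations, output dimensions, learning rate, and input dimensions are assumptions.

```python
import torch
import torch.nn as nn

def prediction_mlp(p: int) -> nn.Sequential:
    """Prediction model with two hidden layers of 512 and 256 units (per the quote);
    ReLU activations and a scalar output are assumptions."""
    return nn.Sequential(
        nn.Linear(p, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

def halved_mlp(p: int) -> nn.Sequential:
    """Variant described in the quote: first hidden layer with p/2 units,
    second with p/4 units (output head is an assumption)."""
    return nn.Sequential(
        nn.Linear(p, p // 2), nn.ReLU(),
        nn.Linear(p // 2, p // 4), nn.ReLU(),
        nn.Linear(p // 4, 1),
    )

def dapr_prior_mlp(meta_dim: int) -> nn.Sequential:
    """DAPr prior model: an MLP with one hidden layer containing four units (per the quote),
    mapping per-feature meta-information to a scalar prior attribution."""
    return nn.Sequential(
        nn.Linear(meta_dim, 4), nn.ReLU(),
        nn.Linear(4, 1),
    )

p = 200        # number of input features (placeholder)
meta_dim = 10  # dimension of per-feature meta-information (placeholder)
model = prediction_mlp(p)
prior = dapr_prior_mlp(meta_dim)

# The paper optimizes with Adam and early stopping; the learning rate here is a placeholder.
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(prior.parameters()), lr=1e-3
)
```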