Finding Interpretable Class-Specific Patterns through Efficient Neural Search
Authors: Nils Philipp Walter, Jonas Fischer, Jilles Vreeken
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show on synthetic and real-world data, including three biological applications, that, unlike its competitors, DIFFNAPS consistently yields accurate, succinct, and interpretable class descriptions. (Sec. 4, Experiments:) We compare DIFFNAPS to five state-of-the-art methods on synthetic and real-world data. |
| Researcher Affiliation | Academia | CISPA Helmholtz Center for Information Security; Harvard T.H. Chan School of Public Health, Department of Biostatistics |
| Pseudocode | No | The paper describes the architecture and processes in prose but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We make the code publicly available at https://eda.rg.cispa.io/prj/diffnaps/ |
| Open Datasets | Yes | We consider phenotypical Cardio data (Ulianova 2017), a Disease diagnosis (Patil and Rathod 2020) dataset, two high-dimensional binarized gene expression datasets for breast cancer, BRCA-N and BRCA-S, that we derived from The Cancer Genome Atlas (TCGA) (see App. A.5), and a human genetic variation data set (The 1000 Genomes Project Consortium 2015; Fischer and Vreeken 2020). |
| Dataset Splits | No | The paper mentions tuning hyperparameters based on "accuracy on a hold-out set" but does not provide specific details on the sizes or percentages of train/validation/test splits for any dataset. |
| Hardware Specification | No | The paper states only that "the experiments for the neural approaches, i.e. DIFFNAPS and RLL, are executed on GPUs"; no specific GPU models, CPU details, or other hardware specifications are provided. |
| Software Dependencies | No | The paper does not mention any specific software dependencies or libraries with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper mentions tuning the hyperparameters λc, κ, and λ, and states "In practice, we increase λc until the classification error saturates." and "We fit the hyperparameters of DIFFNAPS based on our loss function." However, it does not state the specific numerical values used for these, or for other training parameters such as learning rate, batch size, or number of epochs (a sketch of the stated λc tuning heuristic follows the table). |
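
For illustration, here is a minimal sketch of the λc tuning heuristic the paper describes ("increase λc until the classification error saturates"). The function `train_and_evaluate`, the geometric step schedule, and all default values are assumptions for this sketch; none of them come from the paper or its released code.

```python
# Minimal sketch of the lambda_c tuning heuristic described in the paper:
# increase lambda_c until the classification error on a hold-out set saturates.

def train_and_evaluate(lambda_c: float) -> float:
    """Placeholder: train DiffNaps with the given lambda_c and return the
    hold-out classification error. Stand-in only; replace with a real run."""
    raise NotImplementedError

def tune_lambda_c(start: float = 1.0, factor: float = 2.0,
                  tol: float = 1e-3, max_steps: int = 10) -> float:
    """Geometrically increase lambda_c until the hold-out error stops
    improving by more than `tol`, i.e., until it saturates."""
    lambda_c = start
    prev_err = train_and_evaluate(lambda_c)
    for _ in range(max_steps):
        candidate = lambda_c * factor
        err = train_and_evaluate(candidate)
        if prev_err - err < tol:  # error has saturated; keep current value
            break
        lambda_c, prev_err = candidate, err
    return lambda_c
```

A geometric schedule is one reasonable reading of "increase until saturation"; the paper does not specify the step size or the stopping tolerance.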