Black Box FDR
Authors: Wesley Tansey, Yixin Wang, David Blei, Raul Rabadan
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In benchmarks, BB-FDR outperforms competing state-of-the-art methods in both stages of analysis. We apply BB-FDR to two real studies on cancer drug efficacy. |
| Researcher Affiliation | Academia | 1Data Science Institute, Columbia University, New York, NY, USA 2Department of Systems Biology, Columbia University Medical Center, New York, NY, USA 3Department of Statistics, Columbia University, New York, NY, USA 4Department of Computer Science, Columbia University, New York, NY, USA. |
| Pseudocode | No | The paper includes a graphical model (Figure 2) but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code publicly available, nor does it include a link to a code repository. |
| Open Datasets | Yes | As a case study of how BB-FDR is useful in practice, we apply it to two high-throughput cancer drug screening studies (Lapatinib and Nutlin-3) from the Genomics of Drug Sensitivity in Cancer (GDSC) (Yang et al., 2012). |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits with percentages, absolute sample counts, or explicit references to predefined splits for reproducibility. It mentions varying sample sizes and using '3 folds to create 3 separate models', but gives no explicit split proportions or counts. |
| Hardware Specification | No | The paper mentions that BB-FDR 'runs easily on a laptop' but does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions 'RMS-prop (Tieleman & Hinton, 2012)' and 'gradient boosting trees (Chen & Guestrin, 2016)' but does not provide specific version numbers for these or any other software dependencies, libraries, or frameworks used for replication. |
| Experiment Setup | Yes | For BB-FDR, we use a 50-200-200-2 network with ReLU activation; for training we use RMS-prop (Tieleman & Hinton, 2012) with dropout, learning rate 3×10⁻⁴, and batch size 100, and run for 50 epochs, with 3 folds to create 3 separate models as in NeuralFDR; we set the λ regularization term to 10⁻⁴. |
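
To make the quoted configuration concrete, below is a minimal PyTorch sketch of a network and optimizer matching the described 50-200-200-2 setup. This is an assumption-laden illustration, not the authors' code: the paper does not state its framework, the class name `BBFDRNet` and the dropout rate are placeholders, and using `weight_decay` for the λ = 10⁻⁴ regularization term is a guess at how that penalty might be applied.

```python
import torch
import torch.nn as nn

# Sketch of the 50-200-200-2 network with ReLU activation described in the
# paper's experiment setup. The dropout rate is not stated in the quoted
# text; 0.5 is a placeholder assumption.
class BBFDRNet(nn.Module):
    def __init__(self, in_dim=50, hidden=200, out_dim=2, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = BBFDRNet()

# RMS-prop with learning rate 3e-4, as in the quoted setup. weight_decay
# stands in for the lambda = 1e-4 regularization term (an assumption; the
# paper may apply that penalty differently).
optimizer = torch.optim.RMSprop(model.parameters(), lr=3e-4, weight_decay=1e-4)

# Dummy forward pass illustrating the stated batch size (100) and input
# dimension (50).
x = torch.randn(100, 50)
logits = model(x)
```

In a full replication, one would train three such models on three folds (as the paper says it does, following NeuralFDR) for 50 epochs each; those loops are omitted here since the paper's quoted text does not specify further details.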