Identifying Best Interventions through Online Importance Sampling
Authors: Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, Sanjay Shakkottai
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically show that our algorithms outperform the state of the art in the Flow Cytometry data-set, and also apply our algorithm for model interpretation of the Inception-v3 deep net that classifies images. ... Extensive Empirical Validation: We demonstrate that our algorithm outperforms the prior works (Lattimore et al., 2016; Audibert & Bubeck, 2010) on the Flow Cytometry data-set (Sachs et al., 2005) (in Section 4.1). We exhibit an innovative application of our algorithm for model interpretation of the Inception Deep Network (Szegedy et al., 2015) for image classification (refer to Section 4.2). |
| Researcher Affiliation | Collaboration | Rajat Sen*1, Karthikeyan Shanmugam*2, Alexandros G. Dimakis1, Sanjay Shakkottai1 (*Equal contribution; 1The University of Texas at Austin; 2IBM Thomas J. Watson Research Center). Correspondence to: Rajat Sen <rajat.sen@utexas.edu>. |
| Pseudocode | Yes (see the sketch after this table) | Algorithm 1 Successive Rejects with Importance Sampling - v1 (SRISv1)... Algorithm 2 Successive Rejects with Importance Sampling - v2 (SRISv2)... Algorithm 3 Allocate: Allocates a given budget among the arms to reduce variance. |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available, nor does it provide links to a code repository for the methodology described. |
| Open Datasets | Yes | We empirically show that our algorithms outperform the state of the art in the Flow Cytometry data-set... (Sachs et al., 2005)... for model interpretation of the Inception-v3 deep net (Szegedy et al., 2015) that classifies images. |
| Dataset Splits | No | The paper does not provide specific percentages or sample counts for training, validation, or test splits. It mentions using a pre-trained Inception-v3 network and conducting experiments with a total sample budget, but no explicit dataset partitioning details for reproducibility. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using a "GLM gamma model (Hardin et al., 2007)" but does not specify any software names with version numbers for reproducibility (e.g., Python, specific libraries, or frameworks with their versions). |
| Experiment Setup | Yes | The experiments are performed in the budget setting S1, where all arms except arm 0 are deemed to be difficult. We plot our results as a function of the total samples T, while the fractional budget of the difficult arms (B) is set to 1/T. ... The total sample budget (T) is 2500. |
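
The Pseudocode row above quotes three algorithms (SRISv1, SRISv2, and Allocate) that build on the successive-rejects scheme of Audibert & Bubeck (2010), replacing plain empirical means with importance-sampled estimates that reuse observational data. Since the paper releases no code, the snippet below is only a minimal sketch of the underlying successive-rejects skeleton those algorithms extend; the function names, the Bernoulli reward model, and the toy arm means are our own assumptions, and the importance-sampling reweighting and the Allocate variance-reduction step of SRISv1/v2 are not reproduced here.

```python
import math
import random


def successive_rejects(pull, n_arms, budget):
    """Plain successive-rejects best-arm identification (Audibert & Bubeck, 2010).

    pull(i) must return one stochastic reward for arm i.
    Returns the index of the single arm surviving all K-1 rejection phases.
    """
    # log-bar(K) = 1/2 + sum_{i=2}^{K} 1/i controls the per-phase budgets.
    log_bar = 0.5 + sum(1.0 / i for i in range(2, n_arms + 1))

    active = list(range(n_arms))
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    prev_n = 0

    for k in range(1, n_arms):
        # n_k: total pulls per surviving arm by the end of phase k.
        n_k = math.ceil((budget - n_arms) / (log_bar * (n_arms + 1 - k)))
        for arm in active:
            for _ in range(n_k - prev_n):
                sums[arm] += pull(arm)
                counts[arm] += 1
        prev_n = n_k
        # Reject the surviving arm with the lowest empirical mean.
        worst = min(active, key=lambda a: sums[a] / max(counts[a], 1))
        active.remove(worst)

    return active[0]


if __name__ == "__main__":
    # Toy run reusing the total sample budget T = 2500 quoted in the
    # Experiment Setup row; the arm means are illustrative only.
    means = [0.5, 0.45, 0.45, 0.4, 0.6]
    best = successive_rejects(
        pull=lambda i: 1.0 if random.random() < means[i] else 0.0,
        n_arms=len(means),
        budget=2500,
    )
    print("recommended arm:", best)
```

In SRISv1/v2 the per-arm empirical mean above would be replaced by an importance-sampled estimator that combines interventional pulls with reweighted observational samples, which is what allows the paper's algorithms to spend less of the budget on the "difficult" arms described in the Experiment Setup row.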