Learning Prescriptive ReLU Networks
Authors: Wei Sun, Asterios Tsiourvas
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with the proposed method on both synthetic and real-world datasets. P-ReLUs exhibit superior prescriptive accuracy over competing benchmarks. |
| Researcher Affiliation | Collaboration | IBM Research, Yorktown Heights, NY, USA; Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA. |
| Pseudocode | No | The paper describes its methods through text and mathematical formulations but does not include structured pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper refers to implementations of benchmark models (e.g., 'the implementation from Battocchi et al., 2019', 'the python implementation from Rahmattalabi, 2020', 'official implementation from Interpretable AI (2023)', 'scikit-learn (Pedregosa et al., 2011)'), but does not explicitly state that the source code for their proposed P-ReLU method is open source or provide a link to it. |
| Open Datasets | No | For the simulated data, the paper states 'We generate six datasets...' without providing public access information. For the Warfarin dataset, it mentions 'The dataset was collected and curated by Pharmacogenetics and Pharmacogenomics Knowledge Base and the International Warfarin Pharmacogenetics Consortium' but does not provide a specific link, DOI, or repository for direct access to the processed dataset used in the experiments. |
| Dataset Splits | Yes | For each dataset, we create 10,000 training samples and 5,000 testing samples. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions software like 'Adam optimizer' and 'scikit-learn' by name and references 'PyTorch' for their implementation, but it does not specify version numbers for these software components or libraries, which are necessary for reproducibility. |
| Experiment Setup | Yes | For our proposed method, we consider a five-layer P-ReLU network with 100 neurons per hidden layer. We train the model using Adam optimizer (Kingma & Ba, 2014) with learning rate equal to 10^-3 for 20 epochs, batch size equal to 64, and µ = 10^-4. |
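The training configuration quoted above can be sketched in PyTorch (the framework the paper reportedly uses). This is only a minimal illustration of the stated hyperparameters: the input/output dimensions, the interpretation of "five-layer" as five linear layers, and the MSE loss on random data are all assumptions for illustration; the actual P-ReLU prescriptive objective and the regularization weight µ are not reproduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed placeholder dimensions (not reported in the paper).
input_dim, output_dim = 10, 1

# Five linear layers with 100 neurons per hidden layer and ReLU activations.
# (Interpreting "five-layer" as five weight layers is an assumption.)
dims = [input_dim] + [100] * 4 + [output_dim]
layers = []
for i in range(len(dims) - 2):
    layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
layers.append(nn.Linear(dims[-2], dims[-1]))
model = nn.Sequential(*layers)

# Adam with learning rate 10^-3, as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # stand-in loss; NOT the paper's prescriptive objective

# One batch of 64 random samples stands in for the real data loader.
X = torch.randn(64, input_dim)
y = torch.randn(64, output_dim)
for epoch in range(20):  # 20 epochs, as reported
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```

In a faithful reproduction the random batch would be replaced by the paper's 10,000-sample training set iterated in batches of 64, and the loss by the prescriptive objective with µ = 10^-4.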