Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

When to Intervene: Learning Optimal Intervention Policies for Critical Events

Authors: Niranjan Damera Venkata, Chiranjib Bhattacharyya

NeurIPS 2022 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Finally, we demonstrate RNN-based OTI policies with experiments and show that they outperform popular intervention methods." and "Experiments are performed on two real-world datasets" |
| Researcher Affiliation | Collaboration | Niranjan Damera Venkata, Digital and Transformation Organization, HP Inc., Chennai, India; Chiranjib Bhattacharyya, Dept. of CSA and RBCCPS, Indian Institute of Science, Bangalore, India |
| Pseudocode | No | The paper includes an architecture diagram (Figure 1) but does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The code is proprietary. |
| Open Datasets | Yes | "Turbofan Engine Failure Data [26, 32]: This dataset from NASA provides train and test data..." and "Azure Predictive Maintenance Guide Data [25]: This is a dataset from a guide provided by Microsoft" |
| Dataset Splits | Yes | "For both datasets, we train on 70% of randomly selected co-variate time-series sequences and hold out 30% of the sequences for testing. From the training set a further 30% of the sequences are set aside as validation data to tune model parameters and policy thresholds. This process is repeated to produce 10 random train-validation-test splits." |
| Hardware Specification | Yes | "All experiments were run on a Tensorbook laptop with 32 GB RAM and a single NVIDIA GeForce GTX 1070 Max-Q GPU." |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | "All RNNs share the same encoder architecture, which is an LSTM with a 128-step look-back and a hidden-state dimension of 16 units. WBI and TTE thresholds are tuned (individually, for 125 different settings of Cα ∈ {8, 10, …, 256}) on each validation set using an empirical intervention policy risk." |
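The dataset-split protocol quoted above (70/30 train/test over sequences, a further 30% of training held out for validation, repeated for 10 random splits) can be sketched as follows. This is our illustrative reconstruction, not the authors' code; the function name and rounding behavior are assumptions.

```python
import numpy as np

def make_splits(n_sequences, n_repeats=10, test_frac=0.3, val_frac=0.3, seed=0):
    """Sketch of the paper's split protocol: hold out 30% of sequences for
    testing, set aside 30% of the remaining training sequences as validation,
    and repeat to produce 10 random train-validation-test splits."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_repeats):
        idx = rng.permutation(n_sequences)          # shuffle sequence indices
        n_test = int(round(test_frac * n_sequences))
        test, train_full = idx[:n_test], idx[n_test:]
        n_val = int(round(val_frac * len(train_full)))
        val, train = train_full[:n_val], train_full[n_val:]
        splits.append((train, val, test))
    return splits
```

Splitting at the sequence level (rather than the time-step level) avoids leakage between partitions, since all observations from one engine/machine stay in a single partition.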
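The threshold-tuning step in the experiment setup (selecting WBI/TTE thresholds per cost setting Cα by minimizing an empirical intervention policy risk on the validation set) amounts to a grid search. Below is a generic sketch under that reading; the function name and the toy `empirical_risk` signature are our assumptions, and the paper's actual risk formula is not reproduced here.

```python
def tune_policy_threshold(thresholds, c_alphas, empirical_risk):
    """For each cost setting C_alpha, pick the intervention threshold that
    minimizes the empirical intervention policy risk on validation data.
    `empirical_risk(threshold, c_alpha)` stands in for the paper's risk."""
    return {c: min(thresholds, key=lambda t: empirical_risk(t, c))
            for c in c_alphas}
```

With 125 settings of Cα, this loop simply runs 125 independent one-dimensional searches, one per cost setting.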