Explain Temporal Black-Box Models via Functional Decomposition
Authors: Linxiao Yang, Yunze Tong, Xinyue Gu, Liang Sun
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach in a wide range of time series applications, including anomaly detection, classification, and forecasting, showing its superior performance to the state-of-the-art algorithms. |
| Researcher Affiliation | Collaboration | ¹DAMO Academy, Alibaba Group, Hangzhou, China; ²Department of Computer Science and Technology, Zhejiang University, Hangzhou, China. |
| Pseudocode | No | The paper describes the proposed method using mathematical formulations and prose but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing open-source code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | Yes | Large Kitchen Appliances dataset is a public benchmark dataset from UCR¹, which contains 375 training samples and 375 testing samples. (Footnote 1 points to https://www.cs.ucr.edu/~eamonn/time_series_data/.) |
| Dataset Splits | Yes | We randomly split the dataset into three parts, i.e., training set with 14942 samples, validation set with 3448 samples, and test set with 4598 samples. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., CPU/GPU models, memory) to run its experiments. Table 6 lists LSTM configurations but no hardware details. |
| Software Dependencies | No | The paper mentions optimizers like 'Adam' and models like 'LSTM' but does not specify software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x'). |
| Experiment Setup | Yes | The configuration of LSTMs as black-box models in each task is summarized in Table 6. Parameters include latent size (e.g., 20, 200, 120), number of layers (e.g., 3, 4), dropout (e.g., 0.4, 0.6), Adam learning rate (e.g., 0.01, 0.002, 0.001), and number of epochs (e.g., 100, 200, 300). |
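
The Dataset Splits row above reports fixed train/validation/test sizes (14942 / 3448 / 4598), but since no code is released, the exact split procedure is unknown. The snippet below is a minimal sketch of one way to reproduce a random split of those sizes with PyTorch; the seed, the placeholder tensors, and the use of `random_split` are assumptions, not the authors' procedure.

```python
# Hedged sketch: a random train/val/test split with the sizes reported in the
# paper (14942 / 3448 / 4598). The seed and the use of random_split are
# assumptions -- the paper does not specify how the split was generated.
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder dataset with the total sample count implied by the reported
# sizes (14942 + 3448 + 4598 = 22988); sequence length and feature count
# are assumed for illustration.
num_samples, seq_len, num_features = 22988, 100, 1
dataset = TensorDataset(
    torch.randn(num_samples, seq_len, num_features),
    torch.zeros(num_samples, dtype=torch.long),
)

generator = torch.Generator().manual_seed(0)  # assumed seed for reproducibility
train_set, val_set, test_set = random_split(
    dataset, lengths=[14942, 3448, 4598], generator=generator
)
print(len(train_set), len(val_set), len(test_set))  # 14942 3448 4598
```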
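
Similarly, the Experiment Setup row summarizes the LSTM hyperparameters of Table 6 without accompanying code. The sketch below shows how one such black-box LSTM could be configured in PyTorch using values drawn from the reported ranges (latent size 120, 3 layers, dropout 0.4, Adam learning rate 0.001); the input size, number of classes, and the classification head are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: an LSTM "black-box" model configured with values from the
# ranges reported in Table 6 (latent size, # layers, dropout, Adam lr).
# Input/output dimensions and the classification head are assumptions.
import torch
import torch.nn as nn

class BlackBoxLSTM(nn.Module):
    def __init__(self, input_size=1, latent_size=120, num_layers=3,
                 dropout=0.4, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=input_size,
            hidden_size=latent_size,   # "Latent size" in Table 6
            num_layers=num_layers,     # "# layers" in Table 6
            dropout=dropout,           # "Drop out" in Table 6
            batch_first=True,
        )
        self.head = nn.Linear(latent_size, num_classes)

    def forward(self, x):              # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])      # last layer's final hidden state

model = BlackBoxLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # lr from Table 6
# Training for the reported number of epochs (e.g., 100-300) would follow here.
```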