Hybrid Neural Networks for Learning the Trend in Time Series
Authors: Tao Lin, Tian Guo, Karl Aberer
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | TreNet demonstrates its effectiveness by outperforming CNN, LSTM, the cascade of CNN and LSTM, Hidden Markov Model based method and various kernel based baselines on real datasets. |
| Researcher Affiliation | Academia | Tao Lin, Tian Guo, Karl Aberer, School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland {tao.lin, tian.guo, karl.aberer}@epfl.ch |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any link or statement about open-sourcing its code. |
| Open Datasets | Yes | Power Consumption (PC). This dataset contains measurements of electric power consumption... (https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption); Gas Sensor (GasSensor). This dataset contains the recordings of chemical sensors... (https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures) |
| Dataset Splits | Yes (see the split sketch after the table) | We then do random shuffling over such data instances, where 10% of the data instances is held out as the testing dataset and the rest is used for cross-validation. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes (see the configuration sketch after the table) | In TreNet, CNN has two stacked convolutional layers, which have 32 filters of size 2 and 4. The number of memory cells in LSTM is 600. In addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range {300, 600, 900, 1200} to achieve the best performance. We use dropout and L2 regularization to control the capacity of neural networks to prevent overfitting, and set the values to 0.5 and 5×10⁻⁴ respectively for all datasets [Mao et al., 2014]. |
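The split quoted in the Dataset Splits row (random shuffle, 10% test hold-out, cross-validation on the remainder) can be mirrored with a short sketch. This is illustrative only: the array shapes, the 5-fold scheme, and the use of scikit-learn are assumptions, since the paper does not state how the split was implemented.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

# Placeholder arrays standing in for the paper's data instances:
# X holds input features, y the trend targets (e.g. duration and slope).
X = np.random.rand(1000, 64)
y = np.random.rand(1000, 2)

# Random shuffling, then 10% of the instances held out as the test set.
X_cv, X_test, y_cv, y_test = train_test_split(
    X, y, test_size=0.1, shuffle=True, random_state=0
)

# The remaining 90% is used for cross-validation; the 5-fold choice is an assumption.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_cv):
    X_train, X_val = X_cv[train_idx], X_cv[val_idx]
    y_train, y_val = y_cv[train_idx], y_cv[val_idx]
    # ... train on (X_train, y_train) and validate on (X_val, y_val)
```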
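The Experiment Setup row pins down most of the hyperparameters: two stacked convolutional layers with 32 filters of sizes 2 and 4, 600 LSTM memory cells, a feature fusion layer whose width is chosen from {300, 600, 900, 1200}, dropout of 0.5, and an L2 penalty of 5×10⁻⁴. A minimal PyTorch sketch of a hybrid network with these settings follows; the input shapes, pooling step, fusion scheme, and output head are assumptions for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class TreNetSketch(nn.Module):
    """Hedged reconstruction of the reported TreNet configuration."""

    def __init__(self, fusion_units=600, dropout=0.5):
        super().__init__()
        # CNN branch over the local raw-data window (one input channel assumed).
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # pooling scheme is an assumption
        )
        # LSTM branch over the sequence of historical trends with 600 memory cells.
        self.lstm = nn.LSTM(input_size=2, hidden_size=600, batch_first=True)
        # Feature fusion layer; 600 is one candidate from {300, 600, 900, 1200}.
        self.fusion = nn.Sequential(
            nn.Linear(32 + 600, fusion_units), nn.ReLU(), nn.Dropout(dropout),
        )
        # Output head predicting the next trend's (duration, slope) -- an assumption.
        self.out = nn.Linear(fusion_units, 2)

    def forward(self, local_window, trend_seq):
        c = self.cnn(local_window).squeeze(-1)   # (batch, 32)
        _, (h, _) = self.lstm(trend_seq)         # h: (1, batch, 600)
        fused = self.fusion(torch.cat([c, h[-1]], dim=1))
        return self.out(fused)

# The L2 regularization of 5e-4 is applied here as optimizer weight decay,
# which is one common way to realize it; the paper does not specify the mechanism.
model = TreNetSketch()
optimizer = torch.optim.Adam(model.parameters(), weight_decay=5e-4)
```

For example, with a batch of 8 local windows of length 32 and trend histories of length 10, `model(torch.randn(8, 1, 32), torch.randn(8, 10, 2))` returns a tensor of shape (8, 2).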