Deep Unsupervised Binary Coding Networks for Multivariate Time Series Retrieval

Authors: Dixian Zhu, Dongjin Song, Yuncong Chen, Cristian Lumezanu, Wei Cheng, Bo Zong, Jingchao Ni, Takehiko Mizoguchi, Tianbao Yang, Haifeng Chen (pp. 1403-1411)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Thorough empirical studies on three public datasets demonstrated that the proposed DUBCNs can outperform state-of-the-art unsupervised binary coding techniques.
Researcher Affiliation | Collaboration | Dixian Zhu, Dongjin Song, Yuncong Chen, Cristian Lumezanu, Wei Cheng, Bo Zong, Jingchao Ni, Takehiko Mizoguchi, Tianbao Yang, Haifeng Chen. University of Iowa, IA 52242, USA; NEC Laboratories America, Inc., NJ 08540, USA. {dixian-zhu, tianbao-yang}@uiowa.edu, {dsong, yuncong, lume, weicheng, bzong, tmizoguchi, jni, haifeng}@nec-labs.com
Pseudocode | No | The paper describes the architecture and objective function of DUBCNs but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code or providing a link to a code repository for its methodology.
Open Datasets | Yes | The EEG Eye State dataset (https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State) is collected with the Emotiv EEG Neuroheadset; the eye state is detected via a camera during the EEG measurement, with 1 indicating eye-closed and 0 eye-open. In our experiment, we sequentially sample 7,488 segments with window size w=5 and interval 2. The Twitter dataset (https://archive.ics.uci.edu/ml/datasets/Buzz+in+social+media) does not contain class information and was originally used for Buzz prediction. Here, we use it for multivariate time series retrieval and sequentially generate 58,323 segments with window size w=20 and interval 10.
Dataset Splits | Yes | For the segments in each dataset, the first half is used for training, the next 10% for validation, and the last 40% for testing in our empirical studies.
Hardware Specification | Yes | DUBCNs are implemented with TensorFlow and trained on a server with an Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz and 4 NVIDIA GTX 1080 Ti graphics cards.
Software Dependencies | No | The paper states that DUBCNs are 'implemented with TensorFlow' but does not provide version numbers for TensorFlow or any other software dependencies.
Experiment Setup | Yes | DUBCNs contain 6 hyper-parameters. For simplicity, we fix the mini-batch size at 128 in all experiments. The learning rate is selected from {10^-6, 10^-5, 10^-4, 10^-3}. In addition, we set the hidden feature dimension of the LSTM encoder/decoder to m = 64, 128, 256 to obtain different lengths of binary codes. The two hyper-parameters λ1 and λ2 in the objective (Eq. 21) are optimized via grid search over λ1 = {10^-3, 10^-2, 10^-1, 1} and λ2 = {10^-5, 10^-4, 10^-3, 10^-2} while the number of clusters varies over k = {2, 4, 8}.
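The segment sampling described under Open Datasets (window size w and a fixed stride between consecutive windows) can be sketched as follows. This is a minimal illustration of sliding-window segmentation, not the paper's code; the toy array stands in for a real multivariate series.

```python
import numpy as np

def segment_series(X, window, interval):
    """Slice a multivariate time series (T x n) into overlapping
    segments of length `window`, advancing `interval` steps each time."""
    starts = range(0, len(X) - window + 1, interval)
    return np.stack([X[s:s + window] for s in starts])

# Toy series standing in for the EEG data (w=5, interval=2 as reported).
X = np.arange(40).reshape(20, 2)            # 20 time steps, 2 variables
segments = segment_series(X, window=5, interval=2)
print(segments.shape)                        # (8, 5, 2)
```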
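The chronological 50/10/40 split quoted under Dataset Splits can be reproduced with simple index arithmetic; a sketch, assuming segments are kept in temporal order:

```python
import numpy as np

def chrono_split(segments, train_frac=0.5, val_frac=0.1):
    """Split segments chronologically: first 50% train, next 10%
    validation, last 40% test, as in the paper's protocol."""
    n = len(segments)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (segments[:n_train],
            segments[n_train:n_train + n_val],
            segments[n_train + n_val:])

segs = np.arange(7488)   # stand-in for the 7,488 EEG segments
tr, va, te = chrono_split(segs)
print(len(tr), len(va), len(te))   # 3744 748 2996
```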
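The grid search over the hyper-parameters listed under Experiment Setup can be sketched as below. The grids follow the reported values; `validation_map` is a hypothetical placeholder for training DUBCN and scoring it on the validation split, not an API from the paper.

```python
from itertools import product

# Hyper-parameter grids as reported in the paper.
learning_rates = [1e-6, 1e-5, 1e-4, 1e-3]
lambda1_grid   = [1e-3, 1e-2, 1e-1, 1.0]
lambda2_grid   = [1e-5, 1e-4, 1e-3, 1e-2]
clusters       = [2, 4, 8]

def validation_map(lr, l1, l2, k):
    """Hypothetical stand-in: would train DUBCN with this config and
    return validation Mean Average Precision. Dummy score here."""
    return -(lr + l1 + l2) / k

# Exhaustive grid search: keep the configuration with the best score.
best = max(product(learning_rates, lambda1_grid, lambda2_grid, clusters),
           key=lambda cfg: validation_map(*cfg))
```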