Semi-Supervised Knowledge Amalgamation for Sequence Classification
Authors: Jidapa Thadajarassiri, Thomas Hartvigsen, Xiangnan Kong, Elke A. Rundensteiner
AAAI 2021, pp. 9859–9867
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach, TC, using three challenging settings of the SKA problem on four datasets. We compare the performance of TC to eight state-of-the-art alternatives. |
| Researcher Affiliation | Academia | Jidapa Thadajarassiri, Thomas Hartvigsen, Xiangnan Kong, Elke A Rundensteiner Data Science Program and Computer Science Department, Worcester Polytechnic Institute 100 Institute Road, Worcester MA, USA 01609 {jthadajarassiri, twhartvigsen, xkong, rundenst}@wpi.edu |
| Pseudocode | No | The paper describes its proposed model and provides mathematical equations, but it does not include formal pseudocode or an algorithm block. |
| Open Source Code | Yes | All code, datasets, and experimental detail are available at https://github.com/jida-thada/SKA. |
| Open Datasets | Yes | We focus our experiments on four well-known time series classification datasets below. Synthetic Control (SYN) (Alcock, Manolopoulos et al. 1999). Melbourne Pedestrian (PED) (Carter et al. 2020). Human Activity Recognition Using Smartphones (HAR) (Anguita et al. 2013). Electric Devices (ELEC) (Lines and Bagnall 2014). |
| Dataset Splits | Yes | the original testing file is split to train and evaluate the student network (70% for training, 10% for validation, and 20% for testing). A minimal sketch of this split appears after the table. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | Our model is implemented using PyTorch and optimized using Adam (Kingma and Ba 2014). While software components are mentioned, specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | No | The paper states that the models are LSTMs optimized using Adam, but it does not provide specific hyperparameter values such as learning rate, batch size, number of epochs, or other detailed training configurations. A hedged sketch of such a setup follows the split sketch below. |
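For concreteness, here is a minimal sketch of the 70%/10%/20% train/validation/test split reported above. The shuffling, the random seed, and the function name `split_70_10_20` are illustrative assumptions; the paper does not specify how the split was randomized.

```python
import numpy as np

def split_70_10_20(X, y, seed=0):
    """Split arrays into 70% train, 10% validation, and 20% test,
    mirroring the proportions reported in the paper. Shuffling and
    seeding here are assumptions, not taken from the authors' code."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.7 * len(X))
    n_val = int(0.1 * len(X))
    # Cut the shuffled index array at the 70% and 80% marks.
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```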
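Since the paper states only that the models are LSTMs implemented in PyTorch and optimized with Adam, the following is a minimal sketch of such a setup. The hidden size, learning rate, and feature/class counts are placeholder assumptions, as none of these hyperparameters are reported.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Minimal LSTM sequence classifier of the kind the paper reports
    using; the layer sizes here are illustrative, not from the paper."""
    def __init__(self, n_features, n_classes, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (layers, batch, hidden_size)
        return self.out(h_n[-1])     # logits from the last hidden state

model = LSTMClassifier(n_features=6, n_classes=4)
# Adam as stated in the paper; the learning rate is an assumed default.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```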