Modeling Electrical Motor Dynamics Using Encoder-Decoder with Recurrent Skip Connection
Authors: Sagar Verma, Nicolas Henwood, Marc Castella, Francois Malrait, Jean-Christophe Pesquet
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that the proposed architecture can achieve a good learning performance on our high-frequency high-variance datasets. Two datasets are considered: the first one is generated using a simulator based on the physics of an induction motor and the second one is recorded from an industrial electrical motor. We benchmark our solution using variants of traditional neural networks like feedforward, convolutional, and recurrent networks. We evaluate various design choices of our architecture and compare it to the baselines. |
| Researcher Affiliation | Collaboration | 1 Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique 2 Schneider Toshiba Inverter Europe 3 Samovar, CNRS, Télécom SudParis, Institut Polytechnique de Paris |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | It seems there is no large electrical motor operations dataset available in the research community to train deep neural networks. We thus introduce two different datasets for our experiments; one dataset consists of simulations performed by using the control law proposed in (Jadot et al. 2009) and the second dataset is recorded from an industrial electrical motor. |
| Dataset Splits | Yes | In our experiments, we split the data into four parts; training and validation parts consist of 70% and 30% of the simulation data, respectively. We use 20% of the raw sensor data to fine-tune the model trained on the training set of the simulated data and the rest for testing. |
| Hardware Specification | Yes | For all our experiments we use an Ubuntu 18.04 OS with V100 GPU. |
| Software Dependencies | No | PyTorch is employed to implement the benchmark and proposed architectures. |
| Experiment Setup | Yes | To find the best architecture, we use the validation set of the simulated data. Then we fine-tune the best model on the training set of the raw data and test it on the raw data test set. We also train the best performing model using mean square loss to compare it with the proposed loss function. |
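
The data-split procedure quoted in the Dataset Splits row (70%/30% of the simulated data for training/validation, 20% of the raw sensor data for fine-tuning and the rest for testing) can be made concrete with a short sketch. This is a minimal PyTorch illustration, not the authors' code: the tensor shapes, dataset sizes, and the use of `random_split` are assumptions standing in for the simulated and raw motor recordings.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical tensors standing in for the simulated and raw sensor recordings.
sim_x, sim_y = torch.randn(10000, 4), torch.randn(10000, 2)
raw_x, raw_y = torch.randn(2000, 4), torch.randn(2000, 2)

# 70% / 30% split of the simulated data into training and validation sets.
sim_ds = TensorDataset(sim_x, sim_y)
n_train = int(0.7 * len(sim_ds))
train_ds, val_ds = random_split(sim_ds, [n_train, len(sim_ds) - n_train])

# 20% of the raw sensor data is used to fine-tune the model trained on the
# simulated training set; the remaining 80% is held out for testing.
raw_ds = TensorDataset(raw_x, raw_y)
n_finetune = int(0.2 * len(raw_ds))
finetune_ds, test_ds = random_split(raw_ds, [n_finetune, len(raw_ds) - n_finetune])
```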
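
Similarly, the Experiment Setup row mentions fine-tuning the best simulated-data model on the raw training split and, for comparison, training it with mean square loss. The sketch below assumes a placeholder feedforward model and generic hyperparameters (Adam optimizer, learning rate, epoch count); the paper's encoder-decoder with recurrent skip connection and its proposed loss function are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the best architecture selected on the simulated
# validation set; the actual encoder-decoder with recurrent skip connection
# is not reproduced.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

# Placeholder for the 20% raw-sensor split used for fine-tuning.
finetune_ds = TensorDataset(torch.randn(400, 4), torch.randn(400, 2))
loader = DataLoader(finetune_ds, batch_size=64, shuffle=True)

# Mean square error is the comparison loss named in the setup; the paper's
# proposed loss is not shown here. Optimizer and learning rate are assumptions.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

model.train()
for _ in range(10):  # epoch count is an assumption
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```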