Autoencoder Regularized Network For Driving Style Representation Learning

Authors: Weishan Dong, Ting Yuan, Kai Yang, Changsheng Li, Shilei Zhang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a challenging driver number estimation problem and the driver identification problem show that ARNet can learn a good generalized driving style representation: It significantly outperforms existing methods and alternative architectures by reaching the least estimation error on average (0.68, less than one driver) and the highest identification accuracy (by at least 3% improvement) compared with traditional supervised learning methods.
Researcher Affiliation | Collaboration | Baidu Research; Civil Aviation Management Institute of China; Beijing University of Posts and Telecommunications; University of Electronic Science and Technology of China; IBM Research China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no explicit statement about releasing the source code for the described methodology, and it provides no link to a code repository.
Open Datasets | No | We use a large real yet private dataset in experiments. The dataset is collected by an insurance company, containing over 500,000 GPS trips from over 2,500 drivers.
Dataset Splits | Yes | For each driver, we use 80% trips as training data, and the rest 20% as classification validation data. Training ARNet and CONet stops until the validation accuracy is maximized (at epochs 33 and 116, respectively).
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU model, memory) used to run its experiments; it only says "in training these networks" without naming any hardware.
Software Dependencies | No | The paper mentions the "ADADELTA optimizer [Zeiler, 2012]" and the "scikit-learn implementation of AP [Pedregosa et al., 2011]" but gives no version numbers for these or for any other software used in the experimental setup.
Experiment Setup | Yes | For all the nets, we set 256 hidden units in gru1 and gru2... We use dropout probability 0.5. We set 50 hidden units in fc1... We use λ=1e-5, ADADELTA optimizer [Zeiler, 2012] with learning rate 1.0, ρ=0.95 and ϵ=1e-8, and batch size 2560 in training these networks. (An illustrative sketch of this setup follows the table.)
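The quoted hyperparameters are enough to sketch the training configuration. Below is a minimal, hypothetical reconstruction of an ARNet-style network in tf.keras: the stacked 256-unit GRUs, dropout 0.5, 50-unit fc1 bottleneck, λ=1e-5 reconstruction weight, and ADADELTA settings follow the Experiment Setup row, while the input shape, driver count, and the exact wiring of the autoencoder regularizer are assumptions for illustration, not the authors' released code.

```python
# Hypothetical tf.keras sketch of an ARNet-style network using the quoted
# hyperparameters; input shape, driver count, and the reconstruction wiring
# are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

SEQ_LEN, N_FEATURES = 128, 35        # assumed trip-segment shape (not from the paper)
N_DRIVERS = 50                       # assumed number of driver classes
LAMBDA = 1e-5                        # autoencoder regularization weight (quoted)

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
g1 = layers.GRU(256, return_sequences=True, name="gru1")(inputs)
g2 = layers.GRU(256, name="gru2")(g1)
d = layers.Dropout(0.5)(g2)
z = layers.Dense(50, name="fc1")(d)                  # 50-d driving-style embedding
recon = layers.Dense(256, name="recon")(z)           # decode embedding back toward gru2 output
probs = layers.Dense(N_DRIVERS, activation="softmax", name="driver")(z)

model = models.Model(inputs, probs)
# Add the lambda-weighted reconstruction error to the classification loss.
model.add_loss(LAMBDA * tf.reduce_mean(tf.square(recon - g2)))
model.compile(
    optimizer=optimizers.Adadelta(learning_rate=1.0, rho=0.95, epsilon=1e-8),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# Training would use batch_size=2560 with a per-driver 80%/20% trip split,
# stopping when validation accuracy peaks (epoch 33 for ARNet in the paper).
```

Wiring the reconstruction term through `add_loss` keeps it as a λ-weighted regularizer on the classification objective rather than a second supervised output, which matches how the quoted setup describes it; other wirings are possible. For the driver number estimation task, the paper points to the scikit-learn implementation of Affinity Propagation; a minimal sketch of estimating the driver count from learned trip embeddings (the `embeddings` array below is purely illustrative) could look like this:

```python
# Minimal sketch: estimate the number of drivers by clustering learned
# trip embeddings with scikit-learn's Affinity Propagation, as cited in the paper.
import numpy as np
from sklearn.cluster import AffinityPropagation

embeddings = np.random.rand(1000, 50)   # placeholder for fc1 driving-style vectors
ap = AffinityPropagation().fit(embeddings)
estimated_drivers = len(ap.cluster_centers_indices_)
print("Estimated number of drivers:", estimated_drivers)
```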