Locate-Then-Detect: Real-time Web Attack Detection via Attention-based Deep Neural Networks
Authors: Tianlong Liu, Yu Qi, Liang Shi, Jianan Yan
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are carried out on both benchmarks and real Web traffic. |
| Researcher Affiliation | Collaboration | Tianlong Liu (1), Yu Qi (2), Liang Shi (3), and Jianan Yan (1); (1) Alibaba Cloud Intelligence Business Group, Alibaba Group, China; (2) College of Computer Science and Technology, Zhejiang University, China; (3) AI&Data Department, Dingxiang Tech. Inc., China |
| Pseudocode | No | The paper describes the system architecture and process flow (e.g., PLN, PCN components), but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Two datasets are used: the CSIC 2010 benchmark dataset and one real-world Web traffic dataset. The CSIC 2010 dataset contains generated traffic targeted at an e-Commerce Web application [Nguyen et al., 2011]. |
| Dataset Splits | No | The paper mentions constructing a training dataset and a testing dataset, but it does not provide explicit details about a separate validation dataset split (e.g., percentages, sample counts, or methodology for creating it). |
| Hardware Specification | Yes | MacBook Pro with Intel Core i7, 16GB RAM; 64× Intel(R) Xeon(R) CPU @ 2.50GHz, 2× Nvidia Tesla P100 GPUs, 32GB RAM |
| Software Dependencies | No | The paper mentions using specific algorithms and models like Adam optimizer, Xception model, and a text classification neural network by Kim [2014], but it does not provide version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Both PLN and PCN are optimized using the Adam optimizer [Kingma and Ba, 2014]. For the PLN, the learning rate and weight decay are 1e-6 and 0.99, respectively. For the PCN, the parameters are set to 1e-5 and 0.995, respectively. The number of candidate regions is set to 3. For the PLN, an input sequence is projected into a tensor Ts with shape 1000 × 8 × 1. The size of the feature map is 32 × 8, so there are 32 × 75 = 2400 anchors in total. We also use a Non-Maximum Suppression method to reduce the redundant anchors, and the IoS (Intersection over Sequence) threshold is set to 0.7. Moreover, in the training phase, we cap the ratio of negative anchors to positive anchors at 3:1 whenever Nneg : Npos > 3:1. For the PCN, the maximum length is set to 512 and the embedding size is set to 64. In the detection phase, we select the top-3 ranked proposals per sequence from the PLN and pass them to the PCN for classification. |
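
The experiment-setup row condenses several numeric details (2400 anchors over a 1000-character sequence, NMS with an IoS threshold of 0.7, top-3 proposal selection). The sketch below is an illustration only of how a 1-D Non-Maximum Suppression step with such a threshold could be implemented; the `ios` denominator (length of the shorter segment), the function names, and the toy segments are assumptions, not taken from the paper.

```python
import numpy as np

def ios(seg_a, seg_b):
    """Intersection over Sequence for two 1-D segments (start, end).

    Assumption: overlap length divided by the length of the shorter
    segment; the paper names the metric but not the exact denominator.
    """
    overlap = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    shorter = min(seg_a[1] - seg_a[0], seg_b[1] - seg_b[0])
    return overlap / max(shorter, 1e-8)

def nms_1d(segments, scores, ios_threshold=0.7, top_k=3):
    """Greedy NMS over 1-D anchor segments.

    Visits anchors in descending score order, drops any anchor whose
    IoS with an already-kept segment exceeds the threshold (0.7 in the
    paper), and returns at most top_k survivors (top-3 per sequence).
    """
    keep = []
    for idx in np.argsort(scores)[::-1]:
        if all(ios(segments[idx], segments[k]) <= ios_threshold for k in keep):
            keep.append(idx)
        if len(keep) == top_k:
            break
    return keep

# Toy usage: three hypothetical anchors over a 1000-character payload.
segments = [(100, 180), (110, 190), (600, 680)]
scores = [0.95, 0.90, 0.85]
print(nms_1d(segments, scores))  # [0, 2]: the heavily overlapping anchor is suppressed
```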