Deep Open Intent Classification with Adaptive Decision Boundary

Authors: Hanlei Zhang, Hua Xu, Ting-En Lin

AAAI 2021, pp. 14374-14382

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three benchmark datasets show that our method yields significant improvements compared with the state-of-the-art methods.
Researcher Affiliation | Collaboration | Hanlei Zhang (1,2), Hua Xu (1,2), Ting-En Lin (1,2). (1) State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; (2) Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China. Emails: zhang-hl20@mails.tsinghua.edu.cn, xuhua@tsinghua.edu.cn, ting-en.lte@alibaba-inc.com
Pseudocode | No | The paper describes the proposed approach using textual descriptions and mathematical equations, but it does not include any explicit pseudocode or algorithm blocks. (A hedged reconstruction of the boundary-learning step appears below the table.)
Open Source Code | Yes | The code is released at https://github.com/thuiar/Adaptive-Decision-Boundary.
Open Datasets | Yes | BANKING: a fine-grained dataset in the banking domain (Casanueva et al. 2020). OOS: a dataset for intent classification and out-of-scope prediction (Larson et al. 2019). Stack Overflow: a dataset published on Kaggle.com; the processed version (Xu et al. 2015) is used.
Dataset Splits | Yes | All datasets are divided into training, validation, and test sets:
  Dataset | Classes | #Training | #Validation | #Test | Vocabulary Size | Length (max / mean)
  BANKING | 77 | 9,003 | 1,000 | 3,080 | 5,028 | 79 / 11.91
  OOS | 150 | 15,000 | 3,000 | 5,700 | 8,376 | 28 / 8.31
  Stack Overflow | 20 | 12,000 | 2,000 | 6,000 | 17,182 | 41 / 9.18
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU models, RAM) used to run the experiments; it only mentions using BERT and PyTorch.
Software Dependencies | No | The paper mentions using "the BERT model (bert-uncased, with 12-layer transformer) implemented in PyTorch (Wolf et al. 2019)". While PyTorch and the Transformers library (implied by the Wolf et al. 2019 citation) are identifiable dependencies, their specific version numbers are not provided.
Experiment Setup | Yes | The training batch size is 128, and the learning rate is 2e-5. For the boundary loss L_b, we employ Adam (Kingma and Ba 2014) to optimize the boundary parameters at a learning rate of 0.05. (See the configuration sketch below the table.)
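
Since the paper provides no pseudocode, the following is a minimal PyTorch sketch of the adaptive-decision-boundary step, reconstructed from the paper's textual description. The class and parameter names (BoundaryLoss, w, centroids) and the initialization are assumptions of this sketch, not the authors' released implementation; the linked repository has the definitive code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryLoss(nn.Module):
    """Hedged reconstruction of the paper's boundary loss L_b.

    Each known class k has a learnable scalar w_k whose softplus gives a
    positive radius Delta_k around the class centroid c_k. For a feature
    z_i with label y_i and distance d_i = ||z_i - c_{y_i}||_2, the loss is
        delta_i * (d_i - Delta_{y_i}) + (1 - delta_i) * (Delta_{y_i} - d_i),
    where delta_i = 1 if d_i > Delta_{y_i} and 0 otherwise, averaged over
    the batch: boundaries expand to cover known samples that fall outside
    them and contract around samples that fall comfortably inside.
    """

    def __init__(self, num_labels: int):
        super().__init__()
        # One raw boundary parameter per known class; zero init is a guess.
        self.w = nn.Parameter(torch.zeros(num_labels))

    def forward(self, features, centroids, labels):
        radius = F.softplus(self.w)[labels]                      # (B,)
        dist = torch.norm(features - centroids[labels], dim=1)   # (B,)
        outside = (dist > radius).float()
        loss = outside * (dist - radius) + (1.0 - outside) * (radius - dist)
        return loss.mean()
```

At inference time, the paper's decision rule assigns a test sample to its nearest centroid's class if the distance falls within that class's learned radius, and to the open (unknown) class otherwise.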
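And a sketch wiring up the quoted training configuration. The backbone, batch size, and the two learning rates come from the paper; loading BERT through HuggingFace Transformers' BertModel and using AdamW for the encoder are assumptions, since the paper reports neither library versions nor the encoder optimizer explicitly.

```python
import torch
from transformers import BertModel

# 12-layer uncased BERT backbone named in the paper; loading it via
# HuggingFace Transformers is an assumption (no versions are reported).
bert = BertModel.from_pretrained("bert-base-uncased")
boundary_loss = BoundaryLoss(num_labels=77)  # e.g. the 77 BANKING intents

batch_size = 128  # quoted in the paper

# Two optimizers: lr 2e-5 for the encoder (AdamW is an assumed choice)
# and, as the paper states, Adam at lr 0.05 for the boundary parameters.
encoder_optimizer = torch.optim.AdamW(bert.parameters(), lr=2e-5)
boundary_optimizer = torch.optim.Adam(boundary_loss.parameters(), lr=0.05)
```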