Learning Class-Transductive Intent Representations for Zero-shot Intent Detection
Authors: Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, Weiping Wang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two real-world datasets show that CTIR brings considerable improvement to the baseline systems. |
| Researcher Affiliation | Academia | 1 Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; 2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code, datasets and Appendix are available at https://github.com/PhoebusSi/CTIR |
| Open Datasets | Yes | We conduct experiments on two benchmarks for intent detection. SNIPS [Coucke et al., 2018] is a corpus to evaluate the performance of voice assistants, which contains 5 seen intents and 2 unseen intents. CLINC [Larson et al., 2019] includes out-of-scope queries and 22,500 in-scope queries covering intent classes from 10 domains. We use the in-scope data to build our dataset with 50 seen intents and 10 unseen intents. For more details of the datasets, please refer to Appendix A1. The code, datasets and Appendix are available at https://github.com/PhoebusSi/CTIR |
| Dataset Splits | No | The paper states: "We use the test set for hyperparameter(Appendix B1) tuning, which is the same with most ZSID work." This indicates that no separate validation split was used; hyperparameter tuning was performed directly on the test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions general software components like "BERT" but does not provide specific version numbers for any software dependencies (e.g., Python, TensorFlow, PyTorch, or other libraries). |
| Experiment Setup | No | The paper mentions hyperparameters such as α, λ, and the margins (m⁺, m⁻), but their specific numerical values are not provided in the main text; it instead defers to appendices for details (e.g., "We use the test set for hyperparameter(Appendix B1) tuning", "please see Appendix D1 for details"). |