Cross-Domain Slot Filling as Machine Reading Comprehension

Authors: Mengshi Yu, Jian Liu, Yufeng Chen, Jinan Xu, Yujie Zhang

Venue: IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on SNIPS and ATIS datasets show that our approach consistently outperforms the existing state-of-the-art methods by a large margin."
Researcher Affiliation | Academia | "Mengshi Yu, Jian Liu, Yufeng Chen, Jinan Xu and Yujie Zhang, Beijing Jiaotong University, Beijing, China. {19120432, jianliu, chenyf, jaxu, yjzhang}@bjtu.edu.cn"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and data available at https://github.com/mengshiY/RCSF"
Open Datasets | Yes | "Code and data available at https://github.com/mengshiY/RCSF." "We evaluate our framework on SNIPS [Coucke et al., 2018], a public spoken language understanding dataset... We use another commonly used dataset ATIS [Hemphill et al., 1990] as target domain to test our model."
Dataset Splits | Yes | "We fine-tune all hyper-parameters on the validation set and use the best checkpoint to test our model. To simulate the cross-domain scenarios, we follow [Liu et al., 2020b] to split the dataset, that is, we choose one domain as the target domain and the other six domains as the source domains each time." (This leave-one-domain-out protocol is sketched after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for its experiments.
Software Dependencies | No | The paper mentions "BertForQuestionAnswering implemented by Hugging Face as our base model, and load the pre-trained weights provided by deepset" and "Adam optimizer", but does not give version numbers for these software libraries. (A loading sketch follows the table.)
Experiment Setup | Yes | "Adam optimizer [Kingma and Ba, 2014] is applied to optimize all parameters with a learning rate 1e-5. We set the batch size to 64 and the maximum sequence length to 128. The patience of early stop is set to 5." (A configuration sketch follows the table.)
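
As a concrete illustration of the leave-one-domain-out protocol quoted in the Dataset Splits row, here is a minimal Python sketch. The seven SNIPS domain names are standard, but the data format (a list of dicts with a "domain" key) and the function name are illustrative assumptions, not details from the paper.

```python
# Leave-one-domain-out splits following [Liu et al., 2020b]: each SNIPS
# domain serves as the target domain once, with the remaining six
# domains used as source-domain training data.
SNIPS_DOMAINS = [
    "AddToPlaylist", "BookRestaurant", "GetWeather", "PlayMusic",
    "RateBook", "SearchCreativeWork", "SearchScreeningEvent",
]

def cross_domain_splits(examples):
    """Yield (target_domain, source_examples, target_examples) tuples.

    `examples` is assumed to be a list of dicts with a "domain" key;
    this data format is an illustrative assumption.
    """
    for target in SNIPS_DOMAINS:
        source = [ex for ex in examples if ex["domain"] != target]
        held_out = [ex for ex in examples if ex["domain"] == target]
        yield target, source, held_out
```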
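The Software Dependencies row names BertForQuestionAnswering from Hugging Face Transformers with pre-trained weights from deepset, but gives neither library versions nor a checkpoint name. A minimal loading sketch under those constraints follows; the specific deepset checkpoint is an assumption, since the paper does not say which one was used.

```python
from transformers import BertForQuestionAnswering, BertTokenizerFast

# Assumed checkpoint: deepset publishes several SQuAD-fine-tuned BERT
# weights on the Hugging Face hub; the paper does not name the exact one.
MODEL_NAME = "deepset/bert-base-cased-squad2"

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForQuestionAnswering.from_pretrained(MODEL_NAME)

# Slot filling as MRC: the slot type is phrased as a question and the
# utterance serves as the passage; the model predicts an answer span.
inputs = tokenizer(
    "What is the playlist?",            # question derived from a slot type
    "add this song to my summer jams",  # user utterance as the passage
    max_length=128,                     # maximum sequence length from the paper
    truncation=True,
    return_tensors="pt",
)
outputs = model(**inputs)
start = int(outputs.start_logits.argmax(-1))
end = int(outputs.end_logits.argmax(-1))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
```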
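Finally, the Experiment Setup row fixes the optimizer and core hyper-parameters. The sketch below wires them into a generic PyTorch early-stopping loop; the loop, the epoch cap, and the two helper functions are hypothetical, as the paper reports only the hyper-parameter values. It reuses the `model` from the loading sketch above.

```python
import torch

# Hyper-parameters quoted from the paper.
LEARNING_RATE = 1e-5
BATCH_SIZE = 64
PATIENCE = 5

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

best_dev_score, bad_epochs = float("-inf"), 0
for epoch in range(100):  # the epoch cap is an assumption, not from the paper
    train_one_epoch(model, optimizer, batch_size=BATCH_SIZE)  # hypothetical helper
    dev_score = evaluate_on_validation(model)                 # hypothetical helper
    if dev_score > best_dev_score:
        best_dev_score, bad_epochs = dev_score, 0
        # The paper states the best validation checkpoint is used for testing.
        torch.save(model.state_dict(), "best_checkpoint.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= PATIENCE:  # early stop after 5 epochs without improvement
            break
```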