Deep Semantic Compliance Advisor for Unstructured Document Compliance Checking

Authors: Honglei Guo, Bang An, Zhili Guo, Zhong Su

Venue: IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on both banking data and available public data in this section. In order to verify the wide applications of our GNN-based sentence encoder, we evaluate our method and the state-of-the-art methods on the available public data set. Experimental results show that our method outperforms other syntactic sentence encoders. Meanwhile, we also evaluate our clause relatedness detection method on a real banking data set from our customer. All the experimental results show that our method achieves better performances in various applications."
Researcher Affiliation | Industry | "Honglei Guo, Bang An, Zhili Guo and Zhong Su, IBM Research China, {guohl, abangbj, guozhili, suzhong}@cn.ibm.com"
Pseudocode | No | The paper describes model architectures and mathematical equations but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no concrete access to source code for the described method: no repository link, no explicit code-release statement, and no mention of code in supplementary materials.
Open Datasets | Yes | "In order to compare our GNN-based deep semantic comparison model with existing syntactic sentence encoders and GNN models, we conduct experiments on the public Stanford Natural Language Inference (SNLI) dataset (https://nlp.stanford.edu/projects/snli/)." A hypothetical loading sketch follows the table.
Dataset Splits | Yes | "Models are evaluated on the validation data after each epoch and early stop with 3 epoch patience." See the early-stopping sketch after the table.
Hardware Specification | No | The paper does not report the hardware used for its experiments (no GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper mentions tools and optimizers such as GloVe, Stanford NLP, Adam, and Adadelta, but it does not pin software library versions (e.g., PyTorch 1.x, TensorFlow 2.x) or a programming language version (e.g., Python 3.x).
Experiment Setup | Yes | "Dropout with rate 0.1 is applied after each MLP layer (except the last layer). β is set to 1e-6. Learning rate is initialized as 5e-4 and is decreased by the factor of 0.2 if the performance does not improve after an epoch. We use Adam [Kingma and Ba, 2015] as the optimizer and set batch size to 64." See the training-configuration sketch after the table.
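
The "Open Datasets" row points only to the Stanford SNLI page. As a hypothetical way to obtain the standard splits, the sketch below uses the Hugging Face `datasets` package; this tooling choice is an assumption, not the authors' pipeline.

```python
# Hypothetical loading sketch for the SNLI data referenced above.
# The paper only links https://nlp.stanford.edu/projects/snli/;
# using the Hugging Face `datasets` package is an assumption.
from datasets import load_dataset

snli = load_dataset("snli")  # provides train / validation / test splits
# SNLI marks pairs without a gold label with -1; these are usually dropped.
snli = snli.filter(lambda example: example["label"] != -1)
print({split: len(snli[split]) for split in snli})
```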
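
The "Dataset Splits" row quotes an evaluation protocol: validate after every epoch and stop once three consecutive epochs bring no improvement. A minimal sketch of that loop follows, assuming a PyTorch-style setup; `model`, `train_one_epoch`, `evaluate`, and the data loaders are hypothetical placeholders, since the paper does not show its training code.

```python
# Early stopping with 3-epoch patience, as quoted in the Dataset Splits row.
best_acc, bad_epochs, patience = 0.0, 0, 3

for epoch in range(100):                   # upper bound on epochs (assumed)
    train_one_epoch(model, train_loader)   # hypothetical helper
    val_acc = evaluate(model, val_loader)  # validate after each epoch
    if val_acc > best_acc:
        best_acc, bad_epochs = val_acc, 0  # improvement: reset patience
    else:
        bad_epochs += 1
        if bad_epochs >= patience:         # 3 epochs with no gain
            break
```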
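
The "Experiment Setup" row lists concrete hyperparameters. The sketch below shows one way to wire them up in PyTorch; the framework, the layer widths, and the plateau-scheduler mapping are all assumptions, since the paper states only the values (dropout 0.1, β = 1e-6, initial learning rate 5e-4, decay factor 0.2, Adam, batch size 64). β is kept as a named constant because its exact role is defined in the paper, not restated in this excerpt.

```python
import torch
import torch.nn as nn

# Layer widths are illustrative, not taken from the paper.
mlp = nn.Sequential(
    nn.Linear(300, 300), nn.ReLU(), nn.Dropout(p=0.1),  # dropout after each MLP layer
    nn.Linear(300, 300), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(300, 3),                                   # no dropout after the last layer
)

beta = 1e-6  # "β is set to 1e-6"; its role is specified in the paper itself
optimizer = torch.optim.Adam(mlp.parameters(), lr=5e-4)

# Multiply the learning rate by 0.2 whenever validation performance does
# not improve for an epoch (patience=0 reduces after a single bad epoch).
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.2, patience=0
)
batch_size = 64
```

Inside the training loop one would call `scheduler.step(val_acc)` after each validation pass so the plateau check sees the latest metric.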