Self-Supervised Logic Induction for Explainable Fuzzy Temporal Commonsense Reasoning

Authors: Bibo Cai, Xiao Ding, Zhouhao Sun, Bing Qin, Ting Liu, Baojun Wang, Lifeng Shang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Extensive experimental results on TIMEDIAL, a challenging dataset for temporal reasoning over dialog, show that our method, Logic Induction Enhanced Contextualized TEmporal Reasoning (LECTER), can yield great improvements over the traditional language model for temporal reasoning.'
Researcher Affiliation | Collaboration | (1) Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China; (2) Huawei Noah's Ark Lab
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper states 'The implementation is based on Pytorch' but does not provide a link or an explicit statement about releasing the source code for LECTER. It links only to a dataset: https://github.com/qywu/DialogCorpus.
Open Datasets | Yes | 'We evaluate the performance of our proposed LECTER model on the challenge dataset TIMEDIAL (Qin et al. 2021).' and 'We leverage other large-scale publicly available corpus containing over 700MB of text to construct our self-supervised training dataset' (https://github.com/qywu/DialogCorpus).
Dataset Splits | Yes | 'After preprocessing, we obtain 97k/24k instances for training/validation.'
Hardware Specification | Yes | 'The implementation is based on Pytorch and trained on a Tesla V100 GPU with Adam optimizer with 10 epochs.'
Software Dependencies | No | The implementation is based on PyTorch, but no specific version numbers for PyTorch or other software dependencies are provided.
Experiment Setup | Yes | 'During the training, the batch size is set to 32. The combination weight λ in Eq. 7 is set to 1. We search the learning rate with grid search in lr ∈ {5e-6, 1e-5, 5e-5} for the baseline and LECTER. The implementation is based on Pytorch and trained on a Tesla V100 GPU with Adam optimizer with 10 epochs.'
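To make the reported configuration concrete, here is a minimal, runnable PyTorch sketch that wires together the settings quoted above: batch size 32, Adam, 10 epochs, combination weight λ = 1, a learning-rate grid of {5e-6, 1e-5, 5e-5}, and a train/validation split in the reported 97k/24k proportion (scaled down). The linear model, random data, and zero-valued logic-loss term are placeholders standing in for the LECTER model and its logic-induction objective, which the paper does not release; only the hyperparameter values come from the paper.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset, random_split

# Settings quoted in the paper: batch size 32, Adam, 10 epochs,
# lambda = 1, learning rate grid-searched over {5e-6, 1e-5, 5e-5}.
BATCH_SIZE, EPOCHS, LAMBDA = 32, 10, 1.0
LR_GRID = [5e-6, 1e-5, 5e-5]

# Stand-in data in place of the TIMEDIAL preprocessing pipeline;
# the split keeps the reported 97k/24k train/validation proportion.
data = TensorDataset(torch.randn(1210, 16), torch.randint(0, 2, (1210,)))
train_set, val_set = random_split(data, [970, 240])
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
val_loader = DataLoader(val_set, batch_size=BATCH_SIZE)

def run(lr: float) -> float:
    """Train a toy classifier at one grid point; return validation accuracy."""
    model = nn.Linear(16, 2)                # placeholder for the LECTER model
    optimizer = Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(EPOCHS):
        for x, y in train_loader:
            optimizer.zero_grad()
            task_loss = ce(model(x), y)     # base temporal-reasoning loss
            logic_loss = torch.tensor(0.0)  # stand-in for the logic-induction term
            # Eq. 7-style combination with weight lambda = 1
            (task_loss + LAMBDA * logic_loss).backward()
            optimizer.step()
    with torch.no_grad():
        correct = sum((model(x).argmax(dim=1) == y).sum().item()
                      for x, y in val_loader)
    return correct / len(val_set)

best_lr = max(LR_GRID, key=run)  # grid search over the reported learning rates
print(f"best learning rate on validation: {best_lr}")
```

At LECTER's actual scale this loop would run on the reported Tesla V100; the grid search simply trains one model per learning rate and keeps the one with the best validation score, which matches the quoted setup.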